The setting of this manga has really pleased me in a way I believe most people don't actually care about: it combines magic and science in an interesting way. It may not be novel, but it is done remarkably well. Now comes the long post.
There seems to be a trend, especially since the breakthrough of computers, to think that one day robots (or even programs) will think and behave just like us humans. The belief that objects artificially brought to "life" are, or become, like humans is much older than computers, but apparently the quick and astonishing development of technology in this field makes people believe it will actually happen. It is no longer treated as fiction, like magic is, but as some sort of realistic foreshadowing, since it's science-based. This trend has become so strong and so deeply rooted that even people from the computer science research area, especially those from AI, actually believe that one day there will be programs that are minds and robots that are similar to us in essence. Even if the programs of today don't behave like anything human, "tomorrow", with faster computers and better algorithms, we will figure out the human mind and make things like us.
However, there is a significant, albeit small, portion of computer scientists who are certain that such a thing can never happen, even with infinite processing power and memory and ever more complex theories of algorithms. It is related to the very nature of computers (digital computers, specifically). I am on this side of the debate. Trying to convince people of this view is usually a very delicate and difficult process, with long arguments and examples, and I'm not that good at making people buy what I say, but I'll give it a try.
But before that, I'd like to make clear that I'm not against fiction based on this premise. It just bugs me, and recently kind of puts me off, but I can still like Wall-E, for example. I guess it's all about a good, fun, interesting execution, and not hammering away at this whole "can robots have feelings?" idea, like Metropolis did.
First, I'll tackle the idea of robots thinking, and later their feelings. That first part may rule out the possibility of them being human-like, if I can convince you.
Thinking is an activity greatly related to ideas. Ideas don't actually exist physically, yet we use them so naturally that we never even ask ourselves what in the world an idea is. Here people usually come up with silly examples, so take doors, for instance. A door is something that separates two environments, be it rooms, outside from inside, private from public, etc. The physical object, however, varies wildly in shape, size, how it opens and closes, what it's made of, names (like trapdoors or gates) and end use. However, even if you have never seen a certain kind of door, a sliding door for instance, the moment you see someone opening one, you'll have that insight: "oh, it's a door". Therefore, ideas must be universal. That's quite a stretch, though a sensible one, and not a novel one either: Plato's world of ideas is founded on it. Just like a straight line or a perfect circle, things that simply don't exist physically, the idea of a door doesn't belong to the material world either, yet it has been implemented in it countless times by humanity. Think about it: people who never met have come to make doors. Even when we imagine aliens, they have doors! It is even more impressive when it comes to complex things. Two scientists in different countries, with different backgrounds, may arrive at basically the same theory through different means and motivations. Something like what happened with Calculus.
Using these conclusions about ideas, we can get to semantics, or understanding. Let's start with texts. A text is just a sequence of words, which in turn are a bunch of symbols packed together. Not knowing the language in which a text is written renders it useless: it really is just symbols, and if one symbol is exchanged for another, it won't make any difference to you. However, when the language is known, the meaning can be extracted, and every symbol becomes relevant. If you know very little Japanese, try reading a Japanese newspaper. You'd rather be doing extra-hard sudoku instead.
Meanings are similar to ideas in that they are not physical. But, while ideas can be physically implemented, meanings cannot. Write a text with something in mind, and many people may understand a lot of other things from it. Like this huge-ass text. So, though it may be possible to put something meaningful into a text, what exactly is understood is entirely up to the reader. Rather than universal, semantics are personal, highly dependent on each person's background. Seeing how thinking is naturally related to understanding, it follows that thinking must involve semantics.
Therefore, if robots/programs cannot have semantics, thinking is completely out of their league. And can they have it? NO. Not in a million years and more. Why? Because computers deal with their input in a very simple manner. They are symbol manipulators, thus restricted to symbols and rules, i.e. syntax, never ever reaching the meaning of those symbols.
This may have come out of nowhere, especially if you are not from this area of knowledge. When programming, it is easy to see computers as sophisticated calculators. They basically get some numbers, do some calculation (think of it as a sum, a multiplication or the sort), put the partial results somewhere, do another calculation, and repeat this until they give another number as an answer. After all, everything in computers is represented in binary numbers (like 1001011101) and every operation is done on those binary numbers. They are actually much simpler than that. A computer doesn't see binary numbers as numbers, nor does it calculate anything. It merely manipulates those two symbols, 0 and 1, by following given rules, which may or may not be the implementation of some calculation. Of course, for it to be useful, it had better be a calculation.
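To make that concrete, here's a minimal sketch (the function and encoding are my own, purely for illustration) of pure symbol manipulation: a routine that "adds one" to a binary string using nothing but rewrite rules on the characters '0' and '1'. Nowhere does the logic treat them as numbers.

```python
def increment(bits: str) -> str:
    """Rewrite a string of '0'/'1' symbols by blind rules. The rules happen
    to implement binary increment, but the code never does arithmetic."""
    result = list(bits)
    i = len(result) - 1
    while i >= 0 and result[i] == "1":
        result[i] = "0"            # rule: a trailing '1' becomes '0'
        i -= 1
    if i >= 0:
        result[i] = "1"            # rule: the first '0' found becomes '1'
    else:
        result.insert(0, "1")      # rule: if all were '1', prepend a '1'
    return "".join(result)

print(increment("1001011101"))  # -> 1001011110 (we read that as 605 -> 606)
```

The "calculation" only exists in our heads, the ones reading the strings as numbers; the program just shuffles characters.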
This helps explain why automatic translators suck so much. They are given text in one language, compare it with their huge database of how it should be translated, and give an answer in another language. It's easy to see that the translator doesn't understand what the hell it may be saying, but, with enough luck, its database may have just the answer you need, given some statistical analysis. These programs are getting better with time because better statistical models are developed and even bigger databases are used.
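A toy sketch of what such a lookup looks like (the phrase table and translations here are invented for illustration): the program matches stored strings and emits stored strings, and nothing in it knows what either side means.

```python
# Hypothetical lookup-based "translator": it only pattern-matches memorized
# phrases. Feed it anything not in the table and it is helpless.
PHRASE_TABLE = {
    "good morning": "bom dia",
    "thank you": "obrigado",
}

def translate(sentence: str) -> str:
    # No grammar, no meaning: a plain table lookup on the raw symbols.
    # Unknown input is passed through untouched.
    return PHRASE_TABLE.get(sentence.lower().strip(), sentence)

print(translate("Good morning"))  # -> bom dia
print(translate("how are you"))   # -> how are you (not in the table)
```

A real system replaces the hand-written table with statistics over billions of sentence pairs, but the principle is the same: symbols in, symbols out.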
If you're still not convinced, you may come up with an argument like "Why, then, are Google, Siri and Amazon, for instance, so good at predicting what we want?". Part of the answer was given above. Google counts links between pages to see if they are related. It also counts which link you click when you search for something. Couple that with many other ideas they have come up with over the last decades, like privileging things near your estimated location, and the biggest database in the world, and you can make quite a good guess at what someone is searching for. The other part is a rather tragic trait of our generation: we're getting more and more computer-like. Since we're actually doing what computers tell us to do, like clicking on automatically recommended links, it becomes a walk in the park to guess what we are thinking.
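The click-counting part can be sketched in a few lines (the log data below is made up): rank results for a query purely by how often people clicked them before. It is counting, not understanding.

```python
from collections import Counter

# Made-up click log: (query, clicked result) pairs.
click_log = [
    ("machine doll", "wiki/Unbreakable_Machine-Doll"),
    ("machine doll", "wiki/Unbreakable_Machine-Doll"),
    ("machine doll", "forum/thread-123"),
    ("calculus", "wiki/Calculus"),
]

def rank(query: str) -> list[str]:
    # Most-clicked first; the ranker never looks at what the pages say.
    counts = Counter(url for q, url in click_log if q == query)
    return [url for url, _ in counts.most_common()]

print(rank("machine doll"))
# -> ['wiki/Unbreakable_Machine-Doll', 'forum/thread-123']
```

The more predictably we click, the better this kind of tally looks, which is exactly the "we're getting computer-like" point above.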
If you bought all this "computers can't think" idea, realizing they can't have feelings is much easier. Given that they just manipulate symbols strictly following rules (the programs), they are all the same and do the same. They are universally designed, so to speak. Make one computer, and another of the same model will behave exactly the same way. However, feelings, just like meanings, are personal. Cold, pain, sadness, joy or anger are all personal, dependent on our backgrounds, on ourselves.
Noticing how little I wrote on the feelings part made me see how limited my own capacity to argue on that matter is. In case you're still thinking this is all BS and I wasted your precious time, I can recommend some material on this debate, for both sides actually, and you can choose for yourself. Since I ended up writing this for hours while sleepy, I can't recall any names right now, so I'll only post them if requested.
So what? Why all this? What does it have to do with this series? To back up the very first paragraph of this ginormous text: the setting pleases me. It throws away all this BS of computers thinking by introducing magic into the mixture. Magic that could be made of the same essence as life itself. That alone makes it much more powerful than infinite processing power. If Yaya and the other automatons think, feel and have personalities, it is because their magic circuits contain something much more than any AI algorithm, database or data analysis could ever have.
tl;dr people think robots think, which they don't and won't. This setting rocks.
Edit: so many typos...I'm ashamed really
Edited by kkio, 21 April 2013 - 07:38 AM.