
About robots and feelings in fiction


4 replies to this topic

#1
kkio

    Potato Sprout

  • Members
  • 4 posts
  • Location: Away for the time being.

The setting of this manga has really pleased me in a way I believe most people don't actually care about: it combines magic and science in an interesting way. It may not be novel, but it is done remarkably well. Now comes the long post.

 

There seems to be a trend, especially since the breakthrough of computers, to think that one day robots (or even programs) will think and behave just like us humans. The belief that objects artificially brought to "life" are, or become, like humans is much older than computers, but apparently the quick and astonishing development of technology in this field makes people believe it will actually happen. It is no longer treated as fiction, like magic is, but as some sort of realistic foreshadowing, since it's science-based. This trend has become so strong and so deeply rooted that even people from computer science research, especially those in AI, actually believe that one day there will be programs that are minds and robots that are similar to us in essence. Even if the programs of today don't behave like anything human, "tomorrow", with faster computers and better algorithms, we will figure out the human mind and make things like us.

 

However, there is a significant, albeit small, portion of computer scientists who are certain that no such thing can ever happen, even with infinite processing power and memory and sophisticated theories of algorithms. It is related to the very nature of computers (digital computers specifically). I am on this side of the debate. Trying to convince people of this view is usually a very delicate and difficult process, with long arguments and examples, and I'm not that good at making people buy what I say, but I'll give it a try.

 

But before that, I'd like to make clear that I'm not against fiction based on this premise. It just bugs me, and recently kind of puts me off, but I could still like Wall-E, for example. I guess it's all about a good, fun, interesting execution, and not hammering on this whole "can robots have feelings?" idea the way Metropolis did.

 

First, I'll tackle the idea of robots thinking, and later the feelings. That first part may rule out the possibility of them being human-like, if I can convince you.

 

Thinking is an activity greatly related to ideas. Ideas don't actually exist physically, yet we use them so naturally that we never even ask ourselves what in the world an idea is. Here people usually come up with silly examples. Take doors, for instance. A door is something that separates two environments, be it rooms, outside from inside, private from public, etc. The physical object, however, varies wildly in shape, size, how it opens and closes, what it's made of, name (trapdoors, gates) and end use. Yet even if you have never seen a certain kind of door, a sliding door for instance, the moment you see someone open one you'll have that insight: "oh, it's a door". Therefore, ideas must be universal. That's quite a stretch, though a sensible one, and not a novel one either: Plato's world of ideas is founded on it. Just like a straight line or a perfect circle, things that simply don't exist physically, the idea of a door doesn't belong to the material world either, yet humanity has implemented it countless times. Think about it: people who never met have all come to make doors. Even when we imagine aliens, they have doors! It is even more impressive with complex things. Two scientists in different countries, with different backgrounds, may arrive at basically the same theory through different means and motivations. Something like what happened with calculus.

 

Using these conclusions about ideas, we can get to semantics, or understanding. Let's start with texts. A text is just a sequence of words, which in turn are a bunch of symbols packed together. Not knowing the language in which a text is written renders it useless: it really is just symbols, and if one symbol is consistently exchanged for another, it makes no difference. However, when the language is known, its meaning can be extracted, and every symbol becomes relevant. If you know very little Japanese, try reading a Japanese newspaper. You'd rather be doing extra-hard sudoku instead.
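To make the "just symbols" point concrete, here's a tiny toy sketch of my own (in Python, with a made-up substitution key): swap every letter for another one consistently and the structure a machine can see is untouched, but the meaning is gone for anyone without the key.

```python
import string

# hypothetical one-to-one substitution key: shift every lowercase letter by 5
KEY = {c: string.ascii_lowercase[(i + 5) % 26]
       for i, c in enumerate(string.ascii_lowercase)}

def substitute(text: str) -> str:
    """Replace each lowercase letter by its substitute; leave everything else alone."""
    return "".join(KEY.get(c, c) for c in text)

original = "a door separates two environments"
scrambled = substitute(original)
print(scrambled)                        # same length, same word boundaries, no meaning
print(len(scrambled) == len(original))  # True
```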

 

Meanings are similar to ideas in that they are not physical. But while ideas can be physically implemented, meanings cannot. Write a text with something in mind, and many people may understand a lot of other things from it. Like this huge-ass text. So, though it may be possible to put something meaningful in a text, what exactly is understood is entirely up to the reader. Rather than universal, semantics are personal, highly dependent on each person's background. And seeing how thinking is naturally related to understanding, it follows that thinking must involve semantics.

 

Therefore, if robots/programs cannot have semantics, thinking is completely out of their league. And can they have it? NO. Not in a million years and more. Why? Because computers deal with their input in a very simple manner. They are symbol manipulators, thus restricted to symbols and rules, i.e. syntax, never ever reaching the meaning of those symbols.

 

This may have come out of nowhere, especially if you are not from this area of knowledge. When programming, it is easy to see computers as sophisticated calculators. They basically get some numbers, do some calculation (think of it as a sum, a multiplication or the like), put the partial results somewhere, do another calculation, and repeat this until they give another number as an answer. After all, everything in computers is represented as binary numbers (like 1001011101), and every operation is done on those binary numbers. They are actually much simpler than that. A computer doesn't see binary numbers as numbers, nor does it calculate anything. It merely manipulates those two symbols, 0 and 1, by following given rules, which may or may not be the implementation of some calculation. Of course, for it to be useful, it had better be a calculation.
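If you want to see how far "just following rules over symbols" goes, here is another toy sketch of mine (Python): it "adds" two bit strings purely by looking up a rule table over the symbols '0' and '1'. Nothing in it treats the strings as numbers, yet to us the result looks like arithmetic.

```python
# rule table: (bit_a, bit_b, carry_in) -> (result_bit, carry_out)
RULES = {
    ('0', '0', '0'): ('0', '0'),
    ('0', '0', '1'): ('1', '0'),
    ('0', '1', '0'): ('1', '0'),
    ('0', '1', '1'): ('0', '1'),
    ('1', '0', '0'): ('1', '0'),
    ('1', '0', '1'): ('0', '1'),
    ('1', '1', '0'): ('0', '1'),
    ('1', '1', '1'): ('1', '1'),
}

def add_symbols(a: str, b: str) -> str:
    """Combine two equal-length bit strings right to left, using only RULES."""
    out, carry = [], '0'
    for x, y in zip(reversed(a), reversed(b)):
        bit, carry = RULES[(x, y, carry)]
        out.append(bit)
    out.append(carry)
    return ''.join(reversed(out)).lstrip('0') or '0'

print(add_symbols('1011', '0110'))  # '10001': looks like 11 + 6 = 17 to us,
                                    # but the function only ever matched symbols
```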

 

This helps explain why automatic translators suck so much. They are given text in one language, compare it with a huge database of how it has been translated before, and give an answer in another language. It's easy to see that the program doesn't understand what the hell it may be saying, but, with enough luck, the database plus some statistical analysis may have just the answer you need. These programs keep getting better because better statistical models are developed and even bigger databases are used.
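A deliberately crude sketch of that lookup spirit (Python again, with a hypothetical phrase table and counts I invented; real translators are far more elaborate, but the point stands): the program matches stored symbols and picks the most frequent candidate, understanding nothing.

```python
# hypothetical phrase table: phrase -> list of (candidate translation, count)
PHRASE_TABLE = {
    "good morning": [("bom dia", 97), ("dia bom", 3)],
    "thank you":    [("obrigado", 88), ("obrigada", 12)],
}

def translate(phrase: str) -> str:
    """Return the highest-count candidate, or the phrase itself if unknown."""
    candidates = PHRASE_TABLE.get(phrase.lower())
    if not candidates:
        return phrase  # no "understanding" to fall back on
    return max(candidates, key=lambda pair: pair[1])[0]

print(translate("good morning"))   # "bom dia", chosen by counting, not by meaning
print(translate("see you later"))  # unchanged: nothing in the table
```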

 

If you're still not convinced, you may come up with an argument like "Why, then, are Google, Siri and Amazon so good at predicting what we want?". Part of the answer was given above. Google counts links between pages to see if they are related. It also counts which link you click when you search for something. Couple that with many other ideas they have come up with over the last decades, like favoring things near your estimated location, plus the biggest database in the world, and you can make quite a good guess about what someone is searching for. The other part is a rather tragic trait of our generation: we're becoming more and more computer-like. Since we're actually doing what computers tell us to do, like clicking on automatically recommended links, it becomes a walk in the park to guess what we're thinking.
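Again, a hedged toy sketch (invented page names, counts and weights, nothing like the real ranking pipeline) of how far plain counting gets you:

```python
from collections import Counter

# invented data: how many pages link to each page, and how often users
# clicked each page for this particular query
inbound_links = Counter({"pageA": 120, "pageB": 15, "pageC": 230})
clicks_for_query = Counter({"pageA": 40, "pageB": 2, "pageC": 5})

def rank(pages, link_weight=1.0, click_weight=3.0):
    """Order pages by a simple weighted sum of link count and click count."""
    def score(p):
        return link_weight * inbound_links[p] + click_weight * clicks_for_query[p]
    return sorted(pages, key=score, reverse=True)

print(rank(["pageA", "pageB", "pageC"]))
# ['pageC', 'pageA', 'pageB']: pageC wins on links, pageA closes in on clicks
```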

 

If you bought all this "computers can't think" idea, realizing they can't have feelings is even easier. Given that they just manipulate symbols, strictly following rules (the programs), they are all the same and do the same things. They are universally designed, so to speak. Make one computer, and another of the same model will behave exactly the same. However, feelings, just like meanings, are personal. Cold, pain, sadness, joy or anger are all personal, dependent on our backgrounds, on ourselves.

 

Noticing how little I wrote on the feelings part made me see how limited my own ability to argue that point is. In case you still think it's all BS and I wasted your precious time, I can recommend some material on this debate, for both sides actually, and you can choose for yourself. Since I ended up writing this for hours while sleepy, I can't recall any names right now, so I'll only post them if requested.

 

 

 

So what? Why all this? What does it have to do with this series? To back up the very first paragraph of this ginormous text: the setting pleases me. It throws out all this BS about computers thinking by introducing magic into the mixture, magic that could be made of the same essence as life itself. That alone makes it much more powerful than infinite processing power. If Yaya and the other automatons think, feel and have personalities, it is because their magic circuits contain something much more than any AI algorithm, database or data analysis could ever have.

 

tl;dr people think robots can think; they don't and won't. This setting rocks.

 

Edit: so many typos...I'm ashamed really


Edited by kkio, 21 April 2013 - 07:38 AM.


#2
Caladbolg

    Potato Spud

  • Members
  • 19 posts
  • Location: Dimensional Rift

I am not exactly an expert on this matter, but here's my opinion:

Emotions are basically reactions that occur when the brain secretes the appropriate hormones. Why does the brain secrete them? When something "fun" happens to us, our brain secretes hormones for positive emotions, and vice versa. These "good" and "bad" things are, of course, subjective. If the analysis of the human brain progresses in the future, scientists may be able to apply this concept to the "brain" of a machine, giving it artificial emotions. With advanced algorithms and faster processors, the machine could collect data and eventually form its own parameters, creating an "individuality", though not one as complex as a human's. So it may be possible for machines close to humans to exist in the future.
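Something like this rough sketch, maybe (a Python toy with made-up parameters and update rules, only meant to illustrate the idea of parameters formed from experience): two copies of the same machine end up with different dispositions because their histories differ.

```python
class ArtificialMood:
    """Keeps a few 'hormone-like' parameters that drift with experience."""

    def __init__(self):
        self.joy = 0.5      # both parameters range from 0.0 to 1.0
        self.stress = 0.5

    def experience(self, event_value: float) -> None:
        """Positive events raise joy and lower stress; negative ones do the opposite."""
        self.joy = min(1.0, max(0.0, self.joy + 0.1 * event_value))
        self.stress = min(1.0, max(0.0, self.stress - 0.1 * event_value))

    def describe(self) -> str:
        return "cheerful" if self.joy > self.stress else "irritable"

a, b = ArtificialMood(), ArtificialMood()
for value in (1, 1, 1):      # unit A has a pleasant history
    a.experience(value)
for value in (-1, -1, -1):   # unit B does not
    b.experience(value)
print(a.describe(), b.describe())  # cheerful irritable
```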

I am not good with words. I apologize if there are any mistakes.



#3
jfforums

    Potato Spud

  • Members
  • 29 posts

This is quite an interesting theme :)

I am not an expert on this theme either, but I worked in the IT industry for a while, so I picked up a few things there.

First, about the thinking process you were talking about, kkio.

It is still unknown exactly how the human brain works (as pure mechanics :)) during the thinking process. The theory of thoughts (their creation, transfer...) as chemical processes between cells and cell parts alone looks incomplete today (it does not explain plenty of things in real life, some of them like instincts, or what are referred to as genetic memories, such as how a baby knows how to take food from birth without learning, etc.). Some experts allow for the existence of external sources (outside the personal brain/body) used during the thinking process to explain some known bizarre facts of human society, like the same ideas appearing around the world at approximately the same time without any visible way of spreading them through travel or cultural exchange (like the wheel, tools, or the development of agriculture in early human societies...). That will probably stay unknown for some time too, so it cannot be used as a way to create an artificial "being" that would think in the same or a similar way as humans.

So, for now we can only discuss the methodology of human thinking. And you are right: during the thinking process, humans use concepts of objects/events... instead of the physical objects we can see, feel or remember. Your example of doors is good; regardless of shape (rectangular, or spherical as on submarines or in hobbit houses :)), a human will usually recognize them, though sometimes you would need to see the object, let's say, opening and closing to recognize the concept of a door, if it is the size of an entire wall, or shaped like the doors of the "Enterprise"... There is a nice scene in the manga "Noblesse" when Rai looks at automatic doors :)

And yes, you are right that computers still do not use many "concepts" when making decisions; instead they extensively use libraries with fixed definitions of objects, events, etc.

And now we come to the part where I disagree with you.

It is true that the vast majority of software is made in the same manner: a more or less simplified algorithm used to replicate the thinking process of some IT engineer solving some type of problem using binary mathematics. But why is that? Is it the only way to solve the same problem? No. It's simply the result of human laziness. Some 50 years ago computers had severe limitations for data analysis, so at that time humans developed an algorithmic way of programming them to use those machines optimally. As time passed, computers got better and better at data analysis, but the way humans think when using them stayed almost the same. So now we have the impression that this is the only way to use computers, and that there is no way to create artificial intelligence similar to a human's.

Well, now things have changed a little, because plenty of places around the world are conducting experiments with artificial intelligence that uses "learning" as a starting point, deals with concepts instead of libraries of definitions, uses fuzzy logic instead of binary interpretations, uses biological computers instead of standard ones, etc.
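To give a feeling for the fuzzy-logic part (a small Python toy with membership functions I made up, not any production system): instead of a hard yes/no, a statement like "the room is warm" gets a degree of truth between 0 and 1.

```python
def warm(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm': 0 below 15C, 1 above 25C."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10  # linear ramp in between

def fuzzy_and(a: float, b: float) -> float:
    """One common choice of AND for fuzzy truth values."""
    return min(a, b)

humid = 0.7  # assumed degree of "humid" coming from some other sensor
print(warm(21))                    # 0.6: partly warm, not a binary answer
print(fuzzy_and(warm(21), humid))  # 0.6: "warm AND humid" to degree 0.6
```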

In any case, this is only the beginning, but along that path I see some chance of creating artificial intelligence similar to ours in the not too distant future.

 

The other part, the feelings of those artificially created "beings", is a bigger problem, because, as you said, feelings are an individual category, so those "beings" would need to be made self-conscious, even if they are mass-produced. If some of those "beings" can differentiate themselves from others of the same kind, through a process of learning and accumulating experience, I think even having feelings should not be a problem for them. But I do not think it would be easy, or near in the future (at least we know it is doable, because God (or whatever else) created us :D)...

As hard evidence for my opinion I can offer you the manga "Gepetto" :D; there you can find robots with rudimentary or complex feelings like jealousy, anger, doubt and loyalty... as a simple product of those robots' self-consciousness. Even as a work of art, it has plenty of thinking invested in its plot, no less than Arthur C. Clarke's SF novels :)

 

On the other hand, making robots with artificial emotions, as Caladbolg suggests, should be pretty easy, even with present-day technology. There is already software that can "guess" the mood of a human user quite nicely (by learning the shapes of human faces showing different emotions) and respond appropriately to that mood with gestures, talk, etc. (at least to the level of a human who finds himself in the company of people who do not speak his language :)). So I do not think it would be difficult to create a robot like Yaya who would behave strictly according to some psychological limitations (jealous, loyal and possessive towards the MC...) in some situations (without the fighting abilities :)) using present-day technology. But that is faking emotions, not feeling them, and I think kkio is talking about the real feelings of robots, or whatever artificially created "thing" with some kind of intelligence.
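Just to illustrate what I mean by faking (a minimal Python sketch with hypothetical mood labels and canned lines, not any real emotion-recognition API): once some classifier has guessed the user's mood, the "empathy" is only a table lookup.

```python
# hypothetical canned replies keyed by a mood label some classifier produced
CANNED_RESPONSES = {
    "happy": "That's wonderful to hear!",
    "sad":   "I'm sorry. Is there anything I can do?",
    "angry": "I understand. Let's take a short break.",
}

def respond(detected_mood: str) -> str:
    """Pick a scripted reply for the guessed mood; fall back to a neutral filler."""
    return CANNED_RESPONSES.get(detected_mood, "I see.")

print(respond("sad"))      # a sympathetic-sounding line, chosen purely by lookup
print(respond("curious"))  # "I see." -- nothing behind it either way
```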

 

For any bad spelling, bad choice of words, or incomprehensible rambling, blame my bad English :D



#4
Remilia Scarlet

    Potato Sprout

  • Members
  • 6 posts

In Japan there is a tradition regarding inanimate objects that doesn't exist in the West. I am talking about the idea that everything has a soul, or can gain one (tsukumogami). For instance, in Tsugumomo we have a love story between a boy and an obi (a kimono sash) (well, the spirit of the obi, but in the end the Japanese don't make the distinction). So it really is easy for Japanese people to think that a machine can love, especially if it's a doll (Japanese doll youkai and the like; there is also another manga about a boy and a Japanese doll, I forget the name). As you stated yourself, this world is a mix of magic and science, and as such there shouldn't be a problem with dolls having souls (and she is a bandoll made of human flesh, if I remember well, so it should be easier).

For instance, I wouldn't be shocked if, were one of the puppeteers to protect his doll at the cost of his health, the aforementioned doll gained a soul or something like that (souls that fix themselves in beloved tools).

Well, I don't know if my point was clear enough or not :/



#5
DanYHKim

    Potato Sprout

  • Members
  • 8 posts

The idea of robots not being capable of feelings, and then eventually advancing until they are, is actually as old as the modern meaning of the word "robot" itself. The first 'robot' story is "R.U.R.". From Wikipedia:

R.U.R. is a 1920 science fiction play by the Czech writer Karel Čapek. R.U.R. stands for Rossumovi Univerzální Roboti. However, the English phrase Rossum’s Universal Robots had been used as the subtitle in the Czech original.

 

In this play, the robots are more like homunculi. They are semi-living creatures shaped like humans, with enough intelligence to be used as workers, but they have no internal motivation or emotions. Predictably, they are used for war and are worked until their bodies break down. Eventually, the robots rebel against the humans and kill them all, save one. The one human left was basically the janitor at the R.U.R. company, or some such.

 

Since Rossum destroyed the robot-creating machines, and also his notes on the technique for creating robots, before he was killed in the rebellion, the robots turn to the last human to re-create the process. He despairs, knowing that when the final robot wears out, there will not even be a simulacrum of humanity left on the Earth. But then a strange thing happens. He encounters two robots, shaped male and female, who appear to have feelings for each other. This development is unexpected, and he wishes to dissect one of them to find out what allows them to have emotions. The other robot objects and threatens the human, saying that without the other, existence would be futile and empty.

 

Realizing that these two have somehow made the leap to becoming human beings, possessing a soul, the human releases them to inherit the world together.

 

If you are interested in this theme of robots with emotions, I recommend looking at "Saber Marionette J". This series also has robots who achieve sentience and emotions through a special component called a 'maiden circuit'. The series itself is full of slapstick comedy, but it inspired two exceptional fanfictions, which I highly recommend: "Koyuki's Red Pinwheel" and "Terrible Swift Sword", both by David Pascal, which extend the theme to a great degree. They can be found at the Vanished Fanfiction Archive (http://vffa.ficfan.org/author_david_pascal.html)