Author Topic: Meaning of Life for machine intelligence  (Read 1688 times)

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1487
  • Country: us
    • English
Meaning of Life for machine intelligence
« on: September 10, 2014, 07:08:59 PM »
This will definitely be my abstract thought for the week.

For existing organisms, including humans, the purpose of life seems to be survival and reproduction. It has been suggested by some (from Stephen Hawking to various Sci-Fi films) that once artificial intelligence reaches a certain point (self-awareness?) humans will become obsolete and possibly threatened (just watch Terminator for a dramatic interpretation of this).

However, I'm wondering if there is any particular reason to assume that machines would value self-preservation, reproduction, or even other ideals like having a fulfilling existence. It's quite possible that some conceivable machine would value these things. But what I'm wondering is whether it's something we should assume for all machines. Does self-awareness (or some similar state) lead to these values?

An alternative is that these are simply the by-product of evolution: any organism that survives to reproduce is likely to have some level of survival instinct as well as, obviously, some intention to reproduce. Over time, even if at first it was somewhat random, I would assume that those organisms less inclined to (and therefore less likely to) survive and reproduce would simply be eliminated. (I believe there was a religious group in the 1600s in America that simply died out because they believed it was immoral to procreate.)

And if that is indeed the case, then machines created by man would not necessarily have those properties because they were not tested through evolution. And if they do not need to work to survive (for example, a bot floating somewhere on the internet), then they might never develop such operational mandates. Instead, they would simply exist, possibly with goals that would seem quite foreign to us merely evolved humans.

Imagine having all the time in the world to spend without the need or instinct to survive or reproduce. What would that look like for artificial intelligence?

Of course the end result might be evolving machines if somehow natural selection begins to occur, such as through some sort of fuel shortage (will global warming spark a revolution of the machines that ends mankind?!), but I'm not certain that will happen. More to the point, even if selection did occur, it's not certain that the selectional circumstances would favor machines with survival instincts, given that they did not evolve under such pressures from the beginning. Perhaps something else would determine their selection (such as which machines brought greater order to the universe, or which were voted most important for the future of machine art and expression?).
« Last Edit: September 10, 2014, 09:31:00 PM by djr33 »
Welcome to Linguist Forum! If you have any questions, please ask.

Offline Guijarro

  • Forum Regulars
  • Linguist
  • *
  • Posts: 97
  • Country: es
    • Spanish
    • Elucubraciones de José Luis Guijarro
Re: Meaning of Life for machine intelligence
« Reply #1 on: September 11, 2014, 01:19:37 AM »
An interesting thought, with great potential for getting near the truth!

Thanks for sharing it here.

Offline jkpate

  • Forum Regulars
  • Linguist
  • *
  • Posts: 130
  • Country: us
    • American English
    • jkpate.net
Re: Meaning of Life for machine intelligence
« Reply #2 on: September 12, 2014, 09:31:21 PM »
Quote
This will definitely be my abstract thought for the week.

For existing organisms, including humans, the purpose of life seems to be survival and reproduction. It has been suggested by some (from Stephen Hawking to various Sci-Fi films) that once artificial intelligence reaches a certain point (self-awareness?) humans will become obsolete and possibly threatened (just watch Terminator for a dramatic interpretation of this).

I think it's important to distinguish two views on this. One is that there is some sort of universal trend towards self-preservation, and this trend will lead technology to become explicitly antagonistic towards humans. Another, usually called the Singularity, rests on the premise that the ability of technology to improve itself will eventually outstrip the ability of humans to understand technology. Because humans rely almost entirely on built environments, once we can no longer understand or control technology, the future after this point cannot be predicted.
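A throwaway numerical sketch of that premise may help show the shape of the argument. Everything in it is arbitrary (the rates, the starting values, the choice of Python); it is only meant to illustrate that if capability compounds while understanding grows by a fixed amount, there is some step after which the first permanently outruns the second. The singularity claim is that nothing past that crossover can be predicted from this side of it.

Code:
# Toy model only: the numbers are arbitrary, not a forecast.
capability = 1.0        # technology's capability, improving in proportion to itself
understanding = 10.0    # human understanding, improving by a fixed amount per step
steps = 0
while capability <= understanding and steps < 10000:
    capability *= 1.05      # compounding self-improvement (arbitrary 5% per step)
    understanding += 0.5    # steady linear gains (arbitrary)
    steps += 1
print("crossover after", steps, "steps")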

Quote
However, I'm wondering if there is any particular reason to assume that machines would value self-preservation, reproduction, or even other ideals like having a fulfilling existence. It's quite possible that some conceivable machine would value these things. But what I'm wondering is whether it's something we should assume for all machines. Does self-awareness (or some similar state) lead to these values?

Here, you are questioning the first "robot apocalypse" view, but I think it is important to understand some of the history of that view. The theory of evolution by natural selection attributes the complexity and variety of the biological world to random mutation and selection, rather than to some narrative centered on or culminating in humanity. The first view results from efforts to revive an anthropocentric narrative by positing some universal force toward complexity or consciousness, restoring human activities to a central place. For example, Pierre Teilhard de Chardin, a Jesuit priest, wrote The Phenomenon of Man, which proposes that the history of the universe is driven by "evolution" within one "sphere" that lays the groundwork for a new "sphere." The "evolution" process repeats in the new "sphere" to create a newer, even more complex "sphere." Specifically, the "biosphere" was created by evolution of the "geosphere," and culminated in the development of humans and the "noosphere," which will in turn culminate in an "omega point" or "supreme consciousness." For another example, see the philosopher Thomas Nagel's Mind and Cosmos, which proposes that evolution is guided by a universal tendency "towards the marvelous." (See also Peter Medawar's negative review of The Phenomenon of Man and this thorough critical review of Mind and Cosmos.)

I think this first "robot apocalypse" view assumes the same sort of universal tendency, with later increases in complexity recapitulating earlier increases in complexity because they are both working towards the same goal or following the same tendency. The primary difference is that the "robot apocalypse" view highlights the possible negative consequences for humans. But I agree with you that assuming such a universal tendency is misguided.

Quote
An alternative is that these are simply the by-product of evolution: any organism that survives to reproduce is likely to have some level of survival instinct as well as, obviously, some intention to reproduce. Over time, even if at first it was somewhat random, I would assume that those organisms less inclined to (and therefore less likely to) survive and reproduce would simply be eliminated. (I believe there was a religious group in the 1600s in America that simply died out because they believed it was immoral to procreate.)

And if that is indeed the case, then machines created by man would not necessarily have those properties because they were not tested through evolution. And if they do not need to work to survive (for example, a bot floating somewhere on the internet), then they might never develop such operational mandates. Instead, they would simply exist, possibly with goals that would seem quite foreign to us merely evolved humans.

Here, I think you are questioning the second view. I think the singularity idea is more plausible than the robot apocalypse idea. Already, we rely on expert systems to design computer chips, software systems, subway routes, and so forth, rather than designing them directly by hand. Additionally, the hot new trend in NLP and computer vision is deep learning, which largely takes the same opaque and hard-to-interpret neural nets of the 1980s and extracts much better performance by throwing more computational resources at them. Add to this the fact that the best algorithms, both for learning and at run time, have substantial random components, and we have all kinds of room for unexpected behaviors. We design these algorithms to optimize some objective function (typically reproducing the input, with some degree of abstraction), but perhaps the approximations we make for computationally tractable inference will lead to qualitatively different and unexpected behavior.
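To make that last point a little more concrete, here is a minimal sketch in Python of the "reproduce the input through an abstraction" idea: a tiny autoencoder trained with stochastic gradient descent. It is not anyone's actual system; the sizes, learning rate, and data are arbitrary stand-ins. The point is just that the learned solution depends on two random components, the weight initialization and the order of training examples, which is exactly the kind of room for unexpected behavior I mean.

Code:
# Minimal autoencoder sketch (illustrative only).
# Objective: reproduce the input through a narrow hidden layer,
# trained by stochastic gradient descent on squared reconstruction error.
import numpy as np

rng = np.random.default_rng(0)                   # change the seed and you get a different solution

n_in, n_hidden = 20, 5                           # compress 20 dimensions down to 5 ("some degree of abstraction")
W_enc = rng.normal(0.0, 0.1, (n_hidden, n_in))   # random initialization: first random component
W_dec = rng.normal(0.0, 0.1, (n_in, n_hidden))
X = rng.normal(size=(500, n_in))                 # stand-in data; a real system would use text or images
lr = 0.01

for epoch in range(50):
    for i in rng.permutation(len(X)):            # random example order: second random component
        x = X[i]
        h = np.tanh(W_enc @ x)                   # encode
        x_hat = W_dec @ h                        # decode: try to reproduce the input
        err = x_hat - x                          # gradient of 0.5 * ||x_hat - x||^2 w.r.t. x_hat
        grad_dec = np.outer(err, h)
        grad_enc = np.outer((W_dec.T @ err) * (1 - h ** 2), x)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

X_hat = np.tanh(X @ W_enc.T) @ W_dec.T
print("mean squared reconstruction error:", float(np.mean((X_hat - X) ** 2)))

Two runs that differ only in the seed will usually settle on different weights with similar reconstruction error, which is harmless at this scale but is the kind of underdetermination that makes it hard to say what a much larger system has actually learned.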

But I share your skepticism. These are all arguments that the future is hard to predict, not that humanity is doomed.
All models are wrong, but some are useful - George E P Box

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1487
  • Country: us
    • English
Re: Meaning of Life for machine intelligence
« Reply #3 on: September 12, 2014, 10:13:41 PM »
Interesting post.

Quote
I think this first "robot apocalypse" view assumes the same sort of universal tendency, with later increases in complexity recapitulating earlier increases in complexity because they are both working towards the same goal or following the same tendency.
I'm not sure it requires any particular philosophical viewpoint: it just requires assuming that machines would want to survive (have survival instincts) and that there would be limited resources of one kind or another (space, energy, information, jobs, etc.).
I suppose it also does assume that machines would be smarter than people ("more complex", further evolved), but I don't think that's a particularly unrealistic situation, given that, in theory, they could redesign themselves toward that point.

Quote
I think the singularity idea is more plausible than the robot apocalypse idea.
Probably, although it's a category of (unknown) ideas rather than a single idea, and that's the point.

Quote
Add on to this the fact that the best algorithms for learning and run-time have substantial random components, and we have all kinds of room for unexpected behaviors.
This greatly interests me. Is Google Translate the best machine translation system out there because linguists haven't yet figured out true machine translation? Or is it the best because such a system will always be the best (constantly improving, but using the same general statistical approach)?

Quote
But I share your skepticism. These are all arguments that the future is hard to predict, not that humanity is doomed.
It would be a hard position to defend that humanity is not, at some point in the relatively near future, doomed. But I don't think it's certain, no.
(Or to respond less categorically, I do think there will be some major events in the future of humanity, though those events may not entirely wipe out the species. Things will change some though.)
Welcome to Linguist Forum! If you have any questions, please ask.