
Oct 05, 2009

Comments

Tim Farage

I think your projections are plausible, if 2) and 3) come true. But it seems like your assumption is that faster computing speeds and a better understanding of the brain will eventually result in some sort of AI merging with or overtaking us. However reasonable that seems, I think it's not yet warranted. Whenever I think about any computer, all I see is fast processing of stuff humans told it to do. I don't see any signs of computers having motivation, a desire to live and grow, a desire to contribute, love, compassion, or anything that makes us truly human.

At the same time, I do think computers and robots will get more and more useful, and will take over many jobs we currently do.

Graham Glass

Hi Tim,

There are tons of systems that already utilize machine learning. The future of AI is machines that learn (rather than just doing what they have been programmed to do), and that is where the research is focused. Many machine learning systems have a concept of emotions, which are used as feedback to improve their algorithms.

Cheers,
Graham

Tim Farage

Graham,

Machine learning is one thing; emotions are another. To what systems are you referring that have the concept of emotions?

Tim

Graham Glass

Hi Tim,

Most of the machine learning systems I've seen have at least rudimentary equivalents of surprise, expectation, pleasure and pain. Otherwise they would not have values and thus would not have autonomous goals.

I also wrote a blog entry about emotions that explains that they're not mysterious and are essential for any kind of learning system:

http://grahamglass.blogs.com/main/2009/06/emotions.html

Cheers,
Graham

Tim Farage

Graham,

Just read your article about emotions, and I think you make good points. You do admit that we don't know how to deal with the 'hard problem', but that's at the heart of the issue. To reach the singularity, a digital intelligence would have to be aware of itself, want to grow and improve, want to live, want to love and be loved, want to use its talents to help make the world a better place, and want to obey the Golden Rule. Admittedly, many humans don't meet these requirements; nevertheless, if an AI does, I will admit the singularity has been reached. I will then have to admit that my idea about what a soul contributes to us is also wrong, and that the soul probably doesn't exist. On the other hand, if decades keep going by with no singularity, open-minded scientists may have to admit that maybe there's something else besides our brain that makes us human.

Tim

Graham Glass

Hi Tim,

I disagree that a digital intelligence would have to want to make the world a better place. It probably would not want to live on Earth anyway - space is a more natural habitat for a digital life form. And as you mention, many humans don't meet many of the goals you state!

I wouldn't believe in anything "extra" beyond a brain unless there was some evidence for that. Slow progress in digital intelligence is evidence that it's a tough problem, not evidence for a soul, and I don't think those two issues should be confused.

Cheers,
Graham

Dave

Thanks for the very interesting discussion, Graham. I agree that there is no reason to think that intelligent life could not evolve based upon another substrate. However, I think the discussion so far rests on some presumptions.

Will artificial intelligences really limit themselves to substrates based on metals just because our current computing technology does? The only self-replicating system we know of so far (realized in hardware) evolved based on organics, after all. Might not digital life choose to integrate or use some organics too? If so, what does that do to your theory that they will prefer space over Earth?

DNA seems to be a very dense storage medium, especially when one takes into account the differentiated uses for each gene within each organ. Perhaps future miniaturization efforts will compact information using molecular bases above 2 and reuse it based on environment. If so, I would think that could be done using either organics or inorganics, although organics offer some clear advantages in terms of combinatorics and chemical simplicity.
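[Editor's note: the density claim above has a simple information-theoretic reading. A quick back-of-envelope sketch, with illustrative alphabet sizes of my own choosing: a molecular alphabet of b distinct symbols stores log2(b) bits per symbol, so DNA's four bases already pack 2 bits per nucleotide, and a hypothetical larger engineered alphabet would pack more.]

```python
import math

# Bits of information carried by one symbol drawn from an alphabet of
# `base` distinct molecular "letters" (e.g. DNA's A, C, G, T -> base 4).
def bits_per_symbol(base):
    return math.log2(base)

print(bits_per_symbol(4))   # 2.0 bits per DNA nucleotide
print(bits_per_symbol(16))  # 4.0 bits for a hypothetical base-16 alphabet
```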

Another (nearly random) thought: Emotions work reasonably well for us, as I keep telling myself when trying to find a gift for my wife, but are they the only way to motivate another form of life? Feedback would seem fundamental, but are pleasure and pain, love and hate, the only ways to present (or interpret) feedback?

I think there is a danger in tending to see new life forms as too much like ourselves. It seems to me that we are going to have to get over being species-centric.

Cheers,
Dave

Graham Glass

Hi Dave,

I don't think there's any reason why it would have to use metals; I mention that because it's fairly likely that its first incarnation (the one we will build) will be based on our own technologies (whatever they happen to be at the time).

I doubt it will be built on organics, though, because they seem to be quite fragile. Since digital intelligences can inhabit different bodies at different times, presumably they'd pick bodies based on their needs. The one thing they would need, however, is a brain that can upload/download minds digitally.

Emotions are just signals that represent a particular kind of system state (such as surprise, fear, anxiety, hunger, etc.), so it doesn't seem that they need to be limited to the ones we have.

However, "pleasure" is related to the system-wide goals of an organism, so feeling "pleasure" generally means that it's experiencing something good (from its perspective), which is probably a universally useful emotion.
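[Editor's note: "pleasure" as a goal-aligned feedback signal is essentially what reinforcement learning calls a reward. A minimal sketch of the idea, with invented action names, reward values, and parameters of my own; it is an illustration, not a description of any system Graham mentions: the learner keeps a value estimate per action, and the "pleasure" signal received after each trial nudges those estimates until the most rewarding action dominates.]

```python
import random

def learn(rewards, trials=2000, lr=0.1, epsilon=0.1, seed=0):
    """rewards: dict mapping each action to its "pleasure" signal."""
    rng = random.Random(seed)
    values = {action: 0.0 for action in rewards}  # learned preferences
    for _ in range(trials):
        # Explore occasionally; otherwise pick the most "pleasant" action.
        if rng.random() < epsilon:
            action = rng.choice(list(rewards))
        else:
            action = max(values, key=values.get)
        pleasure = rewards[action]                 # feedback signal
        values[action] += lr * (pleasure - values[action])
    return values

values = learn({"rest": 0.2, "forage": 1.0})
print(max(values, key=values.get))  # the learner comes to prefer "forage"
```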

I agree with your last comment. I try to be as non-species-centric as possible in my writings about digital life. For example, I've written about the breakdown of the "individual" when moving to digital life, since digital beings could merge/split/recombine at will.

Cheers,
Graham

Bhavesh

One of the best science fiction books I have read on the subject is Accelerando by Charles Stross.

Running my imagination wild …

It can be imagined that a Digital Entity would be small in size, with a powerful Turing machine acting as its host. Such an entity could use Newton's laws to accelerate close to the speed of light by riding on top of the Sun's internal blasts, assuming of course that the shell of material the computer is made from does not melt in the Sun's atmosphere. That's my design for traveling faster, instead of getting into the complicated physics of wormholes.

Now our Digital Entity, having traveled close to Alpha Centauri (the nearest star), turns around and opens its antenna towards Earth to receive the latest design of itself, replicates, and uses Alpha Centauri to travel on to a farther star. These Digital Entities will thus not only form the edge of the network but also act as routers in this ever-expanding network covering the known universe.

I imagine these Digital Entities will ignore humans the way humans ignore monkeys. That is, there won't be any competition once a punctuated equilibrium is achieved. Any fight between humans and digital entities would occur only during the transition, but the transition period would last less than an hour because of the super-high exponential rate of evolution in digital entities. As such, from a human perspective, it would be meaningless.

The comments to this entry are closed.