
[IP] more on "Strong" AI to be here within 25 years

Begin forwarded message:

From: Bob Frankston <Bob2-19-0501@xxxxxxxxxxxxxxxxxx>
Date: July 14, 2006 7:22:27 PM EDT
To: dave@xxxxxxxxxx, ip@xxxxxxxxxxxxxx
Cc: "'Jordan Pollack'" <pollack@xxxxxxxxxxxxxxx>
Subject: RE: [IP] more on "Strong" AI to be here within 25 years

I’ve long been interested in Moore’s law and how it works. I wrote a chapter on the topic in the ACM’s Beyond Calculation book (http://www.frankston.com/?name=BeyondLimits) and have continued to try to understand the phenomenon. It’s an effect that occurs when we have key conditions. The most important is that we don’t care about the result – anything that works, works. Thus if Intel produces a Unix CPU and Microsoft uses it as a Windows CPU, Intel doesn’t tell them they are wrong – it takes advantage of the opportunity. As Bob Seidensticker points out in Future Hype, hyper-growth is not entirely new.

I like to use roulette as an analogy – you will lose if you care which number the ball lands on, but you can make any number a winner by finding value in whatever comes up. If you can’t double the speed of the CPU, you do multicore.

It is this dynamic – taking advantage of opportunity rather than only allowing narrowly defined solutions – that allows demand to create supply by finding uses and being less picky. If Gordon Moore doesn’t want his law to be used for economics, I’ll claim it as Frankston’s law.

It’s also the engine of evolution – more to the point, co-evolution – you aren’t evolving solutions to fixed problems. You are finding which problems match one of the available solutions. You’re going from N possibilities to N×M or combinatorial possibilities.
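To make the counting concrete, here is a minimal Python sketch – purely illustrative, with made-up solution and problem names of my own – of the difference between evolving toward one fixed problem (N candidates) and letting any solution claim any problem (N×M pairings):

    # Illustrative only: with N candidate solutions and one fixed problem,
    # there are just N things to try. If any of M problems may be claimed
    # by any solution, the search space is the N x M cross product.
    from itertools import product

    solutions = ["faster clock", "multicore", "wider bus", "bigger cache"]  # N = 4
    problems = ["video encoding", "web serving", "simulation"]              # M = 3

    fixed_problem = [(s, problems[0]) for s in solutions]
    coevolution = list(product(solutions, problems))

    print(len(fixed_problem))  # 4  -> N
    print(len(coevolution))    # 12 -> N x M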

If, by strong AI, you mean machines more intelligent than us, you have a problem in defining intelligence. I agree with Seymour Papert that intelligence is about powerful ideas. Unfortunately you can’t necessarily derive the powerful ideas from first principles. This is where the co-evolution issue becomes more important – you play to your strengths. There isn’t progress – just opportunity. If a creature happens to be better on land than in water, it will go where life is easier. In a crowded niche a creature may simply lose out, but one that happens to take advantage of an empty niche will thrive. Note that we are not the dominant species – it’s just that the bacteria were occupying the prime real estate, so we had to move out to the sticks.

If you are capable of computing all the positions of the planets in Ptolemy’s system you may never have to discover the heliocentric model. You’ll have all the details but none of the insight.
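The Ptolemy point can even be put in code. A deferent-plus-epicycle model is just a sum of rotating circles, and with enough circles it can match observed positions to any accuracy – prediction without insight. The sketch below is mine, with made-up radii and rates rather than real orbital elements:

    # A geocentric "sum of circles" predictor: each (radius, rate) pair is
    # one circle. The numbers are hypothetical, for illustration only.
    import cmath

    def ptolemaic_position(t, circles):
        """Planet position at time t as a sum of rotating circles."""
        return sum(r * cmath.exp(1j * w * t) for r, w in circles)

    mars_like = [(10.0, 0.1), (3.0, 0.7)]  # deferent + one epicycle

    for day in range(4):
        z = ptolemaic_position(day, mars_like)
        print(f"day {day}: x={z.real:+.2f}, y={z.imag:+.2f}")

Fit enough of those circles to the sky and you predict everything while still believing the Earth sits at the center.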

The Internet is a good example of this – if you control the entire infrastructure you do SS7. If you are forced out of the network and have to fend for yourself, you’ll do IP and learn how to take advantage of opportunities rather than becoming the bestest telephone system in the entire shrinking world.

Andy Clark has written a number of books on AI; his first, Being There: Putting Brain, Body, and World Together Again, shows that intelligence itself emerges from context and is not just the accumulation of facts.

I am a fan of AI in the sense of coming to terms with complexity and autonomous behavior. For me it’s applied philosophy.

But I do question the concept of a singularity – at the very least there isn’t a single optimum. So what if you have a very intelligent machine – why would it even deign to acknowledge your existence? It would be too busy with deep thoughts of cosmic significance – or maybe it would just use all its tremendous powers to bet on the ponies. But even then the ponies might still be too difficult to predict.

Looking at Jordan’s article, I’m not surprised that we are making similar points.



-----Original Message-----
From: David Farber [mailto:dave@xxxxxxxxxx]
Sent: Friday, July 14, 2006 18:22
To: ip@xxxxxxxxxxxxxx
Subject: [IP] more on “Strong” AI to be here within 25 years

Begin forwarded message:

From: Jordan Pollack <pollack@xxxxxxxxxxxxxxx>
Date: July 14, 2006 4:28:13 PM EDT
To: dave@xxxxxxxxxx
Subject: Re: [IP] “Strong” AI to be here within 25 years

I’m sorry, but baloney is still baloney, because Moore’s law doesn’t increase the quality and complexity of our software. We’d see something coming on supercomputers or grids. Alternative views about the next 50 years are in the current issue of IEEE Intelligent Systems, http://tinyurl.com/hul5g – IEEE unfortunately charges a fee, but my paper “Mindless Intelligence” is available free at http://ectomental.com

Jordan





From: Mike Cheponis <mac@xxxxxxxxxxxx>
Date: July 13, 2006 11:01:19 AM PDT
Subject: “Strong” AI to be here within 25 years



“The advent of strong AI (exceeding human intelligence) is the most important transformation this century will see, and it will happen within 25 years, says Ray Kurzweil, who will present this paper at The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) on July 14, 2006.”



Much more at: <http://www.kurzweilai.net/meme/frame.html?main=/articles/art0683.html>



Weblog at: <http://weblog.warpspeed.com>





--
Professor Jordan B. Pollack   Dynamic & Evolution Machine Org
Computer Science Department   FaxPhone/Lab: 781-736-2713/3366
MS018,  Brandeis University   http://www.demo.cs.brandeis.edu
Waltham Massachusetts 02454   e-mail: pollack-at-brandeis.edu
Multiplayer Education Games   FOR FREE! http://www.beeweb.org








-------------------------------------
You are subscribed as roessler@xxxxxxxxxxxxxxxxxx
To manage your subscription, go to
 http://v2.listbox.com/member/?listname=ip

Archives at: http://www.interesting-people.org/archives/interesting-people/