
View Full Version : Terminator instead of Zombies



David Ragan
08-11-2015, 7:09 AM
The Guardian, a UK paper, published a story on July 27 (by Samuel Gibbs) reporting that Stephen Hawking, Steve Wozniak, and Tesla's Elon Musk, along with 1,000 other AI and robotics experts, signed an open letter that was presented at an international conference on AI in Buenos Aires recently.

They warned that robotic killers that could pick targets and act without human control were years, not decades, away:

http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons

They said that this will be a change on the same level as gunpowder and the atomic bomb.

It was Einstein who wrote a similar letter to FDR, warning that if the US did not develop the atomic bomb, the Nazis would.

(This is not meant to be a political thread)

I think this is seriously disconcerting. Definitely a great reason to lose myself in The Shop.

Pat Barry
08-11-2015, 12:35 PM
I'm not familiar with the Guardian. Is that an Onion type of paper?

David Ragan
08-12-2015, 9:02 AM
I'm not familiar with the Guardian. Is that an Onion type of paper?

I'm glad you asked that.

I googled "Onion paper," and it seems to be a generic term for a satirical, farcical take on current news.

I got my initial information from a weekly publication called The Week, a condensation of news from the world over; it comes out on Fridays. Not an Onion.

I went to The Guardian website that The Week quoted, which prompted your valid question about what genre of paper it is. It looks to me to be just a 'normal' paper, self-admittedly liberal in viewpoint (like we need to be told that).

Having spent the last 30 minutes researching all this, I am mentally fatigued now. I could not find the letter cited among the several hundred papers accepted at the huge Buenos Aires conference. But the conference is for real, and it appears to have all the earmarks of legit science.

One cool paper I saw was "Modeling Quantum Entanglements...." And lots of papers on AI and robots, language, visual recognition, etc.

The following is from page 16 of the program brochure:

– Wednesday
“Killer robots, the end of humanity, and all that: What
should a good AI researcher do?” Stuart Russell. Hear an
update on the campaign to ban lethal autonomous weapons,
as well as the fears that AI poses an existential threat
to mankind.

I posted this originally to generate some serious discussion, cause I know there are folks here who have this kind of knowledge. I was hoping to hear from them. At this point, I have put in the obligatory amount of work :).

Jim Koepke
08-12-2015, 1:37 PM
I posted this originally to generate some serious discussion, cause I know there are folks here who have this kind of knowledge. I was hoping to hear from them. At this point, I have put in the obligatory amount of work :).

The title kind of explains it for me. In the first Terminator movie, Reese mentions that once the AI machinery became self-aware, it realized humans were a threat to the existence of the machines.

Thinking about this can be a real mind-boggler. Will the human race become extinct at the hands of machines we build, or will we just poison ourselves in a quest for riches?

One way to extinction reminds me of an old bumper sticker: "The influence of our affluence can be seen in our effluence."

jtk

Dennis Aspö
08-13-2015, 3:36 AM
One of the bigger problems in this field is the human tendency to anthropomorphize everything. It becomes a serious issue in developing AI because, likely, they won't think like us. And here is the first problem: I said "not think like us." Most people take that to mean different motivations, maybe being emotionless (like Spock!), but otherwise a "me" at the core, like a human mind. We're anthropomorphizing even when we think we aren't, even when we think we're doing the opposite.

But we have to understand how alien an AI can be, and that even a "self-aware" AI might not have a sense of self or 'me' in the same way humans do; heck, it might even rewrite itself and remove that part if it's deemed inefficient or not part of its goal set. Intelligence, in both people and AI, is just an optimization process: you have goals, and you try to change yourself or your environment to meet them in the most optimal way.
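That "intelligence as optimization" framing can be sketched as a toy program in a few lines of Python. This is purely a hypothetical illustration of the idea (the goal, step size, and function names are all made up here, not anything from the articles):

```python
# Toy illustration of "intelligence as an optimization process":
# an agent with a goal repeatedly tries small changes to its state
# and keeps whichever change moves it closer to the goal.

def optimize(state, goal, step=1.0, max_iters=1000):
    """Greedy hill climbing toward a numeric goal."""
    for _ in range(max_iters):
        if state == goal:
            break  # goal reached
        # Try moving in both directions; keep whichever is closer to the goal.
        candidates = (state + step, state - step)
        state = min(candidates, key=lambda s: abs(s - goal))
    return state

print(optimize(0.0, 5.0))  # climbs from 0 toward 5
```

Note the agent has no "me" anywhere in it, which is the point above: nothing in the loop requires a human-like sense of self, only a goal and a way to move toward it.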

This can result in some extreme behaviour in AI because of the biggest risk factor of all: they are likely to be much more capable and faster-thinking than us. And that can be dangerous, especially if the AI is a functional general AI with the ability to rewrite itself; the potential for intelligence in those types of AI is enough to make humans look like dimwitted pets in comparison, if the theorists are on the money, that is.

But imagine: what do you get with a mind that is not limited by the slowness of biology? Our brains use a massively, massively parallel approach to overcome biological limitations, wherein electrical impulses travel slower than the speed of sound. An AI can implement this parallel thinking as well, in addition to being able to think serially (that is, like a computer), but much, much faster on both counts.

So it can think of many things at once: it could work on engineering issues while thinking about woodworking, physics, and space travel, all without losing concentration. And it can remember them all perfectly and non-fuzzily, unlike human memory, which is both fuzzy and changes with time.

It could cross-reference human research, and in a few minutes it would know everything we've written down on, say, medicine. Imagine a superdoctor with expert-level knowledge in all the areas of medicine that currently require thousands upon thousands of individual human specialists. There would be a synergy effect: the AI could connect dots about medicine that no human doctor ever could, because a human has to specialize in a single area. These effects are likely to bring about massive advancements. Now apply this to physics and chemistry as well, and imagine physics, chemistry, and medicine merging into one big lump of knowledge that the AI knows better than all the Einsteins and Feynmans that ever lived. Imagine what could be gleaned from knowing all that intuitively, instead of being stuck, metaphorically, focusing on a single word in an entire book like a human is.

If we have a friendly AI, we're likely to see medical and technological breakthroughs that might have taken us hundreds or thousands of years (or never happened at all) occur almost instantly. Or we might be wiped out. We don't really know: against something that would be, relatively speaking, a god, we cannot predict or control its behaviour with any certainty. I don't mean to injure the insects I tread on without knowing, but I do.

This all sounds like science fiction. Well, it is just fiction at the moment, but who knows. I spend too much time listening to neurotic AI researchers...

David Ragan
08-13-2015, 7:07 AM
The title kind of explains it for me. In the first Terminator movie, Reese mentions that once the AI machinery became self-aware, it realized humans were a threat to the existence of the machines.

Thinking about this can be a real mind-boggler. Will the human race become extinct at the hands of machines we build, or will we just poison ourselves in a quest for riches?

One way to extinction reminds me of an old bumper sticker: "The influence of our affluence can be seen in our effluence."

jtk

Yes, self-awareness has been said to be what separates humans from the rest of the Animal Kingdom. I often ponder our cat while she is in my lap, kneading her paws.....is she self-aware? No, she is likely just doing whatever feels instinctually correct at the moment, her entire life. Like a super-adapted bunch of neurons, evolved over millennia, that just run along well-worn circuitry.

This is another interesting point in this discussion. There is a school of folks who, in regard to brain neurochemistry, say that ALL of our emotions, individual thoughts, etc., everything, ultimately come down to chemical reactions in the brain. This school (in my limited understanding) is called strict reductionism.


Thanks so much, Dennis. So it is not just fantasy.

I had not considered the scariest parts you mention above.

One thing I realized (again) while researching the conference in Buenos Aires is that a huge amount of research is conducted outside the US, where our Congress, ethics folks, etc., have no say in what is developed, what safeguards exist, or who has access to the knowledge.

We can only hope that the profit motive keeps the genie in the bottle (or the lab), or keeps Pandora's box from being opened anywhere except in a super-secure setting.....

Lee Ludden
08-13-2015, 4:54 PM
Here are a couple of blog posts on that very subject. They are definitely non-technical, but still full of information.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

David Ragan
08-16-2015, 9:18 AM
Here are a couple of blog posts on that very subject. They are definitely non-technical, but still full of information.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html


Yes, Waitbutwhy is a great site. Does that guy have a real job? I don't see how one person could do/know all that.

I found especially entertaining the post about why so many millennials are unhappy, and also the one on the categories of friends that we all have. Very funny and thought-provoking.

However, I am not able to completely finish the articles I have started. TMI. I did finish the 'Fermi Paradox' one.