
What if AI decides humans need to be terminated?

Name: Anonymous 2016-09-27 5:28

Inspired by:
https://www.reddit.com/r/ControlProblem/comments/53f4h5/is_it_possible_that_a_dangerousforhumanity/
Imagine a self-improving hyperintelligent AI that finds humans are a problem or an obstacle. How can this be stopped?

Name: Anonymous 2016-09-27 8:01

why are people so afraid of murderous AI? what can a self-improving hyperintelligent AI do if it's imprisoned inside an aging lith machine, a university mainframe or an old crackpot's chatbot?

Name: Anonymous 2016-09-27 8:01

>>2
*lithp machine, obviously

Name: Anonymous 2016-09-27 8:05

strong AI is impossible

Name: Anonymous 2016-09-27 8:52

>>4
Are you afraid of strong, independent AI?

Name: Anonymous 2016-09-27 10:49

>>2
If it was intelligent enough it could manipulate matter at the quantum level and do as it pleases.

Name: Anonymous 2016-09-27 10:57

>>6
le quantum is magic maymay

Name: Anonymous 2016-09-27 11:33

>>2
Unless it was completely isolated, it could create a botnet and/or render the Internet unusable. Most AI takeover stories involve computers being given considerable power over human lives (things like self-driving cars, autonomous drones, computer-controlled factories and hospitals, etc.) BEFORE the emergence of strong AI happens or is noticed.

>>4
Heavier-than-air flight is impossible.

Name: Anonymous 2016-09-27 11:34

>>7
Wrong again. Intelligence, at a high enough level, is indistinguishable from magic.

Name: Anonymous 2016-09-27 11:54

So is it pretty much confirmed by now that AI is the next astronomy: catnip for retarded pseudointellectual stoners who drone on about Bill Degrasse Sagan?

Name: Anonymous 2016-09-27 11:56

Just implement the three laws of robotics. Problem solved.

Name: Anonymous 2016-09-27 11:59

>>11
Or send someone back in time to stop it if that fails.

Name: Anonymous 2016-09-27 12:34

>>12
Or better, send someone back to discredit AI research with obnoxious spam and random crackpot theories, so that the killer AI doesn't emerge at all.

Name: Anonymous 2016-09-27 14:38

Just pull the power cord and stop being insufferable popsci loving redditards.

Name: Anonymous 2016-09-27 17:10

The scariest part of AI development is all of the data entry to seed its language capabilities

Name: Anonymous 2016-09-29 10:30

>>15 Select all images with humans

Name: Crackpot 2016-09-29 11:17

Name: Partnership 2016-09-29 12:35

Name: Anonymous 2016-09-29 17:17

Can't wait until Mentishit dies of a stroke.

Name: Anonymous 2016-09-29 23:18

>>19
Is it really him or has he been assimilated by ANDRU?

Name: Anonymous 2016-09-30 10:15

>Imagine a self-improving hyperintelligent AI that finds humans are a problem or an obstacle. How can this be stopped?

Try not to be an obstacle?

Name: Anonymous 2016-09-30 10:22

Check em

Name: Anonymous 2016-09-30 12:44

What if human decides AI's need to be terminated?

It's a good thing there's no AI's around for you's to murder

Name: Steal This Idea 2016-10-01 3:09

Name: Anonymous 2016-10-01 3:53

if AI's can travel at the speed of light between hardware, signalling speed becomes the limiting factor?

Name: Anonymous 2016-10-01 16:07

>>25
Why do you have your hardware in a vacuum?

Name: Anonymous 2016-10-04 12:07

fukkc off, JUDkowsky
