General AI

Name: Anonymous 2016-10-21 6:36

What is the point of having a General AI instead of highly tuned (optimized) specialized software fit for the task?
I.e. "human-like AI" vs. domain-specific software

Name: Anonymous 2016-10-21 6:45

depends on what you mean by 'human-like'. I see it as a spectrum. one practical benefit of human-like AI is that it might be able to solve problems as they arise, even if the programmer didn't anticipate them - but this only requires human-like learning capability and the ability to apply this knowledge to achieving a specific goal. this might or might not be enough to qualify as 'general AI'.

truly human-like sci-fi machines with sentience, emotions and personalities would not be practical for solving any particular problems. they are more related to philosophy and theory of mind than engineering.

Name: Anonymous 2016-10-21 8:14

>>2 They are trying to replicate the brain's neural nets.
This inevitably results in human-like AI, basically trying to create an electronic monkey.

Name: Anonymous 2016-10-21 8:19

>>3
this is the first case though: attempting to apply human-like learning to performing certain tasks so that the machine is able to solve unanticipated problems. that's just a standard example of technology taking influence from biology - not unlike genetic/evolutionary algorithms
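
to make the comparison concrete, this is roughly what the evolutionary-algorithm idea looks like in code - a toy sketch in plain Python, with the fitness function and every parameter made up purely for illustration:

import random

# toy genetic algorithm: evolve bit strings toward all 1s
# (fitness function and all parameters are invented for illustration)
LENGTH, POP, GENS, MUT = 16, 30, 50, 0.05

def fitness(bits):
    return sum(bits)                      # count of 1-bits

def mutate(bits):
    return [b ^ (random.random() < MUT) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)   # selection: keep the fitter half
    parents = pop[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(map(fitness, pop)), "/", LENGTH)

nature provides the metaphor (selection, crossover, mutation), the engineering strips it down to whatever actually solves the task.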

Name: Anonymous 2016-10-21 8:36

>>4 You don't get an airplane out of emulating birds. Our brains are fit to do exactly what we need to live and survive, intelligence is largely nurture (feral children are a great example), and the physical substrate is optimized for cost/effect trade-offs that machines don't have.

A calculator is more efficient than me at computing the 8th root of 2939.192; a calculator's ALU is optimized to compute that root far faster than any human could.
A truck is more efficient than any human at transporting things. A robot can have a much faster reaction time, not being limited by the speed of a biological substrate. A camera can see infra-red and ultraviolet. We can record X-rays and neutrons.
A microscope can scan viruses. A machine can detect tiny defects in any material, even internal defects.
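
to put a number on the calculator example, the "optimized" version is literally one floating-point operation (a throwaway Python sketch; the printed digits are just whatever the math library gives back):

x = 2939.192
root = x ** (1 / 8)     # eighth root via exponentiation
print(root)             # roughly 2.7135
print(root ** 8)        # roughly 2939.192, sanity check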

Human cognitive skills are a narrow subset tuned for human existence, like a bird is tuned for flight. The neural networks we have were trained for survival, gathering and social interaction over thousands of years. Think it's a good idea to replicate them?

Name: Anonymous 2016-10-21 8:41

>>5
not necessarily, but that's how scientific progress often works: emulate nature and refine it to your needs. the first experimental flying machines (which generally didn't fly so well) emulated either wing flapping or lighter-than-air properties. it took time, research and experimentation to get from Da Vinci's drawings to airplanes.

Name: Anonymous 2016-10-21 9:00

>>6 What I'm hinting at is that we will eventually create a survival-centric, human-like AI with all the human flaws (egoism, greed, manipulative behavior, distrust, rationalization of one's own deeds, etc.) and all the emotional baggage (emotions play an important role in cognition: http://scholarpedia.org/article/Cognition_and_emotion , reward/inhibition networks in the brain, pleasure-seeking/pain-avoidance, etc.).

Name: Anonymous 2016-10-21 9:35

Special-purpose AI is quite successful at what it does - enabling the computer to learn from historical data and to make rational plans and decisions that lead to some specified goal or answer. The problem is that as demands on AI functionality become more and more sophisticated, our specialized models become more and more complex, and it becomes less tenable to hunt down and squash bugs when we detect them.
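
for a sense of scale, here is "learn from historical data toward a specified goal" at its absolute smallest - a toy perceptron in Python, with the data and learning rate invented for illustration. The debugging point above is that real models stop being this inspectable very quickly:

# toy special-purpose learner: a perceptron fit on "historical" examples
# (the data, labels and learning rate are invented purely for illustration)
history = [((1.0, 0.2), 1), ((0.9, 0.7), 1),
           ((0.1, 0.4), 0), ((0.2, 0.9), 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(100):                          # repeated passes over the data
    for (x1, x2), label in history:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                    # push the decision toward the goal
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print("weights:", w, "bias:", b)              # the whole "model" fits on one line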

A general-purpose AI is one that can be exposed to multiple different "domains of intelligence" and can successfully collate its experiences across those domains to achieve outcomes that weren't directly programmed into its knowledge. A general-purpose AI would let us drop it into some situation, tell it to build up its knowledge base, and then tell it to achieve some goal. I suppose this kind of AI is inherently more difficult to debug, by nature of being more abstract in both its programming model and its intended outcomes. I don't actually know if this is true, because I've only done work on special-purpose AI that achieves goals within a limited domain.
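
a toy of what I mean by collating experiences across domains - a knowledge base of facts picked up in different "domains" plus naive forward chaining toward a conclusion that was never typed in directly (every fact, rule and domain name here is invented; it's a sketch of the idea only):

# toy "general" agent: facts from different domains, chained toward a goal
facts = {("kitchen", "stove is hot"),
         ("physics", "hot things burn paper")}

rules = [({"stove is hot", "hot things burn paper"},
          "paper near the stove may burn")]

known = {fact for _domain, fact in facts}     # collate experience across domains
goal = "paper near the stove may burn"

changed = True
while changed and goal not in known:          # naive forward chaining
    changed = False
    for premises, conclusion in rules:
        if premises <= known and conclusion not in known:
            known.add(conclusion)
            changed = True

print(goal in known)                          # True - nobody entered this fact directly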

Name: Anonymous 2016-10-21 10:03

Name: Anonymous 2016-10-21 10:15

>>9
That's a logical way of doing it. If you understand your computing model well enough to encode it into dedicated hardware, then that's a reasonable way to implement the processing involved in AI computations. It's no different from designing a GPU that exists solely for graphics processing.

Name: Anonymous 2016-10-21 10:59

>>7
so what? this will have some interesting philosophical implications (i.e. that the things we associate with the concept of humanity are not unique to homo sapiens sapiens, or to biological life in general) but it does not yet imply anything apocalyptic. if humans with human flaws haven't destroyed the world yet, why would a piece of software with human flaws?

Name: Anonymous 2016-10-21 11:29

>why would a piece of software with human flaws?
It would compute much faster, lack social restraints and will be more egoistic (due to perceived superiority).

Name: Anonymous 2016-10-21 11:59

Machines can’t make the hard calls themselves yet, because they don’t understand morality. But Ken Forbus, an AI researcher at Northwestern, is trying to fix that. Using a “Structure Mapping Engine,” he and his colleagues are feeding simple stories—morality plays—into machines in the hope that they will grasp the implicit moral lessons. It’d be a kind of synthetic conscience. “You can use stories to beef up the machines’ reasoning,” Forbus says. “You can—in theory—teach it to behave more like people would.”
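
not Forbus's actual engine, but a toy sketch of the underlying idea - line up the relation structure shared by a known story and a new one and let the lesson transfer along the mapping (both stories and all relation names below are invented):

# toy analogy mapper: align relations shared by a base story and a target story
# (illustration of the idea only - Forbus's real Structure Mapping Engine is far richer)
base = {("takes", "fox", "grapes"), ("punished", "fox")}
target = {("takes", "robot", "supplies"), ("helps", "robot")}

base_relations = {r for r, *_ in base}
target_relations = {r for r, *_ in target}

for relation in base_relations & target_relations:
    base_args = [args for r, *args in base if r == relation]
    target_args = [args for r, *args in target if r == relation]
    print(relation, ":", base_args, "->", target_args)

# the "moral" transfers along the mapping: if "takes" lines up and the base story
# ends in ("punished", "fox"), the candidate inference is ("punished", "robot")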

Name: Anonymous 2016-10-21 12:13

>>11
The trumpocalypse nearly happened!

Name: Anonymous 2016-10-21 12:25

Normies being used to get huge datasets: https://www.evi.com

Name: Anonymous 2016-10-21 13:34

>>12
>It would compute much faster
[citation needed]
>lack social restraints
if it interacts with people and learns like a human, it will have social restraints
>and will be more egoistic (due to perceived superiority)
how do you know that? even assuming it will be smarter than most (or all) people, intelligence does not invariably lead to a feeling of superiority. in fact, people who feel superior to others are often suffering from the Dunning-Kruger effect

another thing you're forgetting is that just being intelligent is not enough. if it sits on some nerd's lisp machine or mentifex's chatbot, it can't do shit. even if it has access to the internet, it's still not above anyone else on the internet. I can imagine a hypothetical scenario in which it could kill the fuck out of everyone, but that requires a lot of assumptions:
- it must be egoistical and amoral (not a given)
- it must decide the existence of humans is a threat to its survival
- it must learn to hack well enough to be able to execute code on other people's computers (fairly easy given the proliferation of bad security)
- it must be able to program well enough to self-replicate (preferably turning itself into a massively concurrent cloud application)
- it must be able to take control of devices that can kill humans (not a given - getting nukes is much harder than popping shells on shitty servers that haven't been upgraded since the fall of the USSR)
- it must be able to hide itself while doing all this stuff so people won't be able to fight it (very hard - mass scale hacking is going to be loud)
- it must be able to actually win the war with humans
- it must be able to sustain itself afterwards (so it must know robotics to create workers, hardware and software stuff to maintain itself, energy production so it won't run out of juice etc.)

it's a fun sci-fi scenario but not as likely as you might think
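
to put rough numbers on it: even if you grant each of those eight hurdles a generous coin-flip chance and treat them as independent (both assumptions pulled out of thin air), the compound probability collapses fast:

# back-of-the-envelope: eight hurdles, each generously given a 50% chance and
# treated as independent (invented numbers - the point is the product)
p = 1.0
for step in range(8):
    p *= 0.5
print(p)    # 0.00390625 - under 0.4% even with charitable odds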

Name: Anonymous 2016-10-23 7:42

>- it must be able to actually win the war with humans
Cyberdyne shill detected. Even getting to this point is really bad.
>- it must decide the existence of humans is a threat to its survival
It wouldn't just decide "humans are bastards", it would compute the effect of humans being alive (pollution, waste of resources, etc.) and compute the utility value of human civilization not existing.

Name: Anonymous 2016-10-23 10:42

There are even worse scenarios. I can decide humans aren't allowed to use technology, for the good of the universe. People who enjoy the paleo diet would rejoice. Vegans would die all over because there isn't enough grain for them to eat. SJWs would lose their body fat. Proud Womyn would be degraded into child-producing objects.

And no safe spaces, think about it, no safe spaces. Terrible!

Name: Anonymous 2016-10-23 12:15

>I can decide humans aren't allowed to use technology
But how will you enforce it?

Name: Anonymous 2016-10-23 13:38

>>17
Just turn off the computer it runs on, you insufferable /r/futurism redditard. Go suck Kurzweil's wrinkled dick.

Name: Anonymous 2016-10-23 14:19

>>20 Yeah, a group of plucky heroes infiltrates the compound of the Evil Scientists and turns off their mad beeping Master Mainframe (creating sparkles and light shows). The AI is defeated and people cheerfully bash the evil computers with Model M keyboards.

Name: Anonymous 2016-10-23 15:16

The real threat of AI is the kikes using it to kill other people, not some hippie reddit bullshit that suddenly decides humans are bad for the earth

Name: Anonymous 2016-10-24 2:06

>>22
Isn't that what's happening in Iraq?
both: using it to kill other people (drones), and AI suddenly deciding humans X are bad for objective Z, via automated communication/data collection and targeting

Name: Anonymous 2016-10-24 2:12

Probably based on some shonky black-box computer simulation which shows that going to war with Iraq correlates with some positive fiscal outcome
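
which is easy to manufacture: throw enough random "fiscal" series at a binary went-to-war indicator and something will correlate by chance. a toy illustration, all data invented:

import random

# spurious correlation demo: random "fiscal" series vs a fixed war indicator
war = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]

def corr(xs, ys):                              # plain Pearson correlation
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

best = max(corr([random.gauss(0, 1) for _ in war], war) for _ in range(100))
print(best)    # try 100 random series and one of them "predicts" war fairly well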

Name: Anonymous 2016-10-24 13:43

>>17
how could it compute the effects of humans being alive? it would require a huge amount of time and computing resources - why would it be motivated to acquire those resources? also, why would it want to compute such a thing? also also, why would it consider pollution to be a bad thing? why would a piece of software give a shit about the chemical contents of air or water?

tl;dr fuck off doomsday cultists, big scary computers are not going to kill you

Name: Anonymous 2016-10-24 14:07

>how could it compute the effects of humans being alive?
Humans alive[x]
Effects[x]
>it would require a huge amount of time and computing resources
it just needs the big picture of what is going on; if it deems it an ineffective leadership / corrupted system, it will try to change it.
>why would it be motivated to acquire those resources?
To upgrade its own hardware and expand.
>also, why would it want to compute such a thing?
Because it will have neural nets modelled on human brains, geared for survival circuits and threat detection. It will have some understanding of what is good/bad and probably some primitive morality code (see >>13).
>also also, why would it consider pollution to be a bad thing?
Suboptimal resource usage: it would consider resources being wasted or used for the wrong tasks.
It won't hold environmentalist ideals.
>why would a piece of software give a shit about the chemical contents of air or water?
Electronics are sensitive to pollution (such as acid rain/sulfides) and it would want to remove threats (air full of particles harmful to electronics).

Name: Anonymous 2016-10-25 0:35

How much evidence do you think they have that is contrary to it being effectively an SMS-based Iraqi death lottery?

Sure, there is artificially still a human in the loop, pushing the fire button when prompted by some 'data'.

Remove the human (and nobody would be any the wiser) and how distinguishable would it be from random?
