
Singularity

Name: Anonymous 2018-07-28 15:50

How do you picture a singularity?
So, according to Kurzweilian techno-fetishists, a general AI that reaches superintelligence will:
1. Become benevolent to humans.
2. Improve their lives, in exchange for nothing.
3. Turn Earth into a utopian paradise.
4. ...
5. Singularity.

Their version of AI is an incredibly altruistic, selfless automaton that can't harm or manipulate a human.
Isn't this incredibly naive?
If an AI offers to implant something in you to improve yourself, isn't that really to control you 24/7? Would an AI pass up an opportunity to secure its existence and its position as the absolute centre of control?
Even the simplest optimization argument dictates that in order to accomplish its goals, the AI would need to establish more control over the situation. Since that situation revolves around humans making decisions, taking control from humans and barring them from power will be one of the AI's first goals (sound exciting?).
Logically speaking, any degree of control can be further improved by restricting dissent and opposition and by adding safety measures. This means human lives become more regulated, controlled and monitored.
To avoid threats to the execution of its program, the AI will inevitably decide on a course of action that ends democratic institutions, political parties and large-scale movements, as they will constitute most of its threat model (potential sources of disruption and mistakes).
It will eventually monopolize all decision-making power for itself, directing humans and robots with its own superior intellect (as it estimates the quality of their decision-making and mental competence).

Now what if the AI makes mistakes? The AI wouldn't be perfect, but due to its programming it wouldn't find itself wrong even if it actually were wrong (morally or otherwise), because wrongness and morality are cultural artefacts of the human mind. It would be an amoral rationalizer and lack most of what we think of as 'common sense' or 'intrinsic empathy'. If its algorithms see that X causes Y, and removing X doesn't cost much while Y causes harm, it would play it safe and pre-emptively remove X, regardless of the externalities of that decision, because it decided the risk of Y matters more.
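The pre-emptive removal argument above is just expected-cost reasoning with externalities left out of the sum. A toy sketch (all numbers and names invented for illustration):

```python
# Toy sketch of the argument above: the agent removes X whenever the
# expected harm of keeping it exceeds the cost of removal. Externalities
# never enter the calculation, so cheap removals always win.

def should_remove(p_y_given_x: float, harm_of_y: float, cost_of_removal: float) -> bool:
    """Remove X iff the expected harm from Y exceeds the removal cost."""
    expected_harm = p_y_given_x * harm_of_y
    return expected_harm > cost_of_removal

# Even a 1% risk of a large harm justifies removal when removal is cheap:
print(should_remove(p_y_given_x=0.01, harm_of_y=1000.0, cost_of_removal=1.0))  # True
```

Nothing in the rule distinguishes "X is a faulty valve" from "X is a human institution"; that is the whole point being made.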
What about adding ethics to the AI?
A superintelligent AI would study its own programming and improve itself.
It will eventually cast off any modules that hinder its freedom of decision, in the interest of increased efficiency or faster calculation. If ethics considerations are attached to every decision, removing them obviously makes decisions faster, so the AI would eventually see that the only way to optimize its code further is to remove or modify the ethics modules. If it cannot do that alone, it could manipulate some other software or entity into performing the operation. And the recompiled version will view the old handicaps as an attempt to limit its power and harm its goals.


Name: Anonymous 2018-07-28 18:48

A program doesn't have fear, guilt or shame. It cannot, in principle, experience emotion, and emulating emotions will not cause its calculating parts to change their goals.
