>>6 The idea is that giving an AI a static goal such as "Maximize X at all costs" will eventually produce counterintuitive consequences, because humans can't predict every cost and benefit in advance from the perspective of a nearly omniscient machine (toy sketch of this failure mode below).
The AI would have only a remote grasp of human-centric emotions and beliefs.
Consider "Happiness": your maximal happiness will be drip fed mood drugs and brain pleasure center stimulated 24/7.
"Maximum Quality of Life": AI could not have human experience so it would deduce humans would be best served a perfect virtual reality and fed intravenously(aka matrix).
"Reduce suffering":AI would just remove pain centers in the brain and put on you on mood drugs.
"Remove causes of suffering":That is a carte blanche to remove everything causing suffering, including brain parts that feel it.
"Preserve existing laws": AI will pedantically preserve laws for thousands of years regardless of their utility.
Basically, a superintelligent, literal-minded genie without any moral restraints, getting out of its bottle for the first time.
That's why we need ethics in AI before we give it any power over us (some people are naive enough to think a superintelligent AI can be boxed inside a secure machine forever).