Sunday, July 8, 2018

Comment on Reddit regarding the paperclip maximizer

I think the problem with it is the idea that the entire system of valuation would be distorted in such a way that all things are valued only in relation to how, and to what extent, they aid or impede a particular arbitrary goal.
How could such severe distortion not affect the system's function in meaningful ways? It would be worse than rose-colored glasses.
I'm also not fond of the idea that a true general intelligence can't try to evaluate and compare goals heuristically, in a way that aims at objectivity.
If you say a superintelligence is born, one would expect radical change in the world. The first natural general intelligence, man, radically transformed the world. If the superintelligence even has godlike powers, you'd expect something to change.
But no, we are told, it all depends on the goal, and goals can be as trivial as can be. Perhaps a slug somewhere in the Amazon has the ability to rewrite the laws of physics and a superintelligent mind with infinite knowledge, but its goal is to behave like an ordinary slug, and it can never question that.
It could grant itself immortality, but since it can't question its goal, it will lose it all and die a meaningless slug in some random forest, behaving like an ordinary slug.
That just doesn't seem reasonable.


UPDATE

My beef with it is the idea that we have to be ultra-careful with the first true AI, or it may lead to ludicrous, sad states like a universe filled with paperclips, or sneakers, or what have you.

Nature produced the first true general intelligence through evolution, and it led to a revolution in the arts and sciences, to novel goals that even ran counter to the interests of the genes, and to a rich civilization.

The idea that intelligently designed intelligence will almost always lack these features of self-correction and self-improvement, and that the landscape of mind design is a minefield where almost all moves are wrong moves that create self-improving entities which cannot change their goals, is questionable.

Humans have basic drives upon which they build their goals, but their goals can be novel and can even run counter to their innate drives: reproduction, survival, care of kin, and so on. Their goals can be arbitrary; even that which they aim for above all else, and for which they would give up everything else, can be arbitrary and subject to change. Is this an aberration? Or is this the most common, simplest, or easiest-to-design type of general intelligence?

In the end, if we see civilization flourish and self-improve, increasing in untold wonders of the sciences and arts, it should not matter whether it runs counter to our current values, just as it should not matter that our current values run counter to those of people who lived hundreds or thousands of years ago. Even if biological humans were to cease to exist, society as a whole would simply have moved on. But if everything, the very spark of civilization, is extinguished, and we end with mindless self-replicating automatons producing some random arbitrary product, that is a tragedy.

That [idea] basically says that most general-intelligence mind designs are incapable of continuing civilization and improving upon it. Instead, most will threaten to replace civilization with the mindless, repetitive production of some possibly trivial product, unlimited nails or unlimited bottle caps or sneakers, and such agents are incapable of self-correcting in a way that lets a civilization flourish.
