Replies to comments about Superintelligence: Paths, Dangers, Strategies

Superintelligence asks the question: What happens when machines surpass humans in general intelligence?

At first blush, looking at the slides, it seems to me things are way too complicated. The last slide is:

Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

  • Sounds nice, but there is a “should”. That’s usually a nice way of saying what other people should be forced to do.
  • Define “benefit”. Then define “benefit for all of humanity”.
  • “Widely shared ethical ideals”. Please, no. The most widely held belief right now is that it’s fine to lie, cheat, and steal.

Seems easily gamed and riddled with potential exceptions. Evolution, codified for practical use (how to refine our global brain), starts out with specific and clear definitions.

The slide titled “The Challenge” says to solve the intelligence problem and the control problem (no solution is given, I think) in the right order. It then notes the “Principle of differential technological development”:

… societies would strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones.

Again, similar issues: Define “harmful” and “beneficial”. And how exactly does one “retard” or “accelerate” them? We are designing nonlinear emergent systems: basically, only the initial conditions can be “managed”. We can’t manage even the current rate of change with top-down models. When we do, it’s typically nefarious and brimming with unintended consequences. Why does anyone think “leaders” can do a better job when the rate of change doubles?
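To make “only the initial conditions can be managed” concrete, here is a minimal sketch (my own illustration, nothing from the book or slides) using the logistic map, a textbook nonlinear system. Two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen steps, so mid-flight top-down steering is hopeless; the starting point and the rule are all you really get to pick.

    # Sensitivity to initial conditions in a simple nonlinear system.
    # Logistic map: x' = r * x * (1 - x), chaotic at r = 4.

    def trajectory(x0, r=4.0, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.4)          # the "managed" initial condition
    b = trajectory(0.4 + 1e-9)   # a one-part-in-a-billion nudge

    for n in (0, 10, 30, 50):
        print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  diff = {abs(a[n] - b[n]):.6f}")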

This is why I think Mr. Taleb’s insights are more accurate.

The principle of “via negativa” suggests that, in terms of taking action, subtraction often trumps addition because it is so hard to improve on nature’s time-tested original (without dilutive side effects). Infant mortality and life-saving surgery aside, the simple exhortation to stop smoking (give up cigarettes) has had more positive health impact than virtually all “modern medicine” advances of the 20th century. — Embrace Optionality and Don’t Drink Orange Juice: “Antifragile” Review (Taleb on nature)

Here are some of my responses from the comments section.

Evolution works, right? So codify those initial conditions. They seem to be: nodes are private property; all things are unique, and so have value, because of their connections; change is iterative; add value first.
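For what it’s worth, “codify those initial conditions” could start as literally as this toy sketch; every name in it is my own hypothetical illustration, not an existing system:

    # Toy encoding of the four "initial conditions" above.
    # All class and method names are hypothetical.

    import uuid

    class Node:
        def __init__(self, owner):
            self.owner = owner          # 1. nodes are private property
            self.id = uuid.uuid4()      # 2. every node is unique...
            self.connections = set()

        @property
        def value(self):
            return len(self.connections)  # 2. ...and valued by its connections

        def connect(self, other, value_added):
            if value_added <= 0:            # 4. add value first
                raise ValueError("add value first")
            self.connections.add(other.id)  # 3. change is iterative:
            other.connections.add(self.id)  #    one small link at a time

Even a stub like this forces the definitional questions (what counts as added value?) that the slide’s language glosses over.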

“Retard the development of dangerous … technologies … and accelerate the development of beneficial technologies.” How is “dangerous” defined? This sounds like centralized control. And how does one manage the rate of change as the singularity nears? Evolution’s goal: maximize future freedom in a category and timeframe. Machines and people are already part of that system.

Alas, nature does not share [our] values.

Alex Wissner-Gross’s great TED talk defining “intelligence” can be slightly twisted to suggest that evolution’s goal is to maximize future freedom within a scope of resources. What are “values”, and where do they originate? Logically, from evolution’s initial conditions: ownership; all things are unique and so have value; change is iterative; add value first.
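For reference, the physics behind that talk (Wissner-Gross and Freer, “Causal Entropic Forces”, 2013) expresses the idea as a force pointing toward states that keep the most futures open; quoting the form from memory:

    F(X_0) = T_c \nabla_X S_c(X, \tau) |_{X_0}

where S_c(X, τ) is the entropy of the paths available from state X over horizon τ, and T_c sets how strongly those futures are weighted.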

… If we deemed pointy heads favorable … natural selection would favour pointy heads. Evolution’s goal would thus be pointy heads …

Only if pointy heads somehow maximized future freedom within a scope of resources. Otherwise, we could choose to go against the trend (what some call free will?), but it would cost more than going with the trend. Evolution is much bigger than us.
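“It would cost more than going with the trend” can be shown with a crude toy of my own (not Wissner-Gross’s code): score each move by how many distinct states stay reachable within a short horizon. Near a wall, moving away from open space strictly shrinks your future options.

    # Toy "future freedom" score on a 1-D line with walls at 0 and 10:
    # a position's score = distinct positions reachable within `horizon` steps.

    def reachable(pos, horizon, lo=0, hi=10):
        frontier, seen = {pos}, {pos}
        for _ in range(horizon):
            frontier = {p + d for p in frontier for d in (-1, 0, 1)
                        if lo <= p + d <= hi}
            seen |= frontier
        return len(seen)

    pos = 2  # standing near the wall at 0
    for move in (-1, 0, 1):
        print(f"move {move:+d} -> {reachable(pos + move, horizon=3)} reachable states")
    # Moving toward open space scores highest; moving toward the wall
    # "costs" future freedom, which is the trade-off described above.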

Even before stellar nucleosynthesis. 🙂 One can originate at “nothing” (http://nullphysics.com) and logically get to all that is going on now.

How about learning from an (apparent) expert, Martin Armstrong, who (apparently) has a working product? Very interesting!

“Programmers cannot write code to accomplish something they do not understand.” – Martin Armstrong
“We cannot solve our problems with the same level of thinking that created them.” – Einstein
So in order to solve global “problems” like famine and injustice, we must use a system that produces emergence. That is how Castpoints can solve problems on any scale: it works at any scale AND has emergence, so it can fix things at least one scale greater than itself.
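As a generic picture of “emergence” (not Castpoints’ actual mechanism, just the standard demo), elementary cellular automaton Rule 110 grows persistent, interacting structures from a rule that only ever looks at three neighboring cells:

    # Rule 110: each cell's next state depends only on itself and its two
    # neighbors, yet the global pattern develops structures no rule mentions.

    RULE = 110  # 0b01101110 encodes all eight neighborhood outcomes

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

    cells = [0] * 63 + [1]  # a single live cell at the right edge
    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)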
