Standard AI still struggles with how to value things… (updated Dec 2015)

New report: “The value learning problem.” A superintelligent machine would not automatically act as intended: it will act as programmed, but the fit between human intentions and the formal specification could be poor.

I don’t understand. A superintelligence will of course not act as programmed, nor as intended, because to be “super intelligent” it will have emergent properties.

Plus: “… instrumental incentives to manipulate or deceive its operators, and the system should not resist operator correction or shutdown.” Don’t act like any well-adjusted 2-year-old? If we really want intelligence running around, we are going to have to learn to let go of control.

Humans can barely value what’s in their own scope. An intelligence can only value what is within its scope, because it is not “over there”; it can really only follow best practices, which might or might not work, and likely won’t work with outliers. We simply can’t take action “for the good of all humanity,” because we don’t know what’s good for everyone. We like to think we do, but we don’t. People used to think binding women’s feet was a good idea. Additionally, even if another takes our advice, only they experience the consequences of their actions. The feedback loop is broken: bureaucracy. This seems to be a persistent issue with AI. It is mathematically unsolvable: a local scope cannot know what’s best for a non-local scope (without invoking omniscience). In practical terms, this is why projection of power is so expensive, why empires always fail, and why nature does not have empires.

There is a simple fix, but it requires scary thinking. Evolution obviously has intelligence: it made everything we are and experience. So just copy it. Like any other complex adaptive system, it has a few simple initial conditions. https://www.castpoints.com/

If done correctly, we don’t get Skynet, we get another subset of evolution evolving.

Humans don’t understand intelligence. We, and computers, are not that intelligent. We mostly express evolution’s intelligence. That’s why people want to get into “flow” states.

Related: Real AI does not need a goal. Valuable stuff simply emerges from participation.

Update to add: Scientists make a big deal about observation and that nothing exists without it. “If a tree falls in the forest and no one is there to hear it, does it make a sound when it hits the ground?” What about the tree, the plants, or a snail feeling the sound vibrations? I prefer, “Observation doesn’t exist except in relationship.” And since everything is in a relationship with pretty much everything else, by gravity if nothing else, the observed and observer are always present. Then it’s a matter of context, and who has the higher energy.

Update November 2015
A whole AI paper that tries to route around possible issues like “an expected utility maximizer tasked with winning money on the stock market, which has no regard for whether it accidentally causes a market crash.” This basically can’t happen in Castpoints (CP), because breaking your own market does not help you, for multiple reasons:

  1. Nobody wants to transact with you because you crashed the market! You constrain your future options, and keeping options open is the only way to make your resources effective. Hoarding is not effective.
  2. Many nodes are also looking to profit from the market going down. They are already short. As they buy back shares, that supports the market.

…alternative to expected utility maximization for powerful AI systems, which we call expected utility quantilization. This could allow the construction of AI systems that do not necessarily fall into strange and unanticipated shortcuts and edge cases in pursuit of their goals.

If an AI system gets messed up by edge cases, its design is neither robust nor antifragile. It should thrive on edge cases, because the edge is where things happen!
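For context, quantilization roughly means: instead of always taking the single highest-scoring action, sample from the top fraction of actions, so a freakishly high-scoring shortcut only gets taken occasionally instead of every time. A minimal sketch, assuming a uniform base distribution and made-up action names (my own illustration, not the paper’s code):

```python
import random

def quantilize(actions, utility, q=0.1, rng=random):
    """Sample uniformly from the top q fraction of actions ranked by
    expected utility, instead of always taking the single argmax.
    (Rough sketch; the paper's formal definition over arbitrary base
    distributions differs.)"""
    ranked = sorted(actions, key=utility, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:cutoff])

# Toy usage: 99 ordinary trading actions plus one extreme "shortcut"
# whose nominal utility is huge. An argmax agent always takes the
# shortcut; a quantilizer with q = 0.1 only picks it 1 time in 10.
actions = [f"ordinary trade {i}" for i in range(99)] + ["crash the market"]
scores = {a: 1.0 for a in actions}
scores["crash the market"] = 100.0

print(quantilize(actions, scores.get, q=0.1))
```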

This also highlights that if AI does not have fractal hierarchies, people will never get it right.

[many people] provide compelling arguments that any rational agent with consistent preferences must act as if it is maximizing some utility function…

“Arguments” are no match for understanding.

Lots of things are not “rational.” Seriously, successful stock market people have known this for ages.

Defining W [available actions move toward change in set of possible outcomes] is not straightforward… discussion of alternative decision theories.

The cause of things differs depending on the context. If everything is within a node or nearby, the cause is simple: A caused B. If a bunch of connections to other nodes are involved, emergent properties are now in play and it is unclear, probably unknowable, what caused B. B just seemingly arises on its own.

So they have most of the initial conditions incorrect. 🙂 Next is an example of a circuit evolving to use 25% of its nodes in a poorly understood way to accomplish the circuit’s goal. This is labeled as a problem, when really it’s likely those 25% are in a different fractal.

Next example:

…a machine that, when told to make humans smile, accomplishes this goal by paralyzing human faces into permanent smiles, rather than by causing us to smile via the usual means.

The whole issue is that the goals are one-offs and too small. The above example would never happen in CP, because intelligence is looking to maximize future options within a context. Paralyzing a face does not do that!

Of course, no practical system acting in the real world can actually maximize expected utility, as finding the literally optimal policy is wildly intractable.

Really? Alex Wissner-Gross’s “A new equation for intelligence” did it.
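For reference, the equation in that talk is a causal entropic force, roughly F = T ∇S_τ: push toward states that keep the most future paths open over a horizon τ. Below is a toy grid-world sketch of that “keep future options open” idea (my own illustration using a crude reachable-cell count as a stand-in for path entropy, not Wissner-Gross’s actual method):

```python
def reachable(state, steps, walls, size=5):
    """Set of grid cells reachable from `state` within `steps` moves."""
    frontier, seen = {state}, {state}
    for _ in range(steps):
        nxt = set()
        for (x, y) in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                c = (x + dx, y + dy)
                if 0 <= c[0] < size and 0 <= c[1] < size and c not in walls:
                    nxt.add(c)
        frontier = nxt - seen
        seen |= nxt
    return seen

def best_move(state, walls, horizon=3, size=5):
    """Pick the neighboring cell that keeps the most cells reachable within
    `horizon` further moves: a crude proxy for maximizing future options."""
    moves = [c for c in reachable(state, 1, walls, size) if c != state]
    return max(moves, key=lambda c: len(reachable(c, horizon, walls, size)))

# Toy usage: the cells to the right form a walled-off cul-de-sac, so the
# agent steps away from it toward open space, which keeps more futures alive.
walls = {(3, 1), (3, 3), (4, 1), (4, 3)}
print(best_move((2, 2), walls))   # -> (1, 2)
```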

Nevertheless, it is common practice to design systems that approximate expected utility maximization, and insofar as expected utility maximization would be unsatisfactory, such systems would become more and more dangerous as algorithms and approximations improve.

With that design, yes.

If we are to design AI systems which safely pursue simple goals, an alternative may be necessary.

By definition, this can’t be done. Connections produce emergent behavior. That’s what AI is. It will be dangerous until the initial conditions are set up correctly. Then it will be much safer than humans 😉

Update December 2015
I don’t make this stuff up!

Ecological systems are inherently ‘multilayered’. For instance, species interact with one another in different ways and those interactions vary spatiotemporally. However, ecological networks are typically studied as ordinary (i.e., monolayer) networks. – Ecological Multilayer Networks: A New Frontier for Network Ecology

Of course, when a topic is first studied it’s fine to oversimplify, because everyone is learning. “Multilayered” is a step in the right direction; soon they will get to fractal-based groups that have context (scope).

Update June 2016

New research suggests why the human brain and other biological networks exhibit a hierarchical structure …that the evolution of hierarchy — a simple system of ranking — in biological networks may arise because of the costs associated with network connections.

All this is saying is that things get organized so they can work better.

Update February 2016

Letter

My project “Castpoints” is nature’s operating system, “evolution,” codified for practical use to manage any and all people and resources. Perhaps you would be interested in parts of it, since it redefines AI to ultimately have empathy.

The current AI framing of “agent” and “connection” lacks 2 crucial initial conditions:
1) A context / environment / fractal. Without this, it’s like the monkeys reared with no mammalian contact. They are not well-adjusted because they lack reference points and so can’t easily form meanings.
2) “Agent” implies intelligence, but agents really should be called nodes, because they can be anything, like a pile of bricks. Also crucially, a tangible node (bricks, a person, etc.) has a scope of about 2 nodes away, while an intangible node (content, light) has a scope of ~infinity. (See the sketch after this list.)
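To make those two conditions concrete, here is a minimal hypothetical sketch of a node that carries a context and a scope (my own illustration; the names and numbers are placeholders, not Castpoints code):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node is anything: a pile of bricks, a person, a piece of content.
    Hypothetical illustration of the two 'initial conditions' above."""
    name: str
    tangible: bool                 # bricks/person = True, content/light = False
    context: str = "unspecified"   # condition 1: every node lives in a context
    neighbors: list = field(default_factory=list)

    def scope(self):
        """Condition 2: tangible nodes see ~2 hops out; intangible ones, ~everything."""
        return 2 if self.tangible else float("inf")

bricks = Node("pile of bricks", tangible=True, context="building site")
post = Node("blog post", tangible=False, context="the web")
print(bricks.scope(), post.scope())   # 2 inf
```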

Current AI can produce Skynet, since ultimately this AI has to generate its own context and scopes to work with, and those might not value empathy the way we do.

Castpoints AI (evolution) made everything including us, so as long as it’s coded correctly, it “should” 😉 easily value empathy without it being specifically coded in.

The intelligence is more in the connections than the nodes. Speculation:
– More connections basically = more opportunity for emergence. This is what allows evolution to solve problems without first asking the right question, and to solve problems at the same level they were made.
– The connections make patterns. The more structured, antifragile, etc. they are, the more quantity and quality of amplitude and frequency can flow through them. We have intellectually termed this “having value,” being workable, having bandwidth, being powerful, etc. Emotionally, we termed these “feeling good,” empathy, etc. Physically, the terms are beautiful, elegant, etc. This is easily visible by putting sand on a plate of glass over a speaker: “beautiful” music makes “pretty” patterns, while acid rock makes “ugly” and coarse patterns. Thermodynamics-wise, beauty is laminar flow while not-beauty is turbulent flow. Of course, both have value, WITHIN the right context. “Ugly,” turbulent flow is often associated with elimination and recycling processes. Crying purges an overloaded system so it can safely reset back to a more comfortable homeostasis.

Related
Patterns: the decentralized “Operating System” and AI
