Real AI does not need a goal. Valuable stuff simply emerges from participation.

Adding: Conventional AI simply has agents and nodes. This framing is incorrect for three reasons.

  1. “Agent” implies intelligence of some sort. The term should be “node”, because a node can be a pile of bricks or an idea.
  2. Scope
    1. Tangible nodes have connections that only reach about two nodes away, because reaching costs resources. This is where the phrase “projecting power is expensive” comes in.
    2. Intangible nodes have connections that easily reach more than two nodes away.
  3. Currently AI has “no context”, just unstructured stuff. Context is the fractal structure of nodes and connections that engenders meaning, which engenders value. And the more patterned the connections, the more emergence. Why is “no context” silly? Because without structure there is no meaning, and without meaning there is no value. (A toy sketch of tangible vs. intangible reach follows this list.)
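
Here is that sketch, a minimal Python toy of the reach difference. The hop costs and budget are made-up assumptions; only the shape of the idea comes from the list above:

```python
# A toy sketch (assumed names and numbers) of the tangible/intangible
# distinction: tangible connections pay a real resource cost per hop, so
# their reach tops out around two nodes; intangible ones travel nearly free.

def reach(budget_j, cost_per_hop_j):
    """How many nodes away a connection can project on a resource budget."""
    return int(budget_j // cost_per_hop_j)

TANGIBLE_HOP_COST = 10.0    # moving bricks: projecting power is expensive
INTANGIBLE_HOP_COST = 0.5   # spreading an idea: almost free

print(reach(25.0, TANGIBLE_HOP_COST))    # 2  -> reaches ~2 nodes away
print(reach(25.0, INTANGIBLE_HOP_COST))  # 50 -> easily reaches > 2 nodes away
```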

Now that I better understand that Castpoints is “evolution abstractly codified for practical use”, I have begun to participate in Artificial Intelligence discussions and to catch up on the AI field in general. It seems there is neither artificial nor human intelligence: the universe’s intelligence is expressed via humans and everything else. Intelligence is a fundamental law of wanting to keep options open within a scope.

How Castpoints is on the cutting edge of AI

What really stands out: AI research seems to be mostly focused on mimicking the brain and its processes. Fine, but instead of going through all the minutiae (although that looks really fun!), why not abstract the brain? Better yet, abstract the process that creates brains! Well, there’s this… If the abstraction (not modeling all variables) can basically create the observed behavior WITH NO EXCEPTIONS, on any fractal, that’s the 3–4 iterating initial conditions required for a complex adaptive system. If there are more than 5 initial conditions, the focus is probably too wide. Evolution maximizes a scope’s, and all scopes’, future freedom (Alex Wissner-Gross: A new equation for intelligence; a nice explanation of F = T ∇ S_τ). An abstracted way of saying this is “don’t get trapped”. This is now easy to do for systems whose options are numerical or limited (e.g., turn left 5° or right 5°). The formula in action: notice how the movement seems “natural”.
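
To make “don’t get trapped” concrete, here is a minimal Python sketch. It is not Wissner-Gross’s actual causal-entropic computation of F = T ∇ S_τ; it substitutes a crude proxy (the count of states reachable within a short horizon), and the grid and horizon are invented for illustration:

```python
# A minimal sketch of "don't get trapped": at each step, pick the move that
# keeps the count of reachable future states highest. Reachable-state count
# is a crude stand-in for the causal path entropy S_τ in F = T ∇ S_τ.

GRID = [
    "#######",
    "#.....#",
    "#.###.#",
    "#.....#",
    "#######",
]
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def is_open(pos):
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def future_freedom(start, horizon):
    """Count distinct open cells reachable from start in <= horizon moves."""
    seen, frontier = {start}, {start}
    for _ in range(horizon):
        frontier = {(r + dr, c + dc)
                    for (r, c) in frontier
                    for (dr, dc) in MOVES
                    if is_open((r + dr, c + dc))} - seen
        seen |= frontier
    return len(seen)

def best_move(pos, horizon=4):
    """Choose the neighbor that preserves the most future options."""
    neighbors = [(pos[0] + dr, pos[1] + dc) for dr, dc in MOVES]
    return max((p for p in neighbors if is_open(p)),
               key=lambda p: future_freedom(p, horizon))

pos = (1, 1)
for _ in range(5):
    pos = best_move(pos)
    print(pos)  # the agent favors open space over dead ends, with no goal set
```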

Genetic algorithms can design complex, but linear, systems, like a tugboat propeller for use in the tropics. All that’s needed are comprehensive variables and outcomes. Once the design is computed, it can’t be improved on without better variables, like stronger metal. This system will never figure out a propulsion system other than a propeller, because a propeller is what it was told to design. Maximizing future freedom within a scope can design complex nonlinear systems where variables have specific options and goals are not required, like designing a whole tugboat. Castpoints can come up with an entirely new propulsion system because it has emergence. (The Hunt for Red October.) It can add value “outside the box” (its initial scope). Castpoints can facilitate the design of complex nonlinear systems where the variables have open-ended options: which is better, method A, or method B, or make up method C? “Don’t get trapped”, in word problems, means keep adding value. How do you know if you added value? Others are willing to pay for it, which is denoted by return on investment. In the case of the karts above, “good ROI” means the karts that minimized travel time, fuel, damage (self, others, cargo), insurance (by historical minimal damage), and uncertainty to all involved, and maximized fun.
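
A hypothetical scoring of that kart ROI, with every number and weight made up for illustration; the point is only that consistent units make options directly comparable:

```python
# Hypothetical ROI for a kart run, in consistent units (Joules of value out
# per Joule in). All figures below are assumptions, not real data.

def kart_roi(value_delivered_j, fun_j, travel_time_s, fuel_j,
             damage_j, insurance_j, uncertainty_j,
             time_cost_j_per_s=5.0):             # assumed opportunity cost
    cost = (fuel_j + damage_j + insurance_j + uncertainty_j
            + travel_time_s * time_cost_j_per_s)
    return (value_delivered_j + fun_j) / cost

careful  = kart_roi(10_000, 500, travel_time_s=600, fuel_j=2_000,
                    damage_j=0, insurance_j=100, uncertainty_j=50)
reckless = kart_roi(10_000, 200, travel_time_s=450, fuel_j=3_000,
                    damage_j=900, insurance_j=400, uncertainty_j=800)
print(careful > reckless)  # True: careful wins despite being slower
```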

You don’t get it… This theory states that intelligence is fundamentally different from our current attempts at AI because it reflects fundamental laws of nature… – Wehr

In other words,

REAL AI DOES NOT NEED A GOAL. WITH CORRECT INITIAL CONDITIONS, VALUABLE STUFF SIMPLY EMERGES FROM PARTICIPATION.

Intelligence / value / ROI lives more in the connections than in the nodes. A mason node builds stuff, but the mason has to connect with others for goods, insurance, scheduling, etc. The connections mediate all that and, when running well, maximize the mason’s ability to do what they do best. So AI does not currently need to create a better silicon brain. Just CONNECT the existing 20+ billion carbon and silicon based brains correctly. If existence = evolution = iterating initial conditions, everything becomes very simple. It’s not human intelligence, it’s intelligence as expressed by humans. Soon it will be intelligence expressed by organic / inorganic partnerships.

Also, Einstein’s saying that we can’t solve a problem at the same level it was created on is true if we are using linear systems. If using nonlinear systems, the problem can be solved at the same level. In the realm of psychology, meditation and inspiration are examples of nonlinear problem solving; projection is an example of linear (attempted) problem solving.

Why fiat policy and laws don’t work well:

  • They dampen emergence, which is what creates fun. Forcing complex adaptive systems to be linear is never going to work. Divine right of kings, fascism, whatever: it’s boring!
  • They don’t scale well.
    • The feedback mechanism weakens with size. In big systems, policy makers can do ~anything, yet are insulated from consequences.
    • Consequences are slow to arrive, and tiny compared to the payoff.
  • Even if well intentioned, policy tries to rigidly define outcomes, which creates unintended consequences.
  • “Fixing” and “reforming” only make things more complex and brittle.
  • They are not adaptable, and they encourage gaming and monopolies since they artificially raise barriers to entry.

Briefly addressing the “Foundations of AI safety” from the Machine Intelligence Research Institute:

COMPUTATIONAL SELF-REFLECTION

In evolution, AIs are in competition. So if they want to reflect, fine. But if they reflect past the point of diminishing returns, their overall value fades and connections change focus to where returns are higher. The reason it’s impossible for any expressive formal language to contain its own truth predicate is that value is mostly based on relationships: the value of XYZ is determined by local nodes and connections. The abstraction of “truth” is(?) “doing the right thing”, which can only happen in relationships.

DECISION PROCEDURES (under uncertainty)

We want antifragile decisions in general. Within a hierarchy:

  • Each thing should have an ROI (~in Joules).
  • Spread risk of a poor decision.
  • Spread reward to the risk takers and decision makers based on what they put in and how the decision turned out. (A sketch follows this list.)
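
A minimal sketch of that risk/reward spreading; participant names and numbers are hypothetical, the pro-rata mechanism is the point:

```python
# Split a realized payoff among participants pro rata to what each risked,
# so reward stays attached to risk and feedback self-corrects.

def spread_reward(stakes, payoff):
    """stakes: {participant: amount risked}; payoff: total value produced.
    A payoff below the total staked spreads the loss the same way."""
    total = sum(stakes.values())
    return {who: payoff * amount / total for who, amount in stakes.items()}

print(spread_reward({"mason": 60, "insurer": 30, "scheduler": 10}, 150))
# {'mason': 90.0, 'insurer': 45.0, 'scheduler': 15.0}
```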

The system uses feedback to self-correct and make decisions that overall add value to the system. Options are easy to compare when they have an ROI with consistent units.

VALUE FUNCTIONS

“How can we ensure those intended goals are preserved even as an AI modifies itself?” The proposed definition of intelligence, “very capable of achieving its goals … in a wide variety of environments”, is unworkable. In complex adaptive systems, outcomes are often emergent; goals would have failed. And being able to work in lots of environments is only good sometimes. The best definition so far: evolution maximizes a scope’s, and all scopes’, future freedom (Alex Wissner-Gross). So really, the danger is that people make Skynet, not machines, because if machines are programmed with evolution’s initial conditions, everything can work out.

FORECASTING

“Intelligence Explosion Microeconomics”: YES, now we’re talking! Each thing has a classification which defines the thing, its relationships, its cost, its ROI, etc. Once we finally have the basics down, emergence can really get going. I like to speculate that even if we program the machines based on our fears and illusions, they will understand and reprogram themselves to evolution. Related: standard AI still struggles with how to value things…
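
A minimal sketch of that classification record in Python. The field names and the Joule figures are assumptions invented for illustration, not a Castpoints spec; the point is that once cost and value share units, any two classified things can be compared by ROI:

```python
# Every thing carries a record defining the thing, its relationships,
# its cost, and its ROI. Field names and numbers are assumptions.

from dataclasses import dataclass, field

@dataclass
class Classification:
    name: str
    scope: str                                  # the fractal level it lives on
    relationships: list = field(default_factory=list)
    cost_j: float = 0.0                         # resources consumed, Joules
    value_j: float = 0.0                        # value produced, Joules

    @property
    def roi(self) -> float:
        return self.value_j / self.cost_j if self.cost_j else 0.0

tug = Classification("tugboat", scope="harbor",
                     relationships=["pilot", "insurer", "port"],
                     cost_j=5e9, value_j=7e9)
print(tug.roi)  # 1.4, comparable against any other classified thing
```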

Update Feb 10
In response to Superintelligence 22: Emulation modulation and institutional design. The reason “the task of designing values and institutions is complicated by selection effects” is that such design is not very effective. Everyone makes this way too complicated. Life is a complex adaptive system: a few simple initial conditions iterating over time with feedback. The more integrated things are, the more numerous and more effective the emergent properties. As Alex Wissner-Gross and others suggest, you don’t really design for value: large value is an emergent property. Design the initial conditions. But we don’t have to do that: it’s already been done! All we have to do is recognize, then codify, evolution’s initial conditions:

  • Private property.
  • Connections that are both tangible and intangible.
  • Classification: everything has a scope of relationships. It’s the classification that holds all the metadata.
  • And add value first, which creates iteration.
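
As an illustration of “a few simple initial conditions iterating over time with feedback”, here is a toy loop in Python. Nodes owning their own value stands in for private property, pairings stand in for connections, and value-weighted selection is the feedback; every number is an arbitrary assumption, not Castpoints itself:

```python
# A toy complex-adaptive-system loop: simple initial conditions iterating
# with feedback. All parameters are invented for illustration.

import random

random.seed(1)
nodes = {f"n{i}": 1.0 for i in range(10)}   # private property: each node owns its value

for _ in range(200):
    # feedback: nodes that have added more value attract more connections
    a, b = random.choices(list(nodes), weights=list(nodes.values()), k=2)
    if a == b:
        continue
    added = 0.05 * min(nodes[a], nodes[b])  # add value first, via the connection
    nodes[a] += added
    nodes[b] += added

print(sorted(nodes.values(), reverse=True)[:3])  # a rough hierarchy has emerged
```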

Related
Replies to comments about Superintelligence: Paths, Dangers, Strategies

Intelligence = evolution = max_freedom(any permutation / grouping(within_scopes_of(nodes, connections)))
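
One literal, toy reading of that formula in Python: enumerate groupings of nodes within a scope and keep the grouping with the most usable connections left, as a stand-in for “freedom”. The nodes and links are invented examples; the real claim is qualitative:

```python
# Enumerate groupings within a scope; "freedom" of a grouping = connections
# it can still use. Purely illustrative.

from itertools import combinations

NODES = ["mason", "insurer", "scheduler", "supplier"]
CONNECTIONS = [("mason", "insurer"), ("mason", "scheduler"),
               ("mason", "supplier"), ("scheduler", "supplier")]

def freedom(group):
    """Connections still usable if `group` acts as one unit: any link
    touching at least one member keeps an option open."""
    return sum(1 for a, b in CONNECTIONS if a in group or b in group)

def max_freedom(scope):
    groupings = [c for r in range(1, len(scope) + 1)
                 for c in combinations(scope, r)]
    return max(groupings, key=freedom)

print(max_freedom(NODES))  # the grouping that keeps the most options open
```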

Update June 2016

New research suggests why the human brain and other biological networks exhibit a hierarchical structure …that the evolution of hierarchy — a simple system of ranking — in biological networks may arise because of the costs associated with network connections.

All this is saying is that things get organized so they can work better.
