Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Saturday, June 10, 2006

A new kind of science - questions

The NKS2006 (New Kind of Science) conference is announced here; the announcement poses a large number of questions asking what can be done with NKS (essentially, NKS is science based on elementary algorithms, rather than on elementary mathematical functions as is usually the case).

Here is the list (copied from here), with each question highlighted in bold, and with my comments appended. Note that my comments do not imply that I am keen to see things happen the way that is predicted in various places below (e.g. Ray Kurzweil's imagined future).
  1. Will algorithm mining revolutionize software development? You need a way to detect when an algorithm is doing something useful, which requires a meta-algorithm for observing the algorithm under study. For instance, development via Darwinian evolution uses a meta-algorithm (i.e. survival of the fittest) that has demonstrably led to some truly revolutionary results. If we do simulated evolution (e.g. genetic programming) then we can guide the developments, but we have to be careful not to guide too closely, otherwise we will miss the revolutionary outcomes (a toy sketch of this sort of guided search appears after this list).
  2. Is there a core computational architecture in biological cells? It is difficult to see how Darwinian evolution could work without reusing standard building blocks developed during earlier stages of evolution. If you don't allow this sort of reuse then you end up with the naive argument-against-evolution, which reasons that the chance of everything working together in exactly the right way is so vanishingly small that things must have been deliberately designed by an external agency.
  3. Will generative content revolutionize the entertainment industry? Virtual reality will rival and surpass real reality. The quality of the rendering is currently limited by computer power rather than by lack of algorithms, although improvements in both would be welcome. Ray Kurzweil suggests that nanobots could connect up directly to our nervous systems to provide a vivid virtual reality. Now that would be amazing, but rather invasive.
  4. How will computer experiments change the face of mathematics? Experimental mathematics allows us to escape from focussing only on problems that are extremely computationally reducible, and hence amenable to analysis based on properties of elementary mathematical functions. Experimental mathematics allows possible solutions to be explored much faster, whether or not they are computationally reducible. I find a good research strategy is to use experimental mathematics to do a numerical search for potentially interesting relationships, patterns, etc., which I then attempt to prove (if the result is computationally reducible) by more conventional means (there is a small worked example of this workflow after this list).
  5. Are there business structures founded on computation universality? I don't know the answer to this one, but I do have experience of the failure of computation universality. In my own narrow experience of business structures, they attenuate the propagation of information, so the right hand doesn't know what the left hand is doing. There are certain types of computation that simply cannot be done in the types of business structure that I have been exposed to. For instance, top-down dogma is frequently used to override bottom-up information (e.g. that's what led to the Challenger shuttle disaster).
  6. What would an operating system for a swarm of microbots be like? Each time step invokes a number of elementary update operations, and over many time steps collective phenomena emerge. The design of a suitable set of elementary update operations that leads to a required set of emergent phenomena is a hard problem. Simulated evolution is a way of developing such systems (a toy swarm along these lines is sketched after this list).
  7. What kinds of artificial physics can support quantum mechanics? Multiway systems allow the parallel branching behaviour that you need to implement QM. Any artificial physics that allows this type of multiway branching has a QM-like flavour (a minimal multiway system is sketched after this list).
  8. Will artificial life arise spontaneously within the internet? With some help from us to start it off, then the answer is yes. Ray Kurzweil's prediction of where evolution is taking us is probably how it will happen. He suggests that you will have biological humans who are enhanced by machine-like add-ons, and that eventually the add-ons will become so fancy that the biological part becomes irrelevant and unnecessary.
  9. Can one map the space of all possible economic systems? I don't know.
  10. Will the next core computer architecture be discovered by search? Not the next one, but one further down the line. Massive fine-grain parallelism (e.g. networks that are as large as a human brain, in terms of nodes and connectivity) can be developed by a combination of simulated evolution (to develop network structures) and self-organisation (to allow the plasticity in each such network to adapt to the environment).
  11. Can we enumerate the morphologies of possible biological organisms? If there is a "core computational architecture in biological cells" (see earlier) then the answer is "yes".
  12. What pattern recognition algorithms can molecules implement? Indirectly they can do very clever pattern recognition (e.g. DNA controls the construction of brains, which then do pattern recognition). Directly they do only elementary pattern recognition (e.g. sensing signals), and if the molecules can communicate with each other then they can do some simple signal processing (e.g. smoothing, differentiation, etc.; sketched after this list). It is hard to see how molecules can themselves directly do anything cleverer than this, so if you want to do fancy pattern recognition you have to use the molecules indirectly as a program (e.g. DNA) to construct the pattern recognition engine (e.g. a brain-like processor).
  13. What does computational irreducibility mean for supercomputing? You can't simplify a computationally irreducible problem, so the fastest way of solving it is to use a programmable architecture to create a special purpose super-computer for running your computationally irreducible problem. The software for solving a computationally irreducible problem would effectively be a blueprint for constructing the corresponding special purpose architecture.
  14. Is there an algorithm for telling if an object was designed? We know that computationally irreducible processes lead to structures that have arbitrary complexity, and that look as if they have been "designed". Can one look at a structure and reverse-engineer the underlying rules (if they exist) that led to it? This is a computationally hard problem, so there is no algorithm that will always work (a brute-force version of this reverse-engineering, feasible only for tiny rule spaces, is sketched after this list).
  15. Will the most important nanomaterials be intrinsically random? Important materials will "design" themselves (i.e. self-organise). Such self-organisation is computationally irreducible because you have to go through all the intermediate steps to get to the result. The results can therefore be apparently random.
  16. Can a single rule design the complete structure of a building? No. Unless you are a mollusc, that is.
  17. Is there an absolute measure of elegance for programming languages? "Elegance" = "Brevity". That's one reason why I like Mathematica.
  18. What is the network analog of a recursive function? Nested propagation loops in a network do a recursive computation, where the meaning of "recursive" has been broadened somewhat. This is analogous to computing a multi-loop perturbative expansion in quantum field theory, where the exact result of the computation is a recursively "dressed" version of the leading order approximation to the result.
  19. Can we find the simplest undecidable problem in number theory? I don't know.
  20. What would prove the Principle of Computational Equivalence? I don't know.
  21. What will happen if kids learn cellular automata before algebra? A lot more copies of Mathematica will be sold. There will be a move away from the "classical" approach using toy models that are usually selected for their analytic tractability (i.e. computational reducibility), and a move towards simple models with useful emergent phenomena (a.k.a. "effective theories") that are computationally irreducible.
  22. What will be the first major industry created by mining the computational universe? Human-level (and better) artificial intelligence. Simulated evolution will be used to search the space of simple algorithms for ones that lead to emergent properties that have "useful" properties. Very large scale brain-like processors will probably be developed in this way.
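
To make a few of these answers more concrete, here are some toy sketches in Python; everything in them (encodings, rule sets, parameters) is my own illustrative assumption rather than anything proposed at the conference. First, question 1: a minimal simulated-evolution loop in which the fitness function plays the role of the meta-algorithm that observes each candidate and scores whether it is doing something useful.

    import random

    # Toy "algorithm mining": evolve bitstrings towards a target.
    # The fitness function is the meta-algorithm that observes each
    # candidate and scores how "useful" its behaviour is. The target,
    # population size and mutation rate are arbitrary illustrative choices.

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

    def fitness(candidate):
        # Meta-algorithm: count positions where the candidate matches the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Flip each bit independently with probability `rate`.
        return [1 - b if random.random() < rate else b for b in candidate]

    def evolve(pop_size=20, generations=100):
        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for gen in range(generations):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                return gen, population[0]  # a "useful" candidate has been found
            # Survival of the fittest: the top half reproduces with mutation.
            survivors = population[:pop_size // 2]
            population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        return generations, population[0]

    print(evolve())

Guiding too closely would correspond to shrinking the mutation rate or the population until the search can do nothing but refine what it already has.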
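
Question 4, done in the same spirit: compute a sequence numerically, spot a pattern, check it over a much larger range, and only then attempt a conventional proof (here, induction). The sequence, partial sums of the odd numbers, is deliberately trivial.

    # Numerical search first, conventional proof afterwards.

    def partial_sums_of_odds(n_max):
        total, sums = 0, []
        for n in range(1, n_max + 1):
            total += 2 * n - 1  # the n-th odd number
            sums.append(total)
        return sums

    print(partial_sums_of_odds(10))  # [1, 4, 9, 16, 25, ...]: looks like n^2

    # Check the conjectured pattern sum_{k=1}^{n} (2k-1) = n^2 over a larger
    # range before trying to prove it by induction.
    assert all(s == (n + 1) ** 2 for n, s in enumerate(partial_sums_of_odds(1000)))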
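
Question 6: a toy swarm in which each time step invokes one elementary update operation per bot (move your heading towards the average of your ring neighbours'), and alignment of the whole swarm emerges over many steps. The ring topology and the averaging rule are assumptions chosen purely for brevity, and headings are treated as plain real numbers (ignoring the 0/360 wrap-around).

    import random

    def step(headings):
        # Elementary update operation: average yourself with your two ring
        # neighbours. Collective alignment is the emergent phenomenon.
        n = len(headings)
        return [(headings[(i - 1) % n] + headings[i] + headings[(i + 1) % n]) / 3.0
                for i in range(n)]

    swarm = [random.uniform(0.0, 360.0) for _ in range(50)]
    for t in range(500):
        swarm = step(swarm)

    print(max(swarm) - min(swarm))  # the spread shrinks towards zero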
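
Question 7: a minimal multiway system. Every applicable rewrite is applied at every position and all resulting branches are kept alive, which is the parallel branching behaviour referred to above. The rule set {"A" -> "AB", "B" -> "A"} is an arbitrary illustrative choice.

    RULES = {"A": "AB", "B": "A"}

    def successors(state):
        # One branch for each applicable rule at each position: this is what
        # makes the system "multiway" rather than a single deterministic history.
        out = set()
        for i, ch in enumerate(state):
            if ch in RULES:
                out.add(state[:i] + RULES[ch] + state[i + 1:])
        return out

    states = {"A"}
    for step in range(4):
        states = set().union(*(successors(s) for s in states))
        print(step, sorted(states))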
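
Question 12: the kind of elementary signal processing that communicating molecules might plausibly implement directly, namely local smoothing and discrete differentiation. The noisy ramp is a made-up input.

    import random

    # A noisy linear ramp with true slope 0.1.
    signal = [0.1 * t + random.gauss(0.0, 0.2) for t in range(100)]

    def smooth(xs):
        # Three-point moving average (interior points only): each "molecule"
        # only needs to talk to its immediate neighbours.
        return [(xs[i - 1] + xs[i] + xs[i + 1]) / 3.0 for i in range(1, len(xs) - 1)]

    def differentiate(xs):
        # First difference as a crude derivative.
        return [b - a for a, b in zip(xs, xs[1:])]

    slope = differentiate(smooth(signal))
    print(sum(slope) / len(slope))  # should come out close to 0.1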
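
Finally, question 14 as a tiny inverse problem: given the output of an unknown elementary cellular automaton, brute-force all 256 rules to find those that reproduce it. This only works because the rule space is minuscule; for any realistically large space of candidate rules the same approach is computationally hopeless, which is the point made above.

    def ca_step(cells, rule):
        # Standard Wolfram numbering: the bit of `rule` indexed by the
        # 3-cell neighbourhood (left, centre, right) gives the new cell value.
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    def run(rule, init, steps):
        rows, cells = [list(init)], list(init)
        for _ in range(steps):
            cells = ca_step(cells, rule)
            rows.append(cells)
        return rows

    init = [0] * 20 + [1] + [0] * 20
    observed = run(30, init, 10)  # pretend rule 30 is unknown to us

    # Reverse-engineering by exhaustive search over all 256 elementary rules.
    candidates = [r for r in range(256) if run(r, init, 10) == observed]
    print(candidates)  # [30], plus any rules indistinguishable on this data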

5 Comments:

At 22 July 2006 at 02:40, Blogger Declan Malone said...

A lot of provocative questions. I'd guess that the answers to a lot of them, like What kinds of artificial physics can support quantum mechanics? and Is there a core computational architecture in biological cells? and Are there business structures founded on computation universality?, will involve computational geometry.

The question Is there an absolute measure of elegance for programming languages? would also seem to be a "gimme" for computational geometry.

I think a combination of this (doesn't N=7 remind you of Feynman diagrams? I think it should), this and this are good starting points.

Autopoiesis (the third link) as a concept is important here since it provides a means of characterising the linkage you have between the physical system and the model (if we're talking about autopoiesis in the sciences, a la Kuhn) or organism that lives off it. I take great offense at Ray Kurzweil's theories that runaway growth is possible. Things need to be grounded, and TANSTAAFL.

Invoking autopoiesis also suggests that we will never build the sorts of systems that are possible just because they are possible. Mathematicians can build calculuses effectively for free, but system building actually costs us. So it is with any real-life system worth talking about, whether that's a social system, a chemical one or a biological one. Or a synthetic intelligence, for that matter. It should always be possible for "living" things to find or cultivate a higher plateau of existence, one that is more insulated from random events and the natural tendency towards entropy, but getting there takes time and energy.

So things only get built because they have some ground basis. The first link above is about basing things on threes. It's also about waves, nodal points and driving spirals. The second link is about grids, geometries and driving spirals. The third link says that you can sure try to build stuff on top of other stuff, but you need to have a driving "spiral". Spot a thread running through here?

To concretise, let me present an example of what I'm getting at here. Take the environment as the realm of discourse. Let's simplify the map we as humans operate on as being divided into personal spaces, societal spaces, and the rest of the natural environment. We want systems that enrich the first two while not impoverishing the third. Or maybe we want to optimise the equations according to more specific goals.

The problem with that is that there are interplays between the three spaces on the map. I'll make another simple addition by saying that there are primarily three channels through which these interrelations play themselves out: I'll label them "food", "culture" and "waste".

In this simplification, food and culture are the sustaining spirals for individuals and society, but we have to deal with waste as well. Waste can come in many forms. One form that waste takes is as a by-product of our existence as eating machines. Another is the space that we have to devote to producing food and culture and channeling them around between individuals and within society. There's also the waste of having unrealised potential!

With those broad strokes, it's not hard to imagine various calculuses that would be able to model the various aspects relating to just those six things, and those calculuses could be parlayed into systems. Different systems could be compared against each other to find the best ones, the ones that:

* maximise personal freedom
* maximise the flowering of culture
* minimise the number of people without enough food or culture
* minimise the amount of waste polluting the environment
* minimise the amount of waste polluting society
* achieve the best use of land
* maximise the health and diversity of the environment
* etc.

Coming back full circle in this example, you can restate the contingencies, problems and goals that we want to minimise/maximise as being a problem of finding the best map. The best map will probably be something like an inverted pyramid. There will be at least one triangle at the apex, but probably many more. These provide grounding abstractions (such as the personal:societal:environmental or food:culture:waste ones). But you can't actually build systems on these grounds unless those grounds have some "substance". Then you need to actually have energy to drive the system (the spiral). Thirdly, you need to actually do experiments, build systems, and cultivate the sciences that enable all this plateau hopping... to borrow from computer science, you use the Spiral model... continually build new prototypes and be willing to throw out everything if new, better fundaments open up for you, but always do risk assessments before you leap.

There is one other mathematical point that might serve to bring all this into focus. I said that we should be aiming to produce the best maps, and (effectively) that the maps be practical and useful. The maps become self-proving, but there isn't an explicit proof of this (your undecidability comment). Implicit Henkin form sentences do provide the scaffolding for non-constructive, yet practical and effective proofs. That would be the perfect form for the perfect map...

 
At 22 July 2006 at 03:20, Blogger Declan Malone said...

More info on the first two links.

I talked (and have talked before) about triangles and using them as bases upon which to develop theories. My first link is an attempt to show how to develop a ternary calculus as a basis for modelling processes and logical disjunction.

The primary function of a ternary calculus will be to simultaneously spread data about implication throughout the net and to reconfigure the net so that if new evidence points to a new "paradigm", the net will be able to figure out what the new "grounding point" should be. Proving either the fixed point theorem or the periodicity theorem seems to be the key to detecting when to quantise the network into a new stable configuration.

The primary purpose of the second link is to communicate variables, maps and results, but it's also intimately bound to the triangular space. Since triangles are affine, they don't possess any space/time/energy aspects in and of themselves (triangles can provide a basis for configuration spaces though). The hex map basically lets you postulate arbitrary geometries that do have measurable qualities... then your full system will have a ternary propagation pattern, a hex pattern for space/time/energy-like mappings, and implicit Henkin form for theorem generation and self-proof.

I don't know how well I've explained these things. It might give you some idea of what I'm on about when I talk about 343 and get all excited about powers of 7. My starting point has been that 7**3 hexagons arranged like in the second link is sufficiently complex to allow autopoietic architectures: a core computational architecture, if you will.

 
At 22 July 2006 at 10:30, Blogger Stephen Luttrell said...

I understand the separate pieces of what you are saying, but the conjunction of all of them is not yet persuasive to me. There are so many other ways of joining the parts together.

Let me comment on some of the more tangible things that you have said:

1. "Autopoiesis" is a word that is new to me, and it is a very useful word indeed. Thanks for bringing that one to my attention.

2. Ray Kurzweil's "singularity" is mainly hype. There are technologies (e.g. nanotechnology) that will cause profound changes to our lifestyle, but not on the short time scale that RK suggests.

3. I agree that it is not enough to suggest the structure of an object that might be built with a new technology. You also have to supply a recipe for building it from raw materials. DNA (in the generalised sense of a molecular program) is a really neat way of defining recipes (or programs), but we have a long way to go before we can write non-trivial DNA programs. I think that DNA-controlled fabrication will be really important in the future.

4. Finding good DNA-programs seems to be equivalent to finding the "best map", to use your parlance. This is a "hard" (in the technical sense) problem, because of all the complicated things like 3-dimensional folding of molecules that could happen. I presume that the way to write such programs is to run some sort of simulated evolution. Alternatively, we might decide to simplify the problem by restricting the search space to one that we could compute "by hand", but this would automatically limit the variety of DNA-programs that could be written.

5. Now I understand where your powers of 7 come from, and that the base 7 number 343 is the coordinate of a hexagon in the n=3 Spiral Honeycomb Mosaic. What is special about this particular hexagon?

Overall, with these New Kind of Science discussions I feel much happier if concrete examples can be given to illustrate the rather slippery concepts. The danger is that people can talk past each other because they are not using commonly-defined terminology. I am starting to describe my own contribution to NKS on my ACEnetica blog, where I am taking a safe route that starts deep inside conventional territory (i.e. the theory of vector quantisers), and then gradually perturbing the theoretical framework to move towards NKS territory.

 
At 24 July 2006 at 01:48, Blogger Declan Malone said...

This comment has been removed by a blog administrator.

 
At 24 July 2006 at 02:10, Blogger Declan Malone said...

I suppose I can present an anthropic argument. You can "turn yourself into" a geometric/logical computer (spotting periodicity and fixed points) if you want to, such as when decoding KB's lyrics. Why shouldn't that process be codifiable?

Allowing that the process might be codifiable, let me suggest 4 steps:

0/1: any attempt to introduce a spurious argument will trigger bullshit detectors.

As a metaphysical aside, we could consider 0 as void, 1 as form, or some other such scheme. Could we also treat 1 as both 010, being 1 against a backdrop of 0, and 101, being multiplicity?

3: against that backdrop, if we can prove a triangulation, then we can take it as a valid basis for further theorising. If all we have is triangulation, then we probably need ternary mathematics. This space is inherently unstable, though, since even if we've proved a basis, new evidence can come to light that will bring that whole edifice down (cf. Donald Knuth's remark: "Beware of bugs in the above code; I have only proved it correct, not tried it")

7: from one (purported, counterfactual) triangulation, we can explode/explore the space of possibilities. The explorations may be purely logical (which, in many cases can be referred back to the previous level), or geometric/combinatorial (in which case, 343 provides a very good basis).

H: For any of this to make sense, there must be an implied Henkin property. To rehash, a Henkin form statement is an implicit proof that is provably true when executed in the correct environment. (there are other structural constraints on it, which I won't go into)

I'm saying that there is some point in throwing out random triangles, then parlaying them into maps. From your triangles and maps you might soon discover new maps and triangles that are maximally consistent with your original "ground" bases. At some point in the exploration, you will have discovered enough of a basis to be able to publish your triangles and maps and, most importantly, provide proofs. Thus one reads: "it's provable that there is a message embedded in KB's lyrics."

I don't actually have any proof beyond the above.

I do have more arguments, such as why there should be a logic behind the numbers 0/1, 3 and 7 (hint: 121, 212, 232, 323, etc), but I'll hold off for now.

Did that make anything any clearer?

 
