Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Tuesday, December 26, 2006

Possible worlds

I have just finished Richard Dawkins' book The God Delusion, the last paragraph of which reads as follows:

How should we interpret Haldane's 'queerer than we can suppose'? Queerer than can, in principle, be supposed? Or just queerer than we can suppose, given the limitation of our brains' evolutionary apprenticeship in Middle World? Could we, by training and practice, emancipate ourselves from Middle World, tear off our black burka, and achieve some sort of intuitive - as well as just mathematical - understanding of the very small, the very large, and the very fast? I genuinely don't know the answer, but I am thrilled to be alive at a time when humanity is pushing against the limits of understanding. Even better, we may eventually discover that there are no limits.

Some of the above terminology needs to be explained by citing its earlier use in the last chapter of the book:
  1. ... the universe is not only queerer than we can suppose ...
  2. ... our brains ... evolved to help us survive in a world - I shall use the name Middle World - where the objects that mattered to our survival were neither very large nor very small ...
  3. ... I want to use the narrow slit in the veil [i.e. burka] as a symbol of something else. Our eyes see the world through a narrow slit in the electromagnetic spectrum ... The metaphor of the narrow window of light ... serves us in other areas of science ...

I had one of those eerie feelings of déjà vu when I read the paragraph quoted above. Its message is spookily similar to what I was saying in my earlier posting Demystifying hard subjects, which was itself based on thoughts that I had long before I read The God Delusion.

Anyway, my message to Richard Dawkins is that we can "achieve some sort of intuitive - as well as just mathematical - understanding of the very small, the very large, and the very fast". The trick is to bang the right sort of mental rocks together.

Nikola Tesla portrait

A while ago I received a gift of the pencil portrait below (drawn by Malcolm Victory):


This is a portrait of Nikola Tesla, who was "the man who invented the 20th century". He looks as if he is in one of his more hectoring moods. Naturally, I have mixed feelings about receiving this gift!

Monday, December 25, 2006

Demystifying hard subjects

Some subjects are called "hard" because you need years of training to understand and practise them. Examples of hard subjects are mathematics, physics, chemistry, etc., and examples of soft subjects are sociology, media studies, etc. Essentially, the difference between these two categories is that hard subjects lie far from our everyday common sense, so they are not intuitively accessible, whereas soft subjects lie close to our everyday common sense, so they are intuitively accessible.

It seems to me that the "hardness" of a subject is strongly correlated with the following two properties to do with reasoning about and sensing the world:

  1. How mathematical/logical its formulation is; this property is to do with reasoning about the world. Anyone who does not have the appropriate mathematical/logical training is automatically excluded from the subject, which thus falls into the "hard" category.
  2. How accessible it is to observation and verification by our senses; this property is to do with sensing the world. Any subject that relates to phenomena that are accessible only by using specially designed sensors falls into the "hard" category.

Either way, our common sense is limited by what our bodies can do unaided, whether it is sensing (e.g. using our eyes) or reasoning with our brains. To develop an enhanced common sense we need to enhance our sensors (e.g. use a microscope) and/or enhance our ability to reason (e.g. attend further education classes).

Here I want to focus on the improvements to common sense that can be gained by enhancing our ability to reason, improvements which are acquired (often at great expense in time and money) through further education. Of course, I have a vested interest in this aspect because my own further education didn't finish until I was around 27 years old (or 1/3 of a typical lifetime), and it was heavily mathematical towards the end. However, I have a very ambivalent attitude to the type of enhancements to my common sense that I gained as a result of my lengthy education. I can do fancy maths to calculate results that are both correct and useful, and which can thus serve as enhancements to my prior common sense. But I can't do these calculations in real-time: I have to do each calculation laboriously off-line rather than on-line, so they are rather slow enhancements to my common sense. It would be much better if these enhancements could be "compiled down" to become fast-response (i.e. on-line rather than off-line) enhancements, which could then more justifiably (because they would now work in real-time) be called enhancements to my common sense reasoning ability.

All of this "compiling down" amounts to finding ways of short-circuiting mathematical calculations so that results can be seen directly without having to go through all the intermediate steps of the mathematical derivation. This possibility is called computational reducibility, and it is not guaranteed to even be possible, e.g. many simulations in NKS are strictly computationally irreducible. Where a calculation is computationally reducible it is because there is a higher level principle at work (whose inner details are implemented by the individual steps of the mathematical derivation) which allows us to take large strides forwards through a derivation rather than achieve the same thing by a large number of time consuming little steps. Generally speaking, the higher the level of the principles the greater the speed-up that can be achieved. Note (for physicists) that another way of viewing these high-level principles is that they bear the same relationship to the low-level mathematics as effective theories bear to underlying theories.

Here is a simple example. When I was about 10 years old I realised that the formula p=ρ.g.h for pressure p at a depth h in a medium of density ρ could be understood directly and intuitively; in effect I never memorised that formula (even when I was 10 years old) because I always rederive it in real-time every time that I need it. This early insight was a complete revelation to me, and it has influenced the way that I do science and mathematics ever since. I carry very little memorised information in my head, but I do carry a small number of principles that I use to quickly rederive whatever I need. Naturally, I have now gone far beyond p=ρ.g.h but the same general approach still applies. Very conveniently, this approach (i.e. rederive rather than memorise) served me very well in various school/university examinations!

So, how are computational reducibility, higher-level principles, and enhanced common sense related? How is it possible to see results directly and intuitively? Here I will have to assume that what works for me also works for other people, so I will focus on the use of visual intuition, in which low-level mathematical steps become elementary operations for manipulating a visualised world, and high-level principles then emerge as composite operations on this visualisation. It should be clear that the amalgamation of many low-level operations into fewer high-level operations amounts to a form of visual intuition about the behaviour of this visualised world.

Let's discuss a concrete example to illustrate the strengths and weaknesses of visualisation. How about the simple p=ρ.g.h example given above? The medium can be visualised as a cloud of small blobs (atoms), where the average number of blobs per unit volume (number density) is n, the mass of each blob is m, so the average mass of blobs per unit volume (density) is ρ=n.m. Each unit area of whatever this cloud rests upon must supply an (upward) force that is equal to the (downward) weight of the blobs in the cloud resting on the unit area, otherwise the upward and downward forces would not be in balance. The thickness of the cloud (depth) is h, so the average mass of blobs per unit area is ρ.h. Finally, the (downward) acceleration due to gravity g causes this mass per unit area ρ.h to have a (downward) weight g.ρ.h. Gathering it all together, the (downward) weight per unit area is ρ.g.h, which must be equal to the (upward) force per unit area, which is one and the same thing as the pressure p at the base of the cloud of blobs, so p=ρ.g.h.

Whew! Written out like that the derivation of p=ρ.g.h is incredibly verbose. However, I am not suggesting that you actually write things out in this way, but that you create a visualisation whose properties are described by the various sentences in the above paragraph. With practice this process of visualisation becomes semi-automatic, and the whole of the above paragraph can be quickly "simulated" in your head. This simulation then becomes a virtual derivation (written in visual symbols) of the formula p=ρ.g.h, which can be converted into a real derivation (written in standard algebraic notation) as necessary. This example illustrates the relationship between the visual and the algebraic approaches to doing mathematics.
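
For readers who like to see the bookkeeping done explicitly, here is a minimal numerical sketch of the same argument in Python (the density, depth and layer count are arbitrary values chosen purely for illustration). The layer-by-layer sum corresponds to the verbose visual derivation above, and the closed form ρ.g.h is the "compiled down" result:

  rho = 1000.0       # density of the medium (kg/m^3) - arbitrary, roughly water
  g = 9.81           # acceleration due to gravity (m/s^2)
  h = 10.0           # depth (m) - arbitrary
  n_layers = 100000  # number of thin layers in the visualised column

  dh = h / n_layers
  # layer by layer: each thin layer adds a weight per unit area of rho*g*dh
  p_stepwise = sum(rho * g * dh for _ in range(n_layers))
  # compiled down: the closed-form result
  p_formula = rho * g * h

  print(p_stepwise, p_formula)  # both approximately 98100 (pascals)
  assert abs(p_stepwise - p_formula) < 1e-6 * p_formula

The agreement of the two numbers is simply the statement that stacking up the weights of all the thin layers amounts to multiplying ρ.g by the total depth h.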

Of course, p=ρ.g.h is a very simple example where the visual representation is so easy to create from the algebraic representation that you might assume that you could always start from the algebra and derive the visualisation from it, rather than the other way around. However, almost everything else is more complicated than p=ρ.g.h, and although the algebraic approach always works correctly, it rapidly becomes so complicated that its results can no longer be obtained directly and intuitively, which is where the alternative visual approach allows you to take intuitive visual shortcuts to quickly derive qualitative results.

For a more substantial example of this visualisation type of approach, have a look at my Spooky action at a distance? posting, where I discuss the so-called EPR paradox in quantum mechanics. In fact there is no paradox there, only confused thinking caused by a failure to use the mathematics of QM consistently, i.e. assuming that "observers" somehow exist outside the QM system that they are "observing". I was taught QM using this inconsistent approach, which was fine for passing physics exams, but it started me off with a hopelessly confused point of view when I eventually had time to sit down and really think about QM. It took me a while, and several false starts, before I finally realised that "observers" have no special status, and should therefore be lumped in with the system that they are observing, which led inevitably to my rediscovering what I later learnt was the Everett interpretation of QM (see The Everett FAQ), which I wrote about in State vector collapse? Anyway, I now have a visual language (e.g. Spooky action at a distance?) that enhances my everyday common sense so that I can intuitively and directly obtain qualitative QM results. Why can't this type of approach be taught in QM courses?

Of course, I can't prove that it will always be possible to find intuitive visual representations of complicated mathematics, but I certainly feel that I understand the mathematics much better if I can find such a representation, and use it to intuitively and directly derive qualitative results. I have heard it claimed that some fields of work lie outside the realm of direct intuitive comprehension (e.g. when a simple result seems to magically pop out of an enormously complicated piece of mathematics), but I hope that this claim reflects a lack of imagination rather than a fundamental limitation on what we can visualise using appropriate techniques.

The visual approach described above can be used to enhance your everyday common sense, so that it becomes able to reason intuitively about what are normally viewed as "hard" subjects. In effect, "hard" subjects become "soft" subjects when addressed using a common sense that has been enhanced by visualisation techniques.

Sunday, December 17, 2006

We don't really know what string theory is

I learn from Not Even Wrong that Steven Weinberg has said:

The critics are right. We have no single prediction of string theory that is verified by observation. Even worse, we don’t know how to use string theory to make predictions. Even worse than that, we don’t really know what string theory is.

I have deep respect for Steven Weinberg's views on physics, not merely because he is a Nobel laureate but because he is deeply thoughtful about what he says, so his damning comment on string theory carries a lot of weight with me.

The situation with string theory always reminds me of the brilliant quotation by Richard Feynman:

Physics is to math what sex is to masturbation.

It seems that I had a narrow escape when I left physics for even sexier research back in the early 1980s, because I suspect that had I stayed I would have been enticed into working on string theory, and would thus have unwittingly tossed away my entire research career.

Friday, December 15, 2006

Four preconditions for civilisation

Apropos of my previous posting It pays to keep a little craziness, here are the four intellectual processes underlying human achievement, as listed on the back cover of the book Pioneering Research: A Risk Worth Taking by Donald W Braben, which I discussed a few months ago here.

  1. Ask questions not on the agenda
  2. Explore ideas wherever they lead
  3. Pursue goals because they are important
  4. Create options not yet perceived

Here are the sorts of questions (numbered as above) that "bureaucracy" asks, and which strangle these processes:

  1. Does your proposal directly address the issues outlined in the invitation for tender?
  2. What is the breakdown of your proposed project into work packages and deliverables?
  3. What's the business case for your proposed line of research?
  4. Does your proposal directly address the issues outlined in the invitation for tender?

I am regularly asked all of these questions, and, yes, they do have the predicted strangling effect. All of this is to satisfy those people who like to reduce everything to a bunch of spreadsheet cells, so that they can control things. It seems that the admin-geeks (see here for more details) now rule the world.

What could be done to avoid these control freaks? Hmm ... tricky problem.

Saturday, December 09, 2006

It pays to keep a little craziness

This week's New Scientist has an editorial entitled It pays to keep a little craziness, which makes a case for supporting a small number of maverick scientists. The entire editorial is reproduced here:

Time was when all scientists were outsiders. Self-funded or backed by a rich benefactor, they pursued their often wild ideas in home-built labs with no one to answer to but themselves. From Nicolaus Copernicus to Charles Darwin, they were so successful that it's hard to imagine what modern science would be like without them.

Their isolated, largely unaccountable ways now seem the antithesis of modern science, with consensus and peer review at its very heart. Yet the "outsider" tradition persists. Think of Alfred Wegener, the father of plate tectonics and, more controversially, of Gaia theorist James Lovelock. Both pursued their theories in the face of strong opposition from their peers.

Such mavericks can be crucial to progress, but are they a dying breed? Beyond young disciplines such as neurobiology, where the territory is largely uncharted, or esoteric areas like quantum theory, where it's hard to prove anything, the consensual nature of science can make it hard for lone voices to thrive.

This may be inevitable. Peer review is inherently conservative, and increasingly only proposals that fit the research framework get funding. The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting geniuses from crackpots.

The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost. We shouldn't allow it to freeze out individuals who are courageous, brilliant or foolhardy enough to go it alone.

Why does this issue grab my attention? It's because it strikes very close to "home" for me: throughout my research career I have marched to the beat of a different drum. Except for very early on (e.g. during my PhD years) I have never been attracted to the idea of doing "fashionable" research, i.e. of a sort that would gain the approval of my peers, lead to lots of publications in high-profile journals, and lots of recognition generally. Surely, this list of potential rewards is enough to make the decision about what sort of research to do a no-brainer? For most researchers this is indeed the case, but for a small subset of researchers the mere idea of running with the herd is deeply repulsive.

My rate of publishing in respectable journals has declined as my research work has advanced, because I have gradually moved away from the accepted mainstream of research in information processing. I have a magnificent collection of rejection letters and referees' reports, accumulated over the years, which clearly demonstrates how conservative and cliquey the peer review process can be. Oh well, I will have to find other publication channels to disseminate my ideas, and with some imagination it isn't too hard to find unconventional ways to publish one's work. Wouldn't it be fun at some point in the future for mainstream researchers to ascend a hitherto unscaled peak, only to find an old flag with the monogram "SPL" already planted there?
Is it just a dream? Who knows? It all depends on whether I am courageous, brilliant or foolhardy (to quote from the above editorial).

Update: In Richard Dawkins' book The God Delusion I found this very relevant passage (on page 196 of my 2006 edition):

As with genes in the gene pool, the memes that prevail will be the ones that are good at getting themselves copied. This may be because they have direct appeal ... or it may be because they flourish in the presence of other memes that have already become numerous in the meme pool.

So, to get your paper published it has to either stand all by itself as a self-contained body of work, or it has to dovetail neatly with the papers that have already been published. So what happens to papers that depend on other papers that have not been published? Memetics is a harsh midwife, methinks.