Formalism and Coalition

Aretae insists that all government is coalitional.

Maybe so, but that doesn’t mean it’s a good thing to widen the coalition further, and spread power about randomly.

The point of formalism is that power should be aligned with some form of responsibility, so that the powerful do not benefit from destructive behaviour, and that attempting to obtain more power should be illegitimate, so that energies are not directed into destructive competition for power.

Formalists tend to believe that stable, effective and responsible government would follow a largely libertarian policy, choosing to limit government action to maintaining order and protecting private property, and taking its own loot in the form of predictably and efficiently levied taxation rather than by making arbitrary demands of random subjects. Such a policy would maximise the long-term revenue stream from the state.

Given a policy which sets limits on government, it becomes reasonably straightforward to deal with those centres of power which are not sovereign but which cannot be eliminated. They get subsidies, but not power over policy. Given that the sovereign chooses, for reasons of efficiency, to take taxes and buy food with them rather than to take food directly from wherever he fancies, there is no problem in giving pensions or subsidies to those whose support is needed.

The key formalist idea is that if those with informal power go beyond what they are entitled to but seek to influence general government policy, then they are doing something anti-social and immoral. All those who have an interest in the continuation of stable, effective and responsible government will see such an attempt as a threat. Fnargl does not have a ring, and I do not much fancy engineering weapon locks implementing a bitcoin-like voting protocol, so a combination of popular will and, in due course, force of tradition is all we have to fill the gap. In as much as there is a general interest in anything, there is a general interest in good government, and I do not think it is all that far-fetched to see sovereign authority as something that people would reflexively stand to defend, were it not that they have been taught for 250 years to do the opposite.

What’s striking is that our current political morality holds the opposite view: that attempting to influence policy is everyone’s right, but to receive direct payoffs is unjust. The powerful are therefore rewarded indirectly via policies with enormously distorting effects on the economy or on the administration of government, whose general costs greatly outweigh the gains obtained by the beneficiaries. Further, it is easier for them to seek to protect and increase their power, than to seek reward for giving it up, even if the general interest would benefit from the latter.

An example to illustrate this: if a person has necessary power, such as a military officer, then he should keep his power and be rewarded for it. If alternatively his arm of the military is no longer needed, but he still has power because he could potentially use the arm against the sovereign, then it is preferable to pay him extra to cooperate in disbanding the arm, rather than to maintain it just to keep him loyal. The same logic might apply in the organisation of key industries, or sections of the bureaucracy.

It would not necessarily be easy to resolve these things perfectly, but it would be made easier by recognising that concentrating power over general policy — sovereignty — is a good thing, as far as it is possible, and that the sovereign who has control over policy has the right to use it in whichever way he sees fit: to hand out cash presents as much as to award monopolies.

The exercise of democracy makes things very much worse, by adding to the number of those with necessary power anybody who can sway a bloc of voters, and enabling them to make demands for more inefficient indirect sharing of the loot.

A Case for Ispettore Zen

I’ve probably mentioned before that I read a lot of crime novels. My favourites of the modern era are probably the Aurelio Zen series by Michael Dibdin. Zen, a detective of the Polizia di Stato, solves his cases with a blend of staggering luck and an involuntary bloody-mindedness which distracts him from his more important tasks of attempting to understand and navigate the women in his life and the political machinations of the Italian bureaucracy.

I have no idea how realistic Dibdin’s grotesque presentation of the corruption and hidden motivations of Italian life really is, but I have not been able to see the Costa Concordia story in any other context than as an Aurelio Zen mystery. The captain who accidentally fell into a lifeboat and then argued with the coastguard on the phone, the mysterious blonde on the bridge, the cruise line that was blaming their own captain for everything even while the passengers were still being rescued:  all we can be sure of is that nothing is what it seems to be, and nobody is telling the truth. Only Zen can actually get to the truth of it, and even if he does, we probably won’t know, because the official story might be completely different…

Monarchism and Stability in the Middle East / North Africa

Tyler Cowen at Marginal Revolution posts a link to a paper by Victor Menaldo, The Middle East and North Africa’s Resilient Monarchs.

It’s well worth a read; it’s not long, though frankly I’ll need to spend more time with it than I have this evening.

First and foremost, it’s a challenge to the Bueno de Mesquita theory that all that matters is the size of the ruling coalition and the selectorate — a theory that I found valuable but simplistic. Menaldo addresses political culture, observing that the political culture serves to distinguish regime insiders from outsiders. He finds that monarchical governments have less conflict and better economic development.

Particularly interesting to me is the account of elites within the monarchical society. These kingdoms are not the absolute autocracies of my “degenerate formalism”, but actually existing monarchies, in which the extended royal family and other important groups hold significant power. Menaldo’s argument is that because the political culture defines who shares in power, the struggles between in-groups are limited. Unlike a faction in a revolutionary republic, you can lose a power struggle and still be an insider with some power.

In my view, this is also the strength of our somewhat corrupted democracies: if you’re an insider but you’re losing, it’s still not worth being extremely destructive. Better to admit defeat and preserve the system that keeps you an insider even as a loser.

Because of that, this paper doesn’t really make my argument: it shows that monarchy is better than a revolutionary republic, but not that it is better than a western democracy. Still, it’s useful that it’s showing some of the strengths that monarchy has.

It’s not without weaknesses, either. As with other work of this kind, I don’t really take the mathematics seriously. Checking that a statistical analysis bears out the impression you get from drawing a couple of graphs and watching CNN is not what I call verifying a testable hypothesis. And a relatively small data set of somewhat subjective categorisations of events seems inadequate for the amount of analysis being done on it.

Also, the paper, as far as I have seen, does not explore the possibility that foreign influence is the explanation for the difference in violence. Bahrain faced nothing like the outside pressure that Libya or Syria did. I don’t think foreign action is affected directly by whether the regime is monarchical or republican, but there might be an indirect link with foreign policy stance.

Diane Abbott

@bimadew White people love playing “divide & rule” We should not play their game #tacticasoldascolonialism

Offensive? Of course not. How can that possibly be offensive? Just because it implies that it is possible to generalise about what “white people” like? You mean like this? What rubbish.

Well, is it wrong, then? I think so, but so what? She’s a Labour MP — saying things that are wrong is her job. Further, it’s worth arguing about.

Speaking on behalf of white people, we do not love playing “divide & rule”. It’s strictly a last resort — keeping track of different groups of black people gives us a headache. Which ones are the Tutsis again? We much prefer to have “community leaders” deal with all that stuff for us¹.

I would not have been able to say that had Diane Abbott not raised the issue. She was right to raise the issue, despite being wrong: like I said, that’s her job. She should not have been shut up or made to apologise.

The reflex to hang her out to dry is understandable: we are frustrated at not being allowed to say things about race, and when one of “them” does it, we take revenge. But I think that is a bad mistake — ironically, this is one time where we have to risk that headache and play “divide & rule”. Abbott is not one of “them” that want us to shut up about race. Rod Liddle says that she has used the same tactics in the past, but when he talked about black crime, she at least disagreed with him on the merits. Probably wrongly, mind, but, Labour MP, etc. Yes, she used the R-word as well, but if everyone complaining had also engaged the argument like her, they wouldn’t have been able to shout it down. It is the likes of Alex Massie and Bonnie Greer weighing in that make it near impossible to have such a discussion.

Non-white politicians are generally willing to talk about race. (Sometimes at enormous length). Being offended is Stuff White People Like. And that’s not something I’m going to apologise for saying.

¹ If it turns out that the “community leaders” are all from one group, and are using the power we give them to exterminate another, we would rather not know about it, thank you very much.

AI, Human Capital, Betterness

Let me just restate the thought experiment I embarked on this week. I am hypothesising that:

  • “Human-like” artificial intelligence is bounded in capability 
  • The bound is close to the level of current human intelligence  
  • Feedback is necessary to achieving anything useful with human-like intelligence 
  • Allowing human-like intelligence to act on a system always carries risk to that system

Now remember, when I set out I did admit that AI wasn’t a subject I was up to date on or paid much attention to.

On the other hand, I did mention Robin Hanson in my last post. The thing is, I don’t actually read Hanson regularly: I am aware of his attention to systematic errors in human thinking; I quite often read discussions that refer to his articles on the subject, and sometimes follow links and read them. But I was quite unaware of the amount he has written over the last three years on the subject of AI, specifically “whole brain emulations” or Ems.

More importantly, I did actually read, but had forgotten, “The Betterness Explosion”, a piece of Hanson’s, which is very much in line with my thinking here, as it emphasises that we don’t really know what it means to suggest we should achieve super-human intelligence. I now recall agreeing with this at the time, and although I had forgotten it I suspect it at the very least encouraged my gut-level scepticism towards superhuman AI and the singularity.

In the main, Hanson’s writing on Ems seems to avoid the questions of motivation and integration that I emphasised in part 2. Because the Ems are actual duplicates of human minds, there is no assumption that they will be tools under our control; from the beginning they will be people with whom we will need to negotiate — there is discussion of the viability and morality of their market wages being pushed down to subsistence level.

There is an interesting piece “Ems Freshly Trained” which looks at the duplication question, which might well be a way round the integration issue (as I wrote in part 1, “it might be as hard to produce and identify an artificial genius as a natural one, but then perhaps we could duplicate it”, and the same might go for an AI which is well-integrated into a particular role).

There is also discussion of cities which consist mainly of computer hardware hosting brains. I have my doubts about that: because of the “feedback” assumption at the top, I don’t think any purpose can be served by intelligences that are entirely isolated from the physical world. Not that they have to be directly acting on the physical world — I do precious little of that myself — but they have to be part of a real-world system and receive feedback from that system. That doesn’t rule out billion-mind data centre cities, but the obstacles to integrating that many minds into a system are severe. As per part 2, I do not think the rate of growth of our systems is limited by the availability of intelligences to integrate into them, since there are so many going spare.

Apart from the Hanson posts, I should also have referred to a post I had read by Half Sigma, on Human Capital. I think that post, and the older one linked from it, make the point well that the most valuable (and most remunerated) humans are those who have been successfully (and expensively) integrated into important systems.

Relevance of AI

I felt a bit bad writing the last post on artificial intelligence: it’s outside my usual area of writing, and as I’d just admitted, there are a number of other points within my area that I haven’t got round to  properly putting in order.

However, the questions raised in the AI post aren’t as far from the debates Anomaly UK routinely deals in as I first thought.

Like the previous post, this falls firmly in the category of “speculations”.  I’m concerned with telling a consistent story; I’m not even arguing at this stage that what I’m describing is true of the real world today.  I’ll worry about that when the story is complete.

Most obviously, the emphasis on error relates directly to the Robin Hanson area of biases and wrongness in human thinking. It’s not surprising that Aretae jumped straight on it. If my hypothesis is correct, it would mean that Aretae’s category of “monkeybrains”, while of central importance, is very badly named: the problem with our brains is not their ape ancestry but their very purpose: attempting to reach practical conclusions from vastly inadequate data. That is what we do; it is what intelligence is, and the high error rate is not an implementation bug but an essential aspect of the problem.

(I suppose there are real “monkeybrains” issues in that we retain too high an error rate even when there actually is adequate data. But that’s not the normal situation.)

The AI discussion relates to another of Aretae’s primary issues: motivation. Motivation is getting an intelligence to do what it ought to be doing, rather than something pointless or counterproductive. When working with human intelligence, it’s the difficult bit. If artificial intelligence is subject to the problems I have suggested, then properly specifying the goals that the AI is to seek will quite likely also turn out to be the difficult bit.

I’m reminded in a vague way of Daniel Dennett’s writings on meaning and intentionality. Dennett’s argument, if I remember it accurately, is that all “meaning” in human intelligence ultimately derives from the externally-imposed “purpose” of evolutionary survival. Evolutionary successful designs behave as if seeking the goal of producing surviving descendants, and seeking this goal implies seeking sub-goals of feeding, defence, reproduction, etc. etc. etc. In humans, this produces an organ that explicitly/symbolically expresses and manipulates subgoals, but that organ’s ultimate goal is implicit in its construction, and not subject to symbolic manipulation.

The hard problem of motivating a human to do something, then, is the problem of getting their brain to treat that something as a subgoal of its non-explicit ultimate goal.

I wonder (in a very handwavy way) whether building an artificial intelligence might involve the same sort of problem of specifying what the ultimate goal actually is, and making the things we want it to do register properly as subgoals.
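To make that handwaving slightly more concrete, here is a toy sketch of the distinction I have in mind. It is not a claim about how any real AI is or should be built; the subgoal names and weights are invented purely for illustration. The point is that the ranking function stands in for the ultimate goal, which is baked into the construction and never represented symbolically, while the subgoals are the only things the agent explicitly manipulates.

```python
# Toy illustration only: an agent whose "ultimate goal" is implicit in a
# hard-wired ranking function, while subgoals are the explicit, symbolic part.

def implicit_drive(subgoal):
    # Stand-in for the non-explicit ultimate goal: the agent cannot inspect
    # or rewrite these weights, any more than we can edit our own evolved drives.
    weights = {"feed": 0.9, "defend": 0.8, "reproduce": 1.0, "file_reports": 0.1}
    return weights.get(subgoal, 0.0)

def choose_subgoal(candidates):
    # All the "reasoning" happens over explicit subgoals; the drive that
    # ranks them sits outside the symbolic machinery.
    return max(candidates, key=implicit_drive)

# The motivation problem, in these terms: getting "file_reports" to score
# highly under a drive we cannot specify directly.
print(choose_subgoal(["feed", "defend", "file_reports"]))  # prints "feed"
```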

The next issue is what an increased supply of intelligence would do to the economy.  Though an apostate libertarian, I have continued to hold to the Julian Simon line that “Human inventiveness is the Ultimate Resource”. To doubt that AI will have a revolutionarily beneficial effect is to reject Simon’s claim.

Within this hypothesis, the availability of humanlike (but not superhuman) AI is of only marginal benefit, so Simon is wrong. Then, what is the ultimate resource?

Simon is still closer than his opponents; the ultimate resource (that is, the minimum resource, as per the law of the minimum) is not raw materials or land. But it is not intelligence per se either; it is more the capacity to endure that intelligence within the wider system.

I write conventional business software. What is it I spend my time actually doing? The hard bit certainly isn’t getting the computer to do what I want. With modern programming languages and tools, that’s really easy — once I know what it is I want. There used to be people with the job title “programmer” whose job it was to do that, with separate “analysts” who told them what the computer needed to do, but the programmer was pretty much an obsolete role when I joined the workforce twenty years ago.

Conventional wisdom is that the hard bit is now working out what the computer needs to do — working with users and defining precisely how the computer fits into the wider business process. That certainly is a significant part of my job. But it’s not the hardest or most time-consuming bit.

The biggest part of the job is dealing with errors: testing software before release to try to find them; monitoring it after release to identify them, and repairing the damage they cause. The testing is really hard because the difficult bits of the software interact with multiple outside people and systems, and it’s not possible to fully simulate them. New software can be tested against pale imitations of the real world, and if it’s particularly risky, real users can be reluctantly drafted in to “user acceptance” testing of the software. But all that — simulating the world to test software, having users effectively simulate themselves to test software, and running not-entirely-tested software in the real world with a finger hovering over the kill button — is what takes most of the work.

This factor is brought out more by the improvements I mentioned in the actual writing of software, but it is by no means new. Fred Brooks wrote in The Mythical Man-Month that if writing a program took n days, integrating it into a system would take 3n days, properly productionising it (so that it would run reliably unsupervised) would take 3n days, and these are cumulative, so that a productionised, integrated version of the program would take something like ten times as long as a stand-alone developer-run version to produce.
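As a back-of-the-envelope check on that “ten times” figure, a minimal sketch (the day count below is an arbitrary illustration; the 3× factors are Brooks’ rules of thumb as quoted above, not measurements):

```python
# Brooks' multipliers, as quoted above: each step multiplies the effort.
n = 10                                   # days to write the stand-alone program
productionised = 3 * n                   # make it run reliably unsupervised
productionised_and_integrated = 3 * productionised  # and integrate it into a system
print(productionised_and_integrated / n)  # 9.0 -- "something like ten times"
```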

Adding more intelligences, natural or artificial, to the system is the same sort of problem. Yes, they can add value. But they can do damage also. Testing of them cannot really be done outside the system, it has to be done by the system itself.

If completely independent systems exist, different ideas can be tried out in them.  But we don’t want those: we want the benefits of the extra intelligence in our system.  A separate “test environment” that doesn’t actually include us is not a very good copy of the “production environment” that does include us.

All this relates to another long-standing issue in our corner of the blogosphere: education, signalling and credentialism. The argument is that the main purpose of higher education is not to improve the abilities of the students, but merely to indicate those students who can first get into and then endure the education system itself. The implication is that there is something very wrong with this. But one way of looking at it is that the major cost is not either producing or preparing intelligent people, but testing and safely integrating them into the system. The signalling in the education system is part of that integration cost.

Back on the Julian Simon question, what that means is that neither population nor raw materials are limiting the growth and advance of civilisation. Rather, civilisation is growing and advancing roughly as fast as it can integrate new members and new ideas. There is no ultimate resource.

It is not an original observation that the things that most hurt our civilisation are self-inflicted. The organisation of mass labour that produced industrialisation also produced the 20th century world wars. The flexible allocation of capital that drove the rapid development of the last quarter century gave us the spectacular misallocations whose results we are now suffering.

The normal attitude is that these accidents are avoidable; that we can find ways to stop messing up so badly. We can’t.  As the external restrictions on our advance recede, we approach the limit where the benefits of increases in the rate of advance are wiped out by more and more damaging mistakes.

Twentieth Century science-fiction writers recognised at least the catastrophic risk aspect of this situation. The idea that intelligence is scarce in the universe because it tends to destroy itself is suggested frequently.

SF authors and others emphasised the importance of space travel as a way of diversifying the risk to the species. But even that doesn’t initially provide more than one system into which advances can be integrated; at best it reduces the probability that a catastrophe becomes an extinction event. Even if we did achieve diversity, that wouldn’t help our system to advance faster, unless it encouraged more recklessness — we could take a riskier path, knowing that if we were destroyed other systems could carry on. I’m not sure I want that; it raises the same sort of philosophical questions as duplicating individuals for “backup” purposes. In any case, I don’t think even that recklessness would help: my point is not just that faster development creates catastrophic risk, but that it increases the frequency of more moderate disasters, like the current financial crisis, and so wipes out its own benefits.

Speculations regarding limitations of Artificial Intelligence

An older friend frequently asks me, as a technologist, when computers will have human-like intelligence, and what the social/economic effects of that will be.

I struggle to take the question seriously; AI is something that was dropped as a major research goal around the time I was a student twenty years ago, and it’s not an area I’m well-informed about. As I mentioned in my review of the rebooted “Knight Rider” TV series, a car that could hold up a conversation is a more futuristic idea in 2008 than it was back when David Hasselhoff was doing the driving.

And yet for all that, it’s hard to say what’s really wrong with the layman’s view that since computing power is increasing rapidly, it is an inevitability that whatever the human brain can do in the way of information processing, a computer should be able to do, quite possibly within the next few decades.

But what is “human-like intelligence”?  It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.
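To caricature that in a few lines of Python: this is emphatically not how Google Search or Siri actually work, and the documents and terms below are made up; it is just the shape of the mechanism I am hypothesising, indiscriminate association-counting followed by a crude statistical ranking.

```python
# Minimal caricature of "association plus statistics": count co-occurrences
# without caring what they mean, then rank by a crude statistic.
from collections import Counter
from itertools import combinations

documents = [
    ["knight", "rider", "car", "talks"],
    ["car", "talks", "siri"],
    ["knight", "templar", "medal"],
]

pair_counts = Counter()
for doc in documents:
    pair_counts.update(combinations(sorted(set(doc)), 2))

def associates(term, top=3):
    # Pick the "most relevant" associations by raw co-occurrence count:
    # no model of what the association means, no filter on its quality.
    scored = [(b if a == term else a, count)
              for (a, b), count in pair_counts.items() if term in (a, b)]
    return sorted(scored, key=lambda pair: -pair[1])[:top]

print(associates("car"))  # e.g. [('talks', 2), ('knight', 1), ('rider', 1)]
```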

There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.

If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.

But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

There are good reasons to suspect that human intelligence is very close to being as good as it can get.

One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).

The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.

Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences, suggests that making it better is not simply a case of improving the design. If the variation were down to different design, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.

The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.

The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong.  Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.

What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.

The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good.  Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.

Even the specialisms that humans have might be limited more by the cost they impose on the quality of general decision-making than by the cost of actually implementing the capability.

If that’s the situation, then throwing more computing resources at AI-type activity might not change things that much: computers can be as intelligent as humans, but not more intelligent. That’s not nothing, of course: it opens the door to replacing a lot of human activity with automated activity, with all the economic effects that implies.

There will be limitations in application because if human-like intelligence really is what I think it is, then the goals being sought by an AI are necessarily as vague as everything else: they will be clumps of associations, and the “intelligence” will just do the things that are associated with the goal clump. We won’t be able to “program” it the way we program a logic-based system, just kind of point it in the right direction in the same way we do when we type something into a Google search box.

I don’t know if what I’ve put here is new: I think this view of what the major issue in intelligence is (“associationism”?) is fairly widespread, but in all previous discussions I’ve seen or participated in, there’s been an assumption that if in x years from now we have artificial human-like intelligence, then in 2x years from now, or probably much less, we will have amazing superhuman artificial intelligence. That is what I am now doubting.

With intelligences available “in the lab” we might be able to prepare and direct them more effectively than we do now. But even that’s not obviously helpful: with human education, again, the limitation is not so much how long it takes and how much work it is as how sure we are that it is actually doing any good at all. We may be able to give an artificial intelligence the equivalent of a hundred years of university education, but is a person with that experience really going to make better decisions? The things we humans work hardest at learning and doing, accumulating raw information and reasoning logically, are the things that computers are already much better than us at. The things that only humans can do are the things we simply don’t know how to do better, even if we were to re-implement them on an electronic platform, speeded up, scaled up, scaled out.

Note that all the above is the product of making statistical guesses using masses of ill-understood unreliable associations, and is very likely to be wrong.

(Further thoughts: Relevance of AI)

Not Much

I’ve still been a bit too busy/distracted/unwell to produce very much here, but I am following the discussions. (Here’s an interesting one at Isegoria). I have unformed thoughts that need more work.

Nydwracu is a young man with a fair bit to say, mostly on twitter, and only some of it incomprehensible to those of us not au fait with Japanese cartoons and whatever popular beat combos the kids are listening to these days.  I’ve stuck him in the sidebar now that I’ve bitten the bullet of moving to the new Blogger renderer.  “Why I am not” seems to have gone dark these last 6 months.

I’ve ploughed through a lot of Breivik’s rant/manifesto/whatever, without being very impressed.  He could have done with paying more attention to what sort of society he wants to live in, and less to what medals the Knights Templar should be handing out to terrorists.

I’m pondering whether I should reconsider my attitude to feudalism; I’ve maintained that the drive to appropriate feudal privileges to the crown up to the 1600s was a good thing, enabled by better communication and administration, but Buckethead makes the point that military technology was a key driver of the process, in the form of the shift of power from mounted knights to massed pikemen/musketeers.  Is current military technology compatible with unitary power? Something to think about.

I’m still concerned about the viability of an atheist reactionary movement. Since I’m opposed to political activism, I see the only reactionary possibility being a cultural development laying the basis for a future reactionary regime. I’m not sure it’s realistic for us to advance a reactionary culture outside of the churches. It may be that the most we can be is cheerleaders for Christian reactionaries. But their struggle is initially and primarily against progressives within their own churches, and there’s little we can do to help them from outside.

Does Arnold Kling’s vision of a Diamond Age style emergence of traditionalism offer an alternative? The problem is surely that the progressive regime does not permit such traditionalist groups to live within it. In Stephenson’s version, I think the old order collapsed first, and “Vicky” society originated within the political vacuum.

Detaching from politics

I do not read a newspaper. The only television I watch is “Doctor Who”, “Strictly Come Dancing”, snooker, and occasionally “Mythbusters” if I’m around when the kids are watching it. I used to watch “Have I Got News For You”, but now I find it too unpleasant to watch anything that takes politics as seriously as it does. I cannot remember ever being able to watch “Question Time” or any serious political reporting without descending into a screaming rage.

Should you be like me? Absolutely not. I am not nearly detached enough from politics. I look at Google News. I follow people on Twitter who talk about current affairs. I see the headlines on the newsstands.  All these are things that should be avoided as if they were heroin or crystal meth. Maybe a better analogy would be that they are ritually unclean and one should be cleansed or purged after exposure to them.

An example of my contaminated, junkie state is that I became aware, somehow, that Jeremy Clarkson had said that striking public sector workers should be shot. O, for a mode of living by which I could have avoided knowing such a thing!

Now I find out, from Language Log, that when he made those remarks, not only was he joking, as everybody already knows, but he was explicitly, in so many words, parodying himself and his “BBC token right-wing nutjob” persona.

With the proper perspective, this makes no difference. Whether he was making a joke about the strikers, making a joke about his other jokes, or even if he was completely serious, it still wouldn’t be important enough for any intelligent person to give it a moment’s thought. But for those without the proper perspective; for those, like myself, who are far too wrapped up in the political process, in that we look at the headlines on Google News a couple of times a day and know who the Prime Minister is, it is a vital reminder. This thing, which was obviously a pointless fuss about something of absolutely no importance, was actually a pointless fuss about nothing at all.

And every other story is the same. "Nick Clegg has committed the government to a crackdown on excessive executive pay". What does that mean? It means nothing. It means no more than that Jeremy Clarkson wants to shoot strikers. It means less than that Holly Valance’s paso doble was better than Chelsee Healey’s jive. Nick Clegg is a meaningless figurehead of a meaningless junior coalition partner involved in meaningless posturing, while the decisions actually being made, which have an effect somewhere between nothing and negligible, are being made elsewhere. That sounds like I am positing some hidden conspiracy—if only! The real decisions are being made essentially at random, swayed by forces that are as large and as ill-understood as the climate, and by whoever by accident happens to be in the wrong place at the wrong time, for reasons that are as remote from anything we might care about as a butterfly’s wings in Brazil.

Teach us to care and not to care.

Teach us to sit still.

Queens and Kings

It has been agreed at the Commonwealth Heads of Government meeting that the laws governing the succession of the British Monarchy will be changed to give older sisters priority over their younger brothers.

There are pros and cons to this decision, but on balance I think it is probably for the best.

The drawbacks: first, making any change at all weakens the authority of tradition. If this can be changed because fashion requires it, what will be changed next? I’m not too disturbed by this argument, because a couple of hundred years at least of tradition will have to be upended when we restore the monarchy as the government and get rid of parliament and elections and the rest of it.

Second, I would prefer to have a King rather than a Queen. I worry that a woman is more likely to be dominated by an outside establishment than a man is. Note that the considerations are quite different than when drawing up requirements for a job. When appointing someone to a position, the reasonable thing is to evaluate their qualities as an individual. If the best man for the job happens to be a woman, that’s perfectly fine. But a monarch is a different matter: nobody is making the appointment, the whole point is that we get who we get, and individual qualities don’t come into it. Given that, we want the best odds of getting a sufficiently strong personality, and the odds seem better with a law that disproportionately selects males. A restoration is likely to need exceptionally strong characters for at least a couple of reigns.

The conventional wisdom is that of the last four ruling queens, three at least were very successful. In the cases of Victoria and Elizabeth II, I have my doubts: I think their reputations rest more on their acquiescence towards the ruling establishment than anything else. Elizabeth I kicked serious arse, though, which goes a long way towards alleviating my worries on this score.

So much for the disadvantages. The advantages are clear. The monarch must have as strong a claim to his title as possible. If this step is not taken now, it will always be floating around as a possibility, and can be used as a weapon against any King with an older sister. If we are going to have the potential uncertainty settled for good, it can only be settled in this direction.

And, as a more minor point, it is satisfying that this is being treated as significant. We are talking about which of the Queen’s great-grandchildren will become monarch; the implication is that the monarchy will be with us for another three generations. A lot will happen in that time, and through all of it, the option will be there in the background to write off the demagogues and the apparatchiks and take another path.

It is also satisfying that this has not, so far, been a matter for public consultation or debate. I’m expressing an opinion here, but I don’t want the decision to be based on popular opinion — much better that it be announced by a ruling clique, even if that be our current shower of politicians.

Into the bargain, they’re allowing a monarch to marry a Catholic. Again, I’m unsure. I can think of no direct problem with having a monarch who is married to a Catholic. But have I thought of everything?