Relevance of AI

I felt a bit bad writing the last post on artificial intelligence: it’s outside my usual area of writing, and as I’d just admitted, there are a number of other points within my area that I haven’t got round to properly putting in order.

However, the questions raised in the AI post aren’t as far from the debates Anomaly UK routinely deals in as I first thought.

Like the previous post, this falls firmly in the category of “speculations”.  I’m concerned with telling a consistent story; I’m not even arguing at this stage that what I’m describing is true of the real world today.  I’ll worry about that when the story is complete.

Most obviously, the emphasis on error relates directly to the Robin Hanson area of biases and wrongness in human thinking. It’s not surprising that Aretae jumped straight on it. If my hypothesis is correct, it would mean that Aretae’s category of “monkeybrains”, while of central importance, is very badly named: the problem with our brains is not their ape ancestry, but their very purpose: attempting to reach practical conclusions from vastly inadequate data. That is what we do; it is what intelligence is, and the high error rate is not an implementation bug but an essential aspect of the problem.

(I suppose there are real “monkeybrains” issues in that we retain too high an error rate even when there actually is adequate data. But that’s not the normal situation.)

The AI discussion relates to another of Aretae’s primary issues: motivation. Motivation is getting an intelligence to do what it ought to be doing, rather than something pointless or counterproductive. When working with human intelligence, it’s the difficult bit. If artificial intelligence is subject to the problems I have suggested, then properly specifying the goals that the AI is to seek will quite likely also turn out to be the difficult bit.

I’m reminded in a vague way of Daniel Dennett’s writings on meaning and intentionality. Dennett’s argument, if I remember it accurately, is that all “meaning” in human intelligence ultimately derives from the externally-imposed “purpose” of evolutionary survival. Evolutionary successful designs behave as if seeking the goal of producing surviving descendants, and seeking this goal implies seeking sub-goals of feeding, defence, reproduction, etc. etc. etc. In humans, this produces an organ that explicitly/symbolically expresses and manipulates subgoals, but that organ’s ultimate goal is implicit in its construction, and not subject to symbolic manipulation.

The hard problem of motivating a human to do something, then, is the problem of getting their brain to treat that something as a subgoal of its non-explicit ultimate goal.

I wonder (in a very handwavy way) whether building an artificial intelligence might involve the same sort of problem of specifying what the ultimate goal actually is, and making the things we want it to do register properly as subgoals.

The next issue is what an increased supply of intelligence would do to the economy.  Though an apostate libertarian, I have continued to hold to the Julian Simon line that “Human inventiveness is the Ultimate Resource”. To doubt that AI will have a revolutionarily beneficial effect is to reject Simon’s claim.

Within this hypothesis, though, the availability of humanlike (but not superhuman) AI is of only marginal benefit, so Simon is wrong. What, then, is the ultimate resource?

Simon is still closer to the truth than his opponents; the ultimate resource (that is, the limiting resource, as per the law of the minimum) is not raw materials or land. But if it is not intelligence per se, it is something more like the capacity of the wider system to absorb and endure that intelligence.

I write conventional business software.  What is it I spend my time actually doing? The hard bit certainly isn’t getting the computer to do what I want. With modern programming languages and tools, that’s really easy — once I know what it is I want.  There used to be people with the job title “programmer” whose job it was to do that, with separate “analysts” who told them what the computer needed to do, but the programmer was pretty much an obsolete role when I joined the workforce twenty years ago.

Conventional wisdom is that the hard bit is now working out what the computer needs to do — working with users and defining precisely how the computer fits into the wider business process. That certainly is a significant part of my job. But it’s not the hardest or most time-consuming bit.

The biggest part of the job is dealing with errors: testing software before release to try to find them; monitoring it after release to identify them; and repairing the damage they cause. The testing is really hard because the difficult bits of the software interact with multiple outside people and systems, and it’s not possible to fully simulate them. New software can be tested against pale imitations of the real world, and if it’s particularly risky, real users can be reluctantly drafted in for “user acceptance” testing of the software. But all that — simulating the world to test software, having users effectively simulate themselves to test software, and running not-entirely-tested software in the real world with a finger hovering over the kill button — is what takes most of the work.
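
To make the “pale imitation” point concrete, here is a minimal sketch in Python; the BillingService, its payment gateway and their methods are hypothetical names invented purely for illustration, not anything from a real codebase:

```python
import unittest
from unittest.mock import Mock

# Hypothetical production code: a thin wrapper around an external payment system.
class BillingService:
    def __init__(self, gateway):
        self.gateway = gateway  # in production, a real network client to an outside system

    def charge_customer(self, customer_id, amount_pence):
        if amount_pence <= 0:
            raise ValueError("amount must be positive")
        response = self.gateway.charge(customer_id, amount_pence)
        return response["status"] == "ok"

class BillingServiceTest(unittest.TestCase):
    def test_successful_charge(self):
        # The "pale imitation": a mock standing in for the real outside system,
        # which cannot safely be exercised from a test environment.
        fake_gateway = Mock()
        fake_gateway.charge.return_value = {"status": "ok"}
        service = BillingService(fake_gateway)

        self.assertTrue(service.charge_customer("c-123", 500))
        fake_gateway.charge.assert_called_once_with("c-123", 500)

if __name__ == "__main__":
    unittest.main()
```

The gaps between a mock like this and the real outside system are exactly where the errors hide, which is why this kind of testing is the easy part and the production environment is the real test.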

This factor is brought out more by the improvements I mentioned in the actual writing of software, but it is by no means new. Fred Brooks wrote in The Mythical Man-Month that if writing a program took n days, integrating it into a system would take 3n days, properly productionising it (so that it would run reliably unsupervised) would take 3n days, and these are cumulative, so that a productionised, integrated version of the program would take something like ten times as long as a stand-alone developer-run version to produce.
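
As a trivial illustration of how those multipliers compound (made-up effort numbers, with only the roughly-threefold factors taken from Brooks as quoted above):

```python
def effort(days_standalone, integrated=False, productionised=False):
    """Estimated effort in days, applying Brooks's cumulative factors."""
    factor = 1
    if integrated:
        factor *= 3      # part of a wider system: interfaces, integration testing
    if productionised:
        factor *= 3      # runs reliably unsupervised: edge cases, documentation, support
    return days_standalone * factor

print(effort(10))                                        # 10: stand-alone, developer-run
print(effort(10, integrated=True))                       # 30: integrated into a system
print(effort(10, integrated=True, productionised=True))  # 90: roughly ten times as long
```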

Adding more intelligences, natural or artificial, to the system is the same sort of problem. Yes, they can add value. But they can also do damage. Testing of them cannot really be done outside the system; it has to be done by the system itself.

If completely independent systems exist, different ideas can be tried out in them.  But we don’t want those: we want the benefits of the extra intelligence in our system.  A separate “test environment” that doesn’t actually include us is not a very good copy of the “production environment” that does include us.

All this relates to another long-standing issue in our corner of the blogosphere: education, signalling and credentialism. The argument is that the main purpose of higher education is not to improve the abilities of the students, but merely to indicate those students who can first get into and then endure the education system itself. The implication is that there is something very wrong with this. But one way of looking at it is that the major cost is not either producing or preparing intelligent people, but testing and safely integrating them into the system. The signalling in the education system is part of that integration cost.

Back on the Julian Simon question, what that means is that neither population nor raw materials are limiting the growth and advance of civilisation. Rather, civilisation is growing and advancing roughly as fast as it can integrate new members and new ideas. There is no ultimate resource.

It is not an original observation that the things that most hurt our civilisation are self-inflicted. The organisation of mass labour that produced industrialisation also produced the 20th century world wars. The flexible allocation of capital that drove the rapid development of the last quarter century gave us the spectacular misallocations with the results we’re now suffering.

The normal attitude is that these accidents are avoidable; that we can find ways to stop messing up so badly. We can’t.  As the external restrictions on our advance recede, we approach the limit where the benefits of increases in the rate of advance are wiped out by more and more damaging mistakes.

Twentieth Century science-fiction writers recognised at least the catastrophic-risk aspect of this situation. The suggestion that intelligence is so scarce in the universe because it tends to destroy itself comes up frequently.

SF authors and others emphasised the importance of space travel as a way of diversifying the risk to the species. But even that doesn’t initially provide more than one system into which advances can be integrated; at best it reduces the probability that a catastrophe becomes an extinction event. Even if we did achieve diversity, that wouldn’t help our system to advance faster, unless it encouraged more recklessness — we could take a riskier path, knowing that if we were destroyed other systems could carry on. I’m not sure I want that; it raises the same sort of philosophical questions as duplicating individuals for “backup” purposes. In any case, I don’t think even that recklessness would help: my point is not just that faster development creates catastrophic risk, but that it increases the frequency of more moderate disasters, like the current financial crisis, and so wipes out its own benefits.

Speculations regarding limitations of Artificial Intelligence

An older friend frequently asks me, as a technologist, when computers will have human-like intelligence, and what the social/economic effects of that will be.

I struggle to take the question seriously; AI is something that was dropped as a major research goal around the time I was a student twenty years ago, and it’s not an area I’m well-informed about. As I mentioned in my review of the rebooted “Knight Rider” TV series, a car that could hold up a conversation is a more futuristic idea in 2008 than it was back when David Hasselhoff was doing the driving.

And yet for all that, it’s hard to say what’s really wrong with the layman’s view that since computing power is increasing rapidly, it is an inevitability that whatever the human brain can do in the way of information processing, a computer should be able to do, quite possibly within the next few decades.

But what is “human-like intelligence”?  It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.
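
As a very rough sketch of that “associations plus statistics” idea (a toy illustration in Python, not a claim about how Google Search or Siri actually work): absorb co-occurrences indiscriminately, then rank candidates for a query with a crude statistic.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy "experience": each entry is just a bag of items seen together.
corpus = [
    {"car", "talking", "driver"},
    {"car", "engine", "petrol"},
    {"talking", "phone", "assistant"},
    {"driver", "road", "car"},
]

# Absorb associations indiscriminately: count how often items co-occur,
# without being systematic about what the association means.
cooccur = defaultdict(int)
for bag in corpus:
    for a, b in combinations(sorted(bag), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def most_relevant(query, k=3):
    # The "statistical algorithm" is deliberately crude: rank by raw co-occurrence count.
    scores = {other: n for (item, other), n in cooccur.items() if item == query}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(most_relevant("car"))  # e.g. ['driver', ...], ordered by association strength
```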

There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.

If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.
But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

There are good reasons to suspect that human intelligence is very close to being as good as it can get.
One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).

The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.

Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences suggests that making it better is not simply a case of improving the design. If the variation were down to different design, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.

The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.

The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong.  Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.

What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.

The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good.  Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.

Even the specialisms that humans have might be limited more by the cost they impose on the quality of general decision-making than by the cost of actually implementing the capability.

If that’s the situation, then throwing more computing resources at AI-type activity might not change things that much: computers can be as intelligent as humans, but not more intelligent. That’s not nothing, of course: it opens the door to replacing a lot of human activity with automated activity, with all the economic effects that implies.

There will be limitations in application because if human-like intelligence really is what I think it is, then the goals being sought by an AI are necessarily as vague as everything else: they will be clumps of associations, and the “intelligence” will just do the things that are associated with the goal clump. We won’t be able to “program” it the way we program a logic-based system, just kind of point it in the right direction in the same way we do when we type something into a Google search box.
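
To illustrate the difference between programming a goal and merely pointing the system in the right direction, here is a toy continuation of the earlier association sketch; the goal is just a vague clump of terms, and every name in it is hypothetical:

```python
# Hypothetical goal "clump": a vague bag of associated terms, not a precise specification.
goal = {"tidy", "desk", "organise"}

# Candidate actions, each known to the system only as its own bag of associations.
actions = {
    "file_papers":  {"desk", "paper", "organise"},
    "empty_bin":    {"tidy", "rubbish", "bin"},
    "play_snooker": {"snooker", "table", "cue"},
}

def association_score(action_terms, goal_terms):
    # Crude statistic again: how much the action's associations overlap the goal clump.
    return len(action_terms & goal_terms)

# "Pointing it in the right direction": the system just does whatever is most
# associated with the goal clump, rather than following an explicit program.
best = max(actions, key=lambda name: association_score(actions[name], goal))
print(best)  # -> file_papers
```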

I don’t know if what I’ve put here is new: I think this view of what the central issue in intelligence is (“associationism”?) is fairly widespread, but in all previous discussions I’ve seen or participated in, there’s been an assumption that if x years from now we will have artificial human-like intelligence, then 2x years from now, or probably much less, we will have amazing superhuman artificial intelligence. That is what I am now doubting.

With intelligences available “in the lab” we might be able to prepare and direct them more effectively than we do now. But even that’s not obviously helpful: with human education, again, the limitation is not so much how long it takes and how much work it is, but rather how sure we are that it is actually doing any good at all.  We may be able to give an artificial intelligence the equivalent of a hundred years of university education, but is a person with that experience really going to make better decisions? The things we humans work hardest at learning and doing, accumulating raw information and reasoning logically, are the things that computers are already much better at than we are. The things that only humans can do are the things we simply don’t know how to do better, even if we were to re-implement them on an electronic platform, speeded up, scaled up, scaled out.

Note that all the above is the product of making statistical guesses using masses of ill-understood unreliable associations, and is very likely to be wrong.

(Further thoughts: Relevance of AI)

Not Much

I’ve still been a bit too busy/distracted/unwell to produce very much here, but I am following the discussions. (Here’s an interesting one at Isegoria). I have unformed thoughts that need more work.

Nydwracu is a young man with a fair bit to say, mostly on twitter, and only some of it incomprehensible to those of us not au fait with Japanese cartoons and whatever popular beat combos the kids are listening to these days.  I’ve stuck him in the sidebar now that I’ve bitten the bullet of moving to the new Blogger renderer.  “Why I am not” seems to have gone dark these last 6 months.

I’ve ploughed through a lot of Breivik’s rant/manifesto/whatever, without being very impressed.  He could have done with paying more attention to what sort of society he wants to live in, and less to what medals the Knights Templar should be handing out to terrorists.

I’m pondering whether I should reconsider my attitude to feudalism; I’ve maintained that the drive to appropriate feudal privileges to the crown up to the 1600s was a good thing, enabled by better communication and administration, but Buckethead makes the point that military technology was a key driver of the process, in the form of the shift of power from mounted knights to massed pikemen/musketeers.  Is current military technology compatible with unitary power? Something to think about.

I’m still concerned about the viability of an atheist reactionary movement. Since I’m opposed to political activism, I see the only reactionary possibility being a cultural development laying the basis for a future reactionary regime. I’m not sure it’s realistic for us to advance a reactionary culture outside of the churches. It may be that the most we can be is cheerleaders for Christian reactionaries.  But their struggle is initially and primarily against progressives within their own churches, and there’s little we can do to help them from outside.

Does Arnold Kling’s vision of a Diamond Age style emergence of traditionalism offer an alternative? The problem is surely that the progressive regime does not permit such traditionalist groups to live within it. In Stephenson’s version, I think the old order collapsed first, and “Vicky” society originated within the political vacuum.

Detaching from politics

I do not read a newspaper. The only television I watch is “Doctor Who”, “Strictly Come Dancing”, snooker, and occasionally “Mythbusters” if I’m around when the kids are watching it. I used to watch “Have I Got News For You”, but now I find it too unpleasant to watch anything that takes politics as seriously as it does. I cannot remember ever being able to watch “Question Time” or any serious political reporting without descending into a screaming rage.

Should you be like me? Absolutely not. I am not nearly detached enough from politics. I look at Google News. I follow people on Twitter who talk about current affairs. I see the headlines on the newsstands.  All these are things that should be avoided as if they were heroin or crystal meth. Maybe a better analogy would be that they are ritually unclean and one should be cleansed or purged after exposure to them.

An example of my contaminated, junkie state is that I became aware, somehow, that Jeremy Clarkson had said that striking public sector workers should be shot. O, for a mode of living by which I could have avoided knowing such a thing!

Now I find out, from Language Log, that when he made those remarks, not only was he joking, as everybody already knows, but he was explicitly, in so many words, parodying himself and his “BBC token right-wing nutjob” persona.

With the proper perspective, this makes no difference. Whether he was making a joke about the strikers, making a joke about his other jokes, or even if he was completely serious, it still wouldn’t be important enough for any intelligent person to give it a moment’s thought. But for those without the proper perspective; for those, like myself, who are far too wrapped up in the political process, in that we look at the headlines on Google News a couple of times a day and know who the Prime Minister is, it is a vital reminder. This thing, which was obviously a pointless fuss about something of absolutely no importance, was actually a pointless fuss about nothing at all.

And every other story is the same. “Nick Clegg has committed the government to a crackdown on excessive executive pay”. What does that mean? It means nothing. It means no more than that Jeremy Clarkson wants to shoot strikers. It means less than that Holly Valance’s paso doble was better than Chelsee Healey’s jive. Nick Clegg is a meaningless figurehead of a meaningless junior coalition partner involved in meaningless posturing, while the decisions actually being made, which have an effect somewhere between nothing and negligible, are being made elsewhere. That sounds like I am positing some hidden conspiracy—if only! The real decisions are being made essentially at random, swayed by forces that are as large and as ill-understood as the climate, and by whoever by accident happens to be in the wrong place at the wrong time, for reasons that are as remote from anything we might care about as a butterfly’s wings in Brazil.

Teach us to care and not to care.

Teach us to sit still.

Queens and Kings

It has been agreed at the Commonwealth Heads of Government meeting that the laws governing the succession of the British Monarchy will be changed to give older sisters priority over their younger brothers.

There are pros and cons to this decision, but on balance I think it is probably for the best.

The drawbacks: first, making any change at all weakens the authority of tradition. If this can be changed because fashion requires it, what will be changed next? I’m not too disturbed by this argument, because a couple of hundred years at least of tradition will have to be upended when we restore the monarchy as the government and get rid of parliament and elections and the rest of it.

Second, I would prefer to have a King rather than a Queen. I worry that a woman is more likely to be dominated by an outside establishment than a man is. Note that the considerations are quite different than when drawing up requirements for a job. When appointing someone to a position, the reasonable thing is to evaluate their qualities as an individual. If the best man for the job happens to be a woman, that’s perfectly fine. But a monarch is a different matter: nobody is making the appointment, the whole point is that we get who we get, and individual qualities don’t come into it. Given that, we want the best odds of getting a sufficiently strong personality, and the odds seem better with a law that disproportionately selects males. A restoration is likely to need exceptionally strong characters for at least a couple of reigns.

The conventional wisdom is that of the last four ruling queens, three at least were very successful. In the cases of Victoria and Elizabeth II, I have my doubts: I think their reputations rest more on their acquiescence towards the ruling establishment than anything else. Elizabeth I kicked serious arse, though, which goes a long way towards alleviating my worries on this score.

So much for the disadvantages. The advantages are clear. The monarch must have as strong a claim to his title as possible. If this step is not taken now, it will always be floating around as a possibility, and can be used as a weapon against any King with an older sister. If we are going to have the potential uncertainty settled for good, it can only be settled in this direction.

And, as a more minor point, it is satisfying that this is being treated as significant. We are talking about which of the Queen’s great-grandchildren will become monarch; the implication is that the monarchy will be with us for another three generations. A lot will happen in that time, and through all of it, the option will be there in the background to write off the demagogues and the apparatchiks and take another path.

It is also satisfying that this has not, so far, been a matter for public consultation or debate. I’m expressing an opinion here, but I don’t want the decision to be based on popular opinion — much better that it be announced by a ruling clique, even if that be our current shower of politicians.

Into the bargain, they’re allowing a monarch to marry a Catholic. Again, I’m unsure. I can think of no direct problem with having a monarch who is married to a Catholic. But have I thought of everything?

Nothing To Envy

I’ve started to take more interest in North Korea. The reason for this is an embarrassment: I have argued that a possible route to a form of government closer to what I want to see is that a one-party state comes under the control of a single strong leader who is able to convert it into a hereditary monarchy, by concentrating power to himself so strongly that he is able to leave it to his heir. It later occurred to me that the country which has come closest to doing that is North Korea, now anticipating the succession of the third generation of the Kim dynasty.

Like I said, an embarrassment. Probably the one-party-state to hereditary monarchy thing isn’t such a good idea. But I’m amusing myself by studying my own reaction to this inconvenience to my theories. It’s interesting to play at being rather more attached to the theory than I really am, and look for cynical ways to rebut arguments based on the evidence of North Korea.

The most fun approach would be to argue that North Korea is actually really well governed, and the problems it is perceived to have are either falsified by the media, or are the results of steps taken against it by jealous republicans abroad.

It is the sheer ludicrousness of that argument that has induced me to look at the question at this “meta” level. North Korea is pretty much the poorest and most backward country in the entire world, while the part of Korea given a different form of government by an arbitrary line of latitude has become one of the dozen or so richest and most advanced. If North Korea had been merely bad, I might have seriously attempted a defence of its system, but as things are it is impossible to do so with a straight face. That situation makes some degree of self-examination inevitable: exactly how stupid does an argument have to be for me to reject it, as I have the “North Korea is actually really well governed” line? And what does that say about me?

(This interesting point from Nathan Bashaw seems relevant).

Part of the question is how easy it is to dodge the problem. And here I really can. For one thing, we don’t really know who has the power in North Korea — for all we can tell, Kim may be an empty figurehead entirely under the control of military and party officials. In any case, the problem in North Korea is not who is in charge, it is that it is attached to a collectivist economic system. Kim is legitimate not because he is the anointed heir of Kim Il-Sung, but because he is the carrier of the flame of communism.

That gives us another data point: North Korea does not in fact convince me that hereditary government is a bad idea. Despite the problem that everywhere else in the world has dumped NK-style collectivism, with the possible exception of Cuba, which… is ruled by the brother of the previous leader. Hmmm.

I don’t think I can really draw conclusions about attachment to ideology here. But the question’s still open: I’m going to keep an eye on the process of my adapting judgement to ideology and vice versa. I’m well placed to do that, because I am not in a social group united by my ideology — other than a few other bloggers. Also the fact that I’ve recently abandoned ideological positions I held for most of my adult life gives me an extra reserve of cynicism to draw on.

I already started with yesterday’s post, where I deliberately went through the motions of drawing ideological conclusions from the undercover policing scandal.

Aretae has also been writing along these lines recently. One of his most important points is that there is no basis for anyone to be certain or even nearly certain about these difficult ideological issues. When he puts forward ideas, it’s all 60% this and 70% that.

That’s very sound. But is that the way anyone really sees things? The reason I’m able to take this detached approach to my royalist ideology is that I genuinely do have doubts. Again, that’s probably because it’s fairly new to me, and it’s out beyond the lunatic fringe in the public debate.

For a comparison, take the issue of climate change. I am persuaded by the evidence, and have written here, that there is considerable room for doubt of the pronouncements of the climate science experts. I claim that the evidence tends to support the position that dangerous climate change is not happening and will not happen.

That’s fine. But what I haven’t said in so many words is that I have a deep inner certainty that anthropogenic global warming is all rubbish. That certainty cannot be justified by a reasoned analysis of the evidence: in no way do I have sufficient knowledge or understanding of the science to achieve such confidence in any conclusion. Where does this certainty come from?

If it is simply overconfidence, that’s almost the least bad possibility. At least in that case, the direction of my conclusion is based on reason. What’s more worrying is the possibility that the inner certainty is totally independent of my reason, and the reasoned conclusions I have drawn are only rationalisations of my faith.

If that’s the case, where did the faith come from? I would have to have made some kind of intuitive, rather than rational, judgement on one side of a very complex issue. What is the source of that intuition? I don’t know, though I could take a few guesses. Is that intuition to be trusted? In general, absolutely not. There are too many cases of people reaching opposite certainty on the basis of intuition, and there is no basis for judging one person’s intuition against another.

Now maybe my intuition, unlike yours, is reliable. It does have a fairly decent track record. Also, I’m not in the habit of being certain: of all the other things I have written about on this blog, I don’t think there are any that I have the same inner certainty about that I have about AGW.

Freemail

In The Guardian, a journalist tells of her experience of having her email account hacked.

“The realisation dawns that the email account is the nexus of the modern world. It’s connected to just about every part of our daily life, and if something goes wrong, it spreads. But the biggest effect is psychological. On some level, your identity is being held hostage.

“The company that presents itself as the friendly face of the web doesn’t have a single human being to talk to in these circumstances.”

I love free stuff. I use free blog services and free email services, and I see it as a double advantage that, as well as not costing me anything, these services are somewhat at arm’s length from my identity. Possession of a few keys and passwords is what makes me “anomalyuk”, nothing more than that.

My real-world identity is another matter. My personal email accounts, with which I support my personal relationships and business relationships, are provided to me — here’s a novelty — as a paying customer. The providers’ customer services may be good or bad, but at least they exist and I can use them. It makes no difference to a Gmail user how good Google’s customer service is, because Ms Davis and other Gmail users are not Google’s customers at all.

I actually pay a couple of quid a month just for my email service, but that isn’t necessary. Like you, Rowena Davis has an ISP — possibly more than one, if she gets her mobile separate from her home internet. They will provide her an email address, as part of the service she is paying for. They know it belongs to her, because she pays the bill, and if, as the bill-payer, she phones up and needs it reset, they will do it for her. However, for this service, which she correctly observes is the nexus of her life, she has chosen to rely instead on a handed-out-on-the-street freebie.

I hereby declare that to be a Bad Idea.

Davis’s story links to another recent one, of a 79-year-old charity volunteer who went through the same ordeal. Twice. The police told her: don’t use free email services. Her conclusion at the end of the article: the police need to devote more resources. Not her — she’s sticking with free.

There is one drawback with using your ISP’s email service, which is that you may lose it if you want to change ISPs. As it happens, two generations of free services have come and pretty much gone (remember bigfoot? rocketmail?) in the time I’ve been with my current ISP, but that may be a fluke. And in any case, the old addresses are still supported.

If that concerns you, then do what I do and pay for it. One leading provider charges 69p a month for email hosting, plus £2.99 a year for domain registration — giving you an address that is transferable across providers and that looks more professional than a vodafone or gmail address. And they have 24×7 telephone support. Alternatively, Yahoo! do an email service for $19.99 a year. Bigfoot, it emerges, are still around, and charge $19.95 a quarter. Is £1 or £3 a month really not worth paying for “the nexus of the modern world”? I should emphasise: it’s not just that paying for the email makes it feasible for the provider to offer you some level of support: the mere fact of there being a payment makes it enormously easier for them to identify you, and therefore to clear up these fraud issues.

The surprising thing is that they’re not marketing this more aggressively. The problems Davis had have been common for a few years: everyone in her position should be paying for decent email, but the providers aren’t advertising on that basis. Google don’t offer a premium service like Yahoo’s, Microsoft charge $9.95 a month, which is a bit steep, and the services just aren’t marketed.

ISPs could offer domain and mail hosting as an extra, but the consumer-oriented ones don’t, or don’t push it.

Possibly the providers are worried about adverse selection: if they advertise on the basis of being able to handle hacking incidents, they’re offering hostages to fortune in terms of the inevitable dissatisfied customers undermining their name with complaints.

As a disinterested (and irresponsible) third party, I will do it for them: Do not use Gmail. Do not use MSN Hotmail, unless you are paying the $9.95 a month for premium (which I don’t recommend, because it’s too much). Use your ISP’s email account if you’re not planning to move or switch in the next five years. Otherwise get a personal domain and get a basic email service from the likes of 1and1, or, if that’s too complicated (and it is a bit complicated), get Yahoo! Plus for $19.95 a year. I’m not recommending these through experience, just through looking for email services that cost a little money and offer telephone support.

If you’re not willing to pay, or you’re not willing to give up Gmail (which, I admit, is a very nicely done service), then remember that you have nobody to whine to if your Gmail is hacked. You have other options, and you have chosen to trust your email to a company you have no commercial relationship with. I have nothing against Google, but if you want a company to have responsibilities towards you, you have to pay them.

Who has the power to authorise perjury?

One of the most striking things about the last few decades is that relatively low-ranking elements of the state apparatus have arrogated power to themselves without any legal or legislative basis, and that this has been calmly accepted by the public at large.

Because these seizures of power are technically illegal, they can be challenged in the courts, and occasionally are. See for instance Neil Herron’s campaign against imposition of arbitrary parking rules by local councils.

While the courts can, and technically should, rule in favour of eccentrics such as Herron, they sometimes exhibit reluctance to contradict the common assumptions of society, which are that someone who works for the council or the police or a government department can do whatever they decide within the area relevant to their job.

Because it is so accepted, it is not easy to spot, and only becomes really obvious when they overreach. What is interesting about the police decision to “authorize” an undercover officer to give false personal and identity details under oath in a criminal prosecution is not whether they will actually get away with it this time (I assume they won’t), but that they ever imagined they could.

The same effect was evident with the MP expenses affair: I quoted at length Nadine Dorries’ insistence that a group of party whips and civil servants had encouraged MPs to make false expenses claims, and that that actually made it OK.

A more significant example is the Foot and Mouth cull back in 2001, in which, it is widely argued, the culling of healthy cattle was done without any legal authority.

At this stage in the post, I should turn these observations into a neat argument in favour of whatever broad political position I am in favour of at the moment (formalism, monarchy, etc.) I suppose I just about could manage it: lines of authority are unclear, nobody ultimately admits to being responsible for anything, so people on the spot feel obliged to just assume responsibility, blah, blah, blah. If I thought about it and worked on it for a while, I might really come to take it seriously as an argument, but right now it feels a little dishonest, so I’d rather just put the whole thing forward as an observation and a point for further consideration.

Slavery

One issue that comes up when you declare that the last 400 years of political “progress” are a bad thing is slavery. Lobbyists, the International Olympic Committee, sustainability facilitators, interior design licensing, bank bailouts, the Milk Marketing Board, these are indeed changes for the worse, but are you saying you want to bring back slavery?

There are a couple of answers to that. One is to argue that the lot of many in the modern world is no better than slavery, so that, even if slavery is bad, it’s not necessarily worse than what we have now.

In “The Servile State”, Hilaire Belloc predicted that capitalism would necessarily lead ultimately to nationalised slavery, as the state would be forced to take responsibility for the landless poor, and would still need them to work.

That things haven’t evolved quite as Belloc predicted is due only to the decline in the social usefulness of unskilled work. When, from time to time, the question comes up of forcing the unemployed to do some kind of government-organised work in exchange for their handouts, there is only a little opposition premised on the basis that it is unfair or inhumane to the slaves themselves. The idea fails on the grounds that it will cost more than paying them not to work, and that it will constitute cheap competition against those that are in jobs. The fact that the unemployable are in essence slaves of the state is not widely disputed.

(Of course, the distributivists did not themselves intend this argument as a defence of older forms of slavery; they sought a compromise between feudalism and capitalism)

The true argument for slavery is this: that those who are not able to support themselves are necessarily slaves, and abolition ultimately amounts to an exercise in creative linguistics.

A liberal will object, correctly, that ability to support oneself is a can of worms. The ‘inability’ of the propertyless is an artificial condition. None of us are able to support ourselves if every hand is against us, and very few would manage in the hypothetical, and impossible, state where neighbours neither helped nor hindered us. The ability of a particular person to support himself is a social fact as much as a physical one.

Even so, given any social arrangement, there are those who can, in and with that society, support themselves, and those who cannot. The distributivists aimed, admirably, for a society of smallholders in which all could live free, but even if their plans were implemented there would still be some failures.

The natural arrangement for such failures has been demonstrated for us by the Irish travellers of Leighton Buzzard. If a person cannot live independently, someone must take charge of him, and if they can profit by doing so, then a solution has been found.

It is alleged that the workers in the charge of the travellers were not looked after at all well. That may be so, though a significant proportion of those “rescued” appear willing to go back. But when this natural arrangement is illegal, and therefore carried out only among that section of the population which cannot be policed without the UN getting involved, it is not reasonable to expect it to be done very impressively.

The conditions of slavery are a matter of compromise: legitimately a matter of public policy. The bulk importation and inhumane handling of captured tribesmen from a remote continent quite understandably gave slavery a bad name. I am not here to argue for any and all forms of slavery. However, drawing the line of what is unacceptable to include all forms of coercion is clearly an error when so many cannot actually live adequately without being coerced somehow. There have been many varieties of slavery, and I will use the term serfdom to emphasise a distinction from the form of slavery most familiar to us from history and fiction, but not to pretend that I am not talking about a form of slavery.

Back to those conditions: ideally, all those capable of freedom would be free, and the incapable should be given the best chance of becoming both capable and free. But there needs to be some compromise here. The welfare state is geared to the capable but unfortunate, is grossly unsuitable for the most incapable, while at the same time dragging far too many of the marginally capable down into dependency. There seems ample room to improve on it with a system of humane serfdom under which a serf is subject to a lord who is responsible for his support and humane treatment. Such an arrangement would probably require a long-term commitment on both sides, in order to work adequately. The lord has insufficient motivation to improve the serf’s knowledge and behaviour if he can wander out onto the job market as soon as he has learned enough skill and discipline to do so. I think it is essential that such a step would require some compensation to the lord, or a minimum period, or both. At the same time, every capable person who is not free is a cost of the system, so there should be some calibration to minimise that cost. It is worth bearing in mind that assisting those who would most benefit from exiting serfdom – by raising the necessary compensation – would be an obvious and worthy aim of charity.

All this really only leaves one question to answer; one which has probably occurred to the reader, which is, “are you actually serious you mad loony???!??”

My answer is, “kind of”. The argument above is not presented to convince: I am not convinced by it myself. Rather, as I intimated initially, I am exploring the limits of the reactionary position.

If slavery is unthinkably evil, then the political wisdom of most historical civilisations is basically disqualified by it. If it is defensible, even in some limited way, then that wisdom becomes relevant again, not as infallible authority, but as something to be taken into account. Do I want to reintroduce medieval serfdom? It’s not high on my to-do list. But I refuse to accept that political thought begins in the 1780s.

Public Order

Distractions have prevented me from writing recently, which is a shame. This tweet of Old Holborn’s is worth a book, as I believe it, bizarre as it sounds, to be true, but it is over a month old, and I haven’t got round to it.

On the other hand, my silence has at least prevented me from embarrassing myself over the riots, since they look very different with hindsight than they did at the time. The one public comment I made was this, which is not too bad.

The riots lasted two nights in London, with a third in Birmingham and Manchester. They were in no way out of the ordinary; just something that happens every few years in the warm bit of summer.

The police response was initially hesitant and inadequate, but, within 48 hours, that was corrected. My theory was that the police originally thought that these were good rioters, like the anti-cuts riots in March. Good rioters have to be allowed to riot: it is just part of their duty as citizens.

However, as Wikipedia tells us, the 2011 London anti-cuts protest is Not to be confused with 2011 England riots. Those are bad riots, and the police must keep order in the streets, whatever it takes. “Kettling” of good rioters is an infringement of their civil liberties, but when bad rioters are running around, the police must find excuses for not having water cannon and baton rounds to hand.

I don’t think they can be blamed for their confusion. I’m not sure if they weren’t aware of the distinction between good and bad rioters, or if, like Jody McIntyre, they mistakenly thought that these were good rioters. In any case, once the police understood the distinction, the trouble was cleared up pretty quickly.