Kamala is (Whale) Cancer

The first real lesson of the election is that whales don’t get cancer because the cancers get cancer before they get big enough to kill the whale1.

Put another way, the ideals of the modern left are very bad for society, because they obstruct effective organisation by encouraging disruptive behaviour, allow corruption by removing personal responsibility, and assign people to functions based on identity rather than ability.

However, a political movement also is a society, and those ideals are not only destructive to society, they are destructive to the political movement that advances them.

Trump could easily have been beaten by a good candidate. A below-average career Democrat like Biden was able to beat him (OK, maybe that was fraud. I don’t know. The fact that there was a somewhat even national swing towards Trump compared with 2020 suggests possibly not, to me).

Trump talked about running for president for decades, but he didn’t do it until 2016 because he couldn’t win. He beat Clinton in 2016 and Harris now, because they were bad candidates who got the nomination through a combination of corruption and diversity ideology.

If you can’t say that a bad candidate is bad, if she is a woman or a minority or both, then you will necessarily get bad candidates. If you are corrupt, then you will get bad candidates through corruption rather than good candidates on their merits. If you do not hold people responsible, your staff will spend campaign money on meeting their favourite pop stars rather than on getting votes.

The Democratic Party has poisoned itself with the same poisons that it poisons the country with.

The catch, of course, is that the Republican Party is not much better. It wasn’t only the Democratic opposition that was so unusually weak in 2016 that Trump could beat it. Perhaps the Republican party is less captured by its own bureaucracy, leaving its third-rate candidates more vulnerable to maverick outsiders appealing to the primary electorate against the party machine? Sanders had his own popularity, but he was more effectively nobbled than Trump was, although at least as much effort was devoted to nobbling Trump. On top of their corruption and diversity ideology, the Democratic Party’s bureaucracy and authoritarianism undermined its ability to select an electable candidate.

I think this is also a big part of the mechanism of one of the big questions of our age, “Why did politics go insane?”

As mass media became more appealing — newspapers to radio to television to social media — what Americans call the “ground game” of politics became less important. The parties of the 19th and early-to-mid 20th centuries were really serious organisations, with millions of members, regular meetings, publications, social events, and fully organised and directed for campaigning. People who were heavily into politics lived and breathed this organisation. Today you can be important in politics by making memes on social media, and have no idea of what goes into creating and maintaining an organisation the size of a 1950s political party. Thousands of people can evolve ideologies on Twitter or Tumblr and never notice that those ideologies are a complete barrier to getting anything done in the outside world.

However, the organisations are still important to the process of selecting candidates, even with America’s primary system. And what just happened was that an organisation made dysfunctional by anti-organisation ideology picked a terrible candidate.

Democracy’s Strength

Not paying attention, I missed my only regular commenter when he finally came round to my position on democracy2:

But it is highly wasteful. Enormous amounts of money are being extracted from the public to secure the election of someone who does not rule the country, this function being really exercised by his “advisors”.

I would warn my new acolyte, though, against overstating the case. As opponents of democracy, we have to recognise and explain the fact that the most successful societies in human history have had this democratic form of government (even while the actual elected politicians were senile or powerless), or else we have no right to be taken seriously.

Yes, democracy is certainly expensive. But to conclude that it is wasteful you have to show that the same end can be achieved more peacefully.

The useful purpose of democracy is to persuade the population that their rulers are legitimate. There are other ways of doing this, but they aren’t obviously more efficient. I quite like the era of divine right, myself, but that required the state to run a religion that would proclaim the legitimacy of the monarch. Brute force has a long track record, but the policing bill gets pretty expensive. Running government as a sideline of the entertainment industry is probably significantly cheaper than running it as a sideline of either religion or the military.

I do think that this crude comparison is in essence the basis of democracy’s strength and its dominance of the twentieth century. At the same time, it is very obviously missing the point. The cost of political parties, campaigns and voting machinery makes up an insignificant fraction of the real cost of democracy, just as the cost of employing soldiers and secret police is a small fraction of the cost of military dictatorship, and the cost of supporting a priesthood is a small fraction of the cost of theocracy. In each case, the true price paid for maintaining the legitimacy of a regime comes from the incentives it puts on people to behave in destructive ways, inside and outside of the governing institutions.

The impact of democracy is the gross dilution of power and responsibility that comes from giving the population a role in resolving disputes among the rulers. The way for a faction to succeed is to take control of and expand the organs of propaganda — the media and educational institutions — until we get to the situation we are in today, where the society’s ideological commitments are to those ideas which succeed in power struggles within those institutions, no matter how destructive they are to both the population at large and the actual government.

We are not talking about simple clean categorisation. In practice every government employs a combination of democratic rhetoric, armed force, and appeal to higher moral authority, to improve its perceived legitimacy. The other obvious price paid for legitimacy — the subsidy of supporters — is huge in democracies but at least comparably large in any of the alternatives.

The most effective road to legitimacy is for the regime to be just accepted as inevitable, or as obviously superior to any available alternative. We do see that in some places, generally after catastrophic civil war or economic collapse, but it tends not to last for more than a generation or two before the next round of rebels or radicals or foreign agents manages to undermine it.

I don’t, then, have any silver bullet to fix government. I fear that the current world-ruling regime is past saving and will collapse, but I am not impatient for that to happen or optimistic about what will follow it3. The principle I stand for is that government just is and it is better to accept and support it, even in its imperfections, than to oppose it and force it to spend even more on self-defence. I would apply that even to the present establishment, but the tragedy of democracy is that by supporting it in fact I am opposing it in theory: my unconditional support is a denial of its very premise of legitimacy — that it is and ought to be subject to the whims of the populace.

If someone is to play Chief at this time of day, it should be the right Thain and no upstart.

Yes2ID

There’s a specifically English tradition that the government doesn’t concern itself with the identities of the ordinary men and women of the country. Prior to the twentieth century, births and deaths were registered by the church, and taxes were collected on land or on the trading of particular goods. There was never a national bureaucracy keeping records of individuals.

(There’s a famous quote about how prior to 1914 Britons would hardly have any routine contact with any officials of the government. Orwell? Keynes? I can’t find it and I’m quite annoyed).

A census was introduced in 1801 to guide recruitment strategy for the Napoleonic wars, and National Registration was brought in in 1939 for the Second World War, and abolished later. Measures such as National Registration smacked of Napoleonic totalitarianism. The government exists to serve the people, not the people for the government. My life is no business of the government until I bring myself to its notice, by committing a crime, or travelling abroad, or handling large amounts of money, etc.

I was firmly aligned with that tradition, supporting No2ID, opposing Voter ID, even grumbling incoherently about CCTV cameras.

I still really like the idea of such a light-touch, minimalist state that has no reason to know how many people live in a town or what that bloke’s name is who is sitting on the bench outside Costa. Warm feelings of free Anglo-Saxons and the Witenagemot, and all that (although of course in pre-modern societies nobody had anonymity, so that’s a kind of fantasy).

But we don’t live in such a state, or anything remotely resembling one. Today we live in a state which relies on at least a quarter of the money earned by each member of the working population for its survival, which provides an array of services from traffic direction to heart surgery to everyone, and also in which a dozen private companies already know how many people live in each street and what the bloke on the bench outside Costa watched on TV last week.

As I mentioned at the weekend, the state also has a register of births, a passport database, a register of electors, a driver licensing database, a National Insurance database.

We are not talking in 2024 about whether or not identity details are a concern of government, we are only talking about whether the government should manage its identity database efficiently or inefficiently.

People who are of any positive value to society are massively visible to the state. Citizens of the nation of car drivers, taxpayers, glow in .gov.uk cyberspace like planes approaching an airport. The only people moving in darkness are illegal immigrants, gypsies, and the underclass, flashing on just once a fortnight to collect their cheques.

Totalitarian is a strong word, but it is obviously the case that to the extent that a government of an advanced country leaves any area of its citizens’ lives alone today, that is a policy choice, and not a result of any limit of capability or of tradition. For better or worse, limitation on government today comes from government, and there’s no sense pretending otherwise.

I’ve written a few times before that feudalism cannot exist today because it was caused by the technological incapability of central government to supervise regions. It seems equally true that the individualism of classical liberalism cannot exist in a world of £20 CCTV cameras and 4TB SSDs. It depends not on limited government but on hogtied government.

Of course surveillance does not directly impact our freedom of action. It doesn’t necessarily mean we will become much more tightly limited in our actions. But of course, in practice we already are. We can’t say what we like, we can’t burn what we like, we can’t buy or sell what we like — not those of us with regular jobs and fixed addresses and cars, anyway. Why weep over the hostile underclass facing the same supervision?

Is growing totalitarianism the only future? Yes, probably; as I say, it’s a matter of technology. I would prefer otherwise, but if you’re going to act politically as if the world were other than it is, you might just as well be an anarcho-communist.

Ineffective government is bad government. Effective government is often bad government too, but at least there’s a chance. My view is that the intense stupidity of politics is to a large extent an effect of the practical impotence of politicians. Make those with responsibility less impotent, and at least there’s an incentive for them to become less stupid. (The aligning of power with responsibility is the other requirement, the central NRx principle, but doing that is a separate question. Today it’s the case that nobody has power).

I feel bad writing this. I am betraying what I once stood for. Give me a programme for achieving personal freedom that starts with keeping government databases more incomplete and inaccurate than Amazon’s, and I’ll recant.

The Senate and People of Ukia

After the 2019 British General Election produced a large conservative majority for Prime Minister Boris Johnson, I wrote a “projection” / fantasy of how Britain could progress to a one-party state.

A one-party state on the Chinese model isn’t my ideal form of government. I would prefer an absolute hereditary monarchy such as the one I described in 2012. (Next year we will pass the half-way point of the 25 years between when I wrote it and when I set it, so I will review it then.) But I never put forward a mechanism for getting to the absolute monarchy, only vaguely having in mind some serious political collapse and recovery. One-party states do exist today and some of them are governed much better than multi-party democracies. They are equally oligarchic, but the oligarchies are more rational, more effective, and marginally less embroiled in infighting.

The central point of neoreactionary theory is that the root problem of our society is its structure of government. The most obvious problem is the people in charge, and if you look a bit deeper you see bad and harmful ideologies, but the theory is that the ideologies are the expected product of internal competition within an oligarchy, and that the people are the product of the structure and the ideologies.

If that is accepted, then the critical step is to change the system. Changing the system will in time change the ideologies and the people. So movement away from a system of oligarchic competition is a benefit, whether the one party is Labour or Conservative. It doesn’t matter whether a cat is black or white, if it catches mice it is a good cat.

Admittedly, when I imagined Borisland, it was very much as a monarchical form with a Supreme Leader. I have heard suggestions that Xi is effectively sovereign over the PRC, but I don’t know and if I were to guess I would think it unlikely. Is Starmer a man who can dissolve ministerial responsibility? Or maybe there is a more ambitious successor waiting in the wings? Either could work. Every Prime Minister who is not universally pilloried as baffled and ineffectual (and some who are) is accused of introducing presidential government; it does not appear to be an impossibility.

Again, I would prefer not to be dragging even the pretence of democratic legitimacy behind the monarch, but, after all, the Roman Empire managed it.

What does the incoming Starmer administration have going for it? Quite a bit:

  1. Weak parliamentary opposition
  2. A prominent internal opposition
  3. A large majority to enable it to combat the internal opposition
  4. A leader who intimately understands the permanent government
  5. A leader young enough to last a couple of decades
  6. The support of the permanent government and the press (at least to start with)

The weak conservative opposition means that the government will not initially be too pressed to compete with it for popularity. My expectation is that the government’s biggest fights for the first year will be against the left of the Labour party, and particularly the Islamic / pro-Palestine elements, plus the independent MPs that were elected specifically on that platform. Starmer’s pragmatic programme, coupled with his Jewish family, means he will never be able to satisfy that wing, and he would be unwise to try. Losing the Labour party’s traditional support from that population will be initially affordable given the huge parliamentary majority, and in the medium term will gain him much more support from the wider population.

In the modern democratic and media environment, the best way to advance a programme is to have unpopular people oppose it, and the worst way is to have unpopular people support it. If Reform are wise, they will keep a low profile for the next few years, take the money and quietly build an organisation. The government is much more likely to take action on immigration because George Galloway is against it than because Nigel Farage is in favour of it.

The knowledge of the permanent government is very important. In my lifetime, only two Prime Ministers have shown any real evidence of being in charge. Margaret Thatcher and Tony Blair were both lawyers. They both had allies in the civil service (which was much more conservative 40 years ago than it is now). Keir Starmer and Harriet Harman are coming into government with an agenda that we can assume is very much in line with that of the permanent government. But they now have their own role and their own personal goals, and if, over time, they find they need to act against the wishes of that permanent government — they know where the bodies are buried. They know how the system functions, where its strong points and weak points are.

Again, the neoreactionary theory is that if they want to exercise power they will inevitably come into conflict with the permanent government. They want results that look good in the press. The most obvious reason that the Conservatives were useless is that they were just incompetent. The next most obvious reason is that they were traitors to conservatism. The deeper reason is that actually achieving any conservative goals was impossible, so many of them adopted more liberal positions because only by doing so could they avoid being ridiculous failures.

(For people my age, the most vivid examples are Michael Portillo and John Redwood; the two Conservatives seen as the ideological heirs of Thatcher, and the thorns in the right side of the moderate John Major, both of whom moved steadily left decade by decade, finishing well to the left of Blair.)

Achieving conservative goals was impossible for the Conservatives because the permanent government was united against them, and could obstruct them with legal and administrative bullshit to the point that anything they did achieve would cost them politically far more than it was worth (the two years of failure of the Rwanda scheme is of course the prime example, but the pattern was everywhere). If I am right about the advantages that Starmer’s past experience gives him, he might not find things so impossible.

I do expect these conflicts to happen. Starmer will not want to deport illegal immigrants in order to get Sun front pages that will impress Essex Man — but he may find he wants to deport illegal immigrants in order to get the crime rate down and the welfare bill down, and to prevent his own children being blown up in their synagogue. He will want it to just happen, quietly. Can he do that? That’s the question.

If in five years’ time the economy is a bit better (and there is a ton of scope to achieve that by removing obstacles), the immigration situation is no worse, and the Conservatives are still in disarray (the huge error I made five years ago was in thinking that Labour would today still be largely engaged in fighting off Corbynist holdouts, so that’s a big open question), then he could carry as big a majority into the next decade. Technology today is very favourable to absolutism. A leader who is seen as legitimate will have many mechanisms available to him to cement his position.

I’m not going to try to imagine details. Armies under the absolute control of an Emperor carried the standard of the Senate and People of Rome; a Britain that has become “UK” (the latest constitutional proposals apparently include a Senate), perhaps without even being officially a kingdom any longer, could likewise be directed by a single hand.

The horror of foreign policy

I’ve not said much about the whole Gaza / Israel thing since October. I have a pretty strong dislike of Islamic terrorists, and no equivalent antipathy to Jews, although I do worry from time to time about their understandable but inconvenient tendency to oppose any kind of nationalism (except their own). So my inclination is towards the Israeli side. However, I try to stifle this on the grounds that I don’t know all the facts, though I’m swimming in propaganda, and it isn’t really any of my business.

While discussing yesterday’s General Election, it became clear that the main way that terrible, bloody conflict affects me is through its impact on British politics. Specifically, if British Muslims become estranged from the Labour Party over it, that will significantly change national politics, and will completely overturn local politics where I live.

Now, I don’t generally concern myself with practical politics, for a number of reasons explained at length on this site. I paid attention to the election for entertainment value rather than because I needed to know anything about it. But that’s just me, it’s an unusual view to take. For many people deeply concerned with politics, these questions of party alignment are among the most important things in their lives. Most people with influence over policy fall into that category.

For those people, the most important question about any actual or potential thing that could happen in the Middle East is: would that help me or my enemies in my local political struggle?

Think about that for a while. Peace talks, escalations, terrorist attacks, blockades — how do they affect my department, my constituency association, my party, parliament? Are they good for me, or bad for me?

I have written before that intervention in foreign conflicts tends to be harmful in humanitarian terms, even when specifically predicated on humanitarian aims.

I have seen it alleged (and don’t know whether to believe), both that Hamas intended a vast catastrophe to be inflicted on Palestinians, and that Israeli Prime Minister Netanyahu intended atrocities to occur against Israelis, in both cases because their political positions depend on the conflict continuing and escalating. If true, these are instances of the same thing, but less clear cut because the participants are much more connected to the direct harms of the conflict than remote foreigners. If someone in Ramallah or Tel Aviv is willing to stir things up in order to strengthen his position, then it is surely much easier for someone in Birmingham or Hendon to come to a similar conclusion.

So expecting the foreign policy directed by people in that position to be humanitarian in effect is very optimistic.

Elite Misinformation

I kind of like Matthew Yglesias. He comes out with some wild things occasionally, but mostly he’s careful and reasonable, even though I don’t share his values.

Now I understand him a bit better, including some of the wild stuff. His main problem is that he is spectacularly naive.

His recent piece, “Elite misinformation is an underrated problem” is, in itself, a good piece. He notes that “misinformation” research is embarrassingly one-sided, and draws attention to a couple of claims that have been widely circulated in mainstream elite media, which are somewhere between misleading and outright lies.

Good stuff. But then he says, “There’s lots of this going around”.

No! There’s not “lots”. This is absolutely fucking everything you read. All of it. From all sides. All the time. He’s still describing them as if they’re the exception. Everything is exaggerated, nobody is honest. Except him. And me. Sometimes.

It’s the universality of exaggeration and misleading information that makes it impossible to hold anyone responsible.

If what you say is 80% false, because everything you read is misinformation, or if what you say is 85% false, because everything you read is misinformation plus you exaggerated a bit yourself, what’s the difference? Can anyone really blame you?

If someone hears something deliberately misleading, and repeats it in such a way that it is factually false because they believe the thing that was deliberately implied but carefully not said outright, is that their fault? This is the real damage of the situation that we’re in. It’s not that “we” are being consistently lied to by “them” — it’s that everyone including “them” believes a ton of stuff that isn’t true.

I write on the morning after the first 2024 presidential debate. Everyone I read in my ideological bubble, including a few outsiders like Yglesias, is saying that Biden did disastrously badly. I didn’t watch it and am not going to. Many people are saying “they must have known he was like this.” Most of them probably didn’t. They know their opponents lie and exaggerate (they do!). Their friends were telling them it was OK.

I’m inclined to suspect it was always like this, but there are clues that it might not have been. In Britain, before my time, it was spoken of as a rule that a Minister would resign if it was shown he had “misled the house” even once. Something like that, applied not only to politicians but to the media too, is the only way to be different, since it’s impossible to hold anyone accountable for telling untruths while swimming in an ocean of untruth. And there isn’t a way to get there from here. (Actually my guess is that the rules were always applied selectively, but as I say it was before my time.)

The ocean of untruth is what makes it impossible to change, too. You can appear wise and balanced, like Yglesias, by picking one or two things that your side is promoting and pointing out the weaknesses. But if you go through every single thing said, and rule out a third as simply false, and identify the misleading implications and exaggerations of the other two, you are massively harming your side, and your opponents will just pile in gleefully while repeating all their own lies and half-truths.

(Possibly Yglesias knows this, and that is why he is pretending to be naive. My interpretation is that he’s serious, though).

AI Doom Post

I’ve been meaning for a while to write in more detail why I’m not afraid of superintelligent AI.

The problem is, I don’t know. I kind of suspect I should be, but I’m not.

Of course, I’m on record as arguing that there is no such thing as superintelligence. I think I have some pretty good arguments for why that could be true, but I wouldn’t put it more strongly than that. I would need a lot more confidence for that to be a reason not to worry.

I think I need to disaggregate my foom-scepticism into two distinct but related propositions, both of which I consider likely to be true.

Strong Foom-Scepticism — the most intelligent humans are close to the maximum intelligence that can exist.

This is the “could really be true” one.

But there is also Weak Foom-Scepticism — intelligence at or above the observed human extreme is not useful; it becomes self-sabotaging and chaotic.

That is also something I claim in my prior writing. But I have considerably more confidence in it being true. I have trouble imagining a superintelligence that pursues some specific goal with determination. I find it more likely it will keep changing its mind, or play pointless games, or commit suicide.

I’ve explained why before: it’s not a mystery why the most intelligent humans tend to follow this sort of pattern. It’s because they can climb through meta levels of their own motivations. I don’t see any way that any sufficiently high intelligence can be prevented from doing this.

The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function

Joscha Bach (@Plinz), 18 Apr 2018

@Alrenous quoted this and said “… Humans can’t hack their reward function”

I replied “It’s pretty much all we do.” I stand by that: I think all of education, religion, “self-improvement”, and so on are best described as hacking our reward functions. I can hack my nutritional reward function by eating processed food, hack my reproductive reward function by using birth control, my social reward function by watching soap operas. Manipulating the outside universe is doing things the hard way, why would someone superintelligent bother with that shit?

(I think Iain M Banks’ “Subliming” civilisations are a recognition of that)

The recent spectacular LLM progress is very surprising, but it is very much in line with the way I imagined AI. I don’t often claim to have made interesting predictions, but I’m pretty proud of this from over a decade ago:

the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.
But the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

Speculations regarding limitations of Artificial Intelligence

I don’t think we’ve hit any limits yet. The current tech probably does what it does about as well as it possibly can, but there’s a lot of stuff it doesn’t do that it easily could do, and, I assume, soon will do.

It doesn’t seem to follow structured patterns of thought. When it comes up with an intriguingly wrong answer to a question, it is, as I wrote back then, behaving very like a human. But we have some tricks. It’s a simple thing, which GPT-4 could do today, to follow every answer with the answer to a new question: “what is the best argument that your previous answer is wrong?” Disciplined human thinkers do this as a matter of course.

Reevaluating the first answer in the light of the second is a little more difficult, but I would assume it is doable. This kind of disciplined reasoning is something that should be quite possible to integrate with the imaginative pattern-matching/pattern-formation of an LLM, and, on today’s tech, I could imagine getting it to a pretty solid human level.
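The loop I have in mind (answer, demand the best counter-argument, reconsider) is trivial to sketch as a wrapper around any model call. To be clear, `ask` here is a hypothetical stand-in for whatever LLM interface you happen to have, not any real API:

```python
# A minimal sketch of the "answer, critique, reconsider" discipline.
# `ask` is a hypothetical stand-in for an LLM call: it takes a prompt
# string and returns a completion string.

def self_critique(ask, question):
    """Answer a question, elicit the best counter-argument, then
    reconsider the original answer in light of that critique."""
    answer = ask(question)
    critique = ask(
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "What is the best argument that this answer is wrong?"
    )
    revised = ask(
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        f"Critique: {critique}\n"
        "Taking the critique into account, give your best final answer."
    )
    return revised
```

Nothing in the wrapper is clever; the point is only that the discipline is mechanical, so there is no obvious obstacle to bolting it onto current models.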

But that is quite different from a self-amplifying superintelligence. As I wrote back then, humans don’t generally stop thinking about serious problems because they don’t have time to think any more. They stop because they don’t think thinking more will help. Therefore being able to think faster – the most obvious way in which an AI might be considered a superintelligence – is hitting diminishing returns.

Similarly, we don’t stop adding more people to a committee because we don’t have enough people. We stop adding because we don’t think adding more will help. Therefore mass-producing AI also hits diminishing returns.

None of this means that AI isn’t dangerous. I do believe AI is dangerous, in many ways, starting with the mechanism that David Chapman identified in Better Without AI. Every new technology is dangerous. In particular, every new technology is a threat to the existing political order, as I wrote in 2011:

growth driven by technological change is potentially destabilising. The key is that it unpredictably makes different groups in society more and less powerful, so that any coalition is in danger of rival groups rapidly gaining enough power to overwhelm it.

Degenerate Formalism

Maybe an AI will get us all to kill each other for advertising clicks. Maybe an evil madman will use AI to become super-powerful and wipe us all out. Maybe we will all fall in love with our AI waifus and cease to reproduce the species. Maybe the US government will fear the power of Chinese AI so much that it starts a global nuclear war. All these are real dangers that I don’t have any trouble believing in. But they are all the normal kind of new-technology dangers. There are plenty of similar dangers that don’t involve AI.

On the Culture War

In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”4 whipping up the culture war for ad clicks, and we need to somehow prevent this.

However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.

It isn’t necessary to be neutral towards culture war issues to be against the culture war. The key, if you are roused by some event linked to the culture war, is to think, “what can I practically do about this?”

Let’s say, for instance, that I wish that The Rings of Power had been written to be as nearly true to Tolkien’s vision as The Fellowship of the Ring, and not an excuse for a load of left-wing propaganda.

What can I practically do about it?

Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and again I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2 because there isn’t really anything I watch on it any more.

I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.

What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.

An anonymous5 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.

The reason I say it might be counterproductive is that by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, In the case of the drag shows, the only credible motivation behind it that I can imagine is desire to upset the people who are upset by it.6 Yes, there are some perverts who want to actually do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.

Should we just let all this stuff happen then? Mostly, yes. The exception is, again, “what can I practically do about this?” If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: I consider local stories from far away as none of my business and refuse to consider them7. There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.

This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.

From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.

Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.

Better Without AI

How to avert an AI apocalypse… and create a future we would like

Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.

It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.

The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there was only one of them, it would be a big deal, but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.

He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).

All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.

He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you.”

Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.

And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.

The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

Anomaly UK: Defining the Facebook Era

this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

Anomaly UK: Epiphenomena

(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)

Putting these two things together, what you get is:

The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)

AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).

Better Without AI: At war with the machines

This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions8, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.

As I tweeted in 2020, If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time, and as I blogged here back in 2006, Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.9

AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. Like criminals, the money they steal for themselves represents a tiny fraction of the damage they do.

Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.


Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else

We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.

My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can evade whatever technical measure you try to define, but carry on doing the same thing. To be clear, that is probably still an improvement, but it’s a half measure.

In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.

The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:

Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.

Better Without AI: Stop Obstructing Science

I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting but probably not an improvement in the short term.

Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.

The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).

Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.

you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
    not to your representations of it
your beneficent activity arises
    as spontaneous appreciative responsiveness

Better Without: This is About You

It would be nice to end on that note, but I have to shoehorn my own conclusion in:

I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.

This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.

This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well-known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.

Update: more on the culture war

Climatic Climax

The last time I blogged about climate was in early 2018. Back then, I said that the climate scare was “primarily a media phenomenon”.

I was seriously wrong. I had underestimated the decline of conspiracy, the degree to which it is impossible in the modern age to sustain insincerity10.

I also ignored everything I knew about the Cathedral. The media is part of the ruling structure; if the media believes something, then by definition the ruling structure believes it.

My mental model, at the time, was that the media promoted the climate scare because it was good TV. The politicians went along with it because it was good politics. But at the end of the day, real action on the climate would be superficial, fake, or indefinitely postponed to the future, because the sensible people behind the scenes would never actually cripple our entire civilisation over something so silly.

What an idiot.

In reality the climate scare was and is primarily a political phenomenon — one of the non-partisan runaway manias I discussed recently, under the title Loyalists without a cause. As I tweeted, “Since the end of the cold war, the most damaging movements have been non-partisan: environmentalism, social justice, global democracy.”

In the modern system, where nobody is responsible for results, and everyone is responsible for tomorrow’s papers, it is just very much easier to support something that makes you seem selfless or kind than to oppose it. If it is actually a live partisan issue, then you can and should take your side, in order to appeal to your party, but only a few things can be live partisan issues at once. Those are the important issues, and if you weaken your position by taking an unattractive position on an unimportant non-partisan issue, you risk concrete losses on the important partisan issues. (You also risk your own personal advancement.)

I did touch on this, back in 2010 — the left-wing commentator Johann Hari claimed that 91% of Conservative MPs “don’t believe man-made global warming exists.” And yet, I emphasised, they ran on a manifesto commitment to reduce greenhouse gas emissions.

In late 2018, I pointed out that “It is a feature of any large movement that pretending to believe something is effectively the same as believing it.” If Tory MPs in 2010 did not believe that man-made global warming existed, that made no difference. They effectively did believe it. There were no sensible people behind the scenes, keeping the power stations open.

There’s also a generational effect. The 2010 parliamentary conservative party might have been pretending, but newcomers coming in weren’t in on the joke.

There’s also no absolute limit on how far things can go, as Sri Lanka is in the process of demonstrating. There is no fuel on the island, no money to buy any because the export industries have been crippled, and the mob yesterday stormed the presidential palace. Because of environmentalism.

At the same time, it isn’t actually inevitable. To take one of my favourite themes, the unthinkable can become thinkable very fast. This could happen tomorrow.

The German Green party just voted for more coal power11

The European Commission and Parliament have agreed that Natural Gas is Green and sustainable

The easy way to save civilisation, without looking an idiot on climate change, is just to not talk about it. It all got going because the media would happily report the conflict between “nice” pro-environment politicians and “nasty” anti-environment politicians, and nobody wanted to appear nasty. If the left-wing media see that banging on about climate change is bad for their politicians, they will keep their mouths shut. The population will forget all about it in a matter of weeks. If it stays a non-partisan issue, then politicians will as always take whatever side of the story gives them better press.

Over a longer timescale, when the fanatics counterattack, then an actual counter-narrative will gradually be built. The dangers were over-hyped. Adaptation is feasible. Warm weather is actually good. Those of us who have been saying all of this for decades will be completely ignored, but our talking points, suitably laundered, will be everywhere. As I said before, decades from now the question will be recorded in history as a media fad that got out of hand.

A bunch of scientists will have funding dry up. But this was never really about science. The whole climate scare is fundamentally political, not scientific. Because of that, if the politics change everything else will just topple. In the early years of this blog, I wrote very frequently about the science, or lack thereof, of global warming. There is a small amount of very bad science making the case for a catastrophe. There is a truly vast amount of science explicitly taking that as a given, and wrapped in verbiage that seems to support it, but not itself adding any evidence. There are a lot of papers whose conclusions are phrased to give support to the dominant political narrative, but whose concrete findings are wholly compatible with “negligible effect”. Change the political incentives, and all these papers can be repeated, with identical results and “nothing need be done” abstracts. Again, history will not describe this as a scientific story.

The active propagandists of global warming always knew that this could happen. You can see that very clearly in the climategate emails that leaked in 2009 — they were desperate to keep control of the media narrative, even though to casual observers it looked like their opponents were very few and weak.

I’m not actually particularly confident that it is going to break like that now. Sri Lanka shows that it is not inevitable. But it could happen.