Kamala is (Whale) Cancer

The first real lesson of the election is that whales don’t get cancer because the cancers get cancer before they get big enough to kill the whale1.

Put another way, the ideals of the modern left are very bad for society, because they obstruct effective organisation by encouraging disruptive behaviour, allow corruption by removing personal responsibility, and assign people to functions based on identity rather than ability.

However, a political movement also is a society, and those ideals are not only destructive to society, they are destructive to the political movement that advances them.

Trump could easily have been beaten by a good candidate. A below-average career Democrat like Biden was able to beat him. (OK, maybe that was fraud; I don’t know. The fact that there was a somewhat even national swing towards Trump compared with 2020 suggests to me that it possibly was not.)

Trump talked about running for president for decades, but he didn’t do it until 2016 because he couldn’t win. He beat Clinton in 2016 and Harris now, because they were bad candidates who got the nomination through a combination of corruption and diversity ideology.

If you can’t say that a bad candidate is bad, if she is a woman or a minority or both, then you will necessarily get bad candidates. If you are corrupt, then you will get bad candidates through corruption rather than good candidates on their merits. If you do not hold people responsible, your staff will spend campaign money on meeting their favourite pop stars rather than on getting votes.

The Democratic Party has poisoned itself with the same poisons that it poisons the country with.

The catch, of course, is that the Republican Party is not much better. It wasn’t only the Democratic opposition that was so unusually weak in 2016 that Trump could beat it. Perhaps the Republican Party is less captured by its own bureaucracy, or its third-rate candidates would not have been so vulnerable to a maverick outsider appealing to the primary electorate against the party machine. Sanders had his own popularity, but he was more effectively nobbled than Trump was, although at least as much effort was devoted to nobbling Trump. On top of its corruption and diversity ideology, the Democratic Party’s bureaucracy and authoritarianism undermined its ability to select an electable candidate.

I think this is also a big part of the mechanism of one of the big questions of our age, “Why did politics go insane?”

As mass media became more appealing — newspapers to radio to television to social media — what Americans call the “ground game” of politics became less important. The parties of the 19th and early-to-mid 20th centuries were really serious organisations, with millions of members, regular meetings, publications, social events, and fully organised and directed for campaigning. People who were heavily into politics lived and breathed this organisation. Today you can be important in politics by making memes on social media, and have no idea of what goes into creating and maintaining an organisation the size of a 1950s political party. Thousands of people can evolve ideologies on Twitter or Tumblr and never notice that those ideologies are a complete barrier to getting anything done in the outside world.

However, the organisations are still important to the process of selecting candidates, even with America’s primary system. And what just happened was that an organisation made dysfunctional by anti-organisation ideology picked a terrible candidate.

Elite Misinformation

I kind of like Matthew Yglesias. He comes out with some wild things occasionally, but mostly he’s careful and reasonable, even though I don’t share his values.

Now I understand him a bit better, including some of the wild stuff. His main problem is that he is spectacularly naive.

His recent piece, “Elite misinformation is an underrated problem” is, in itself, a good piece. He notes that “misinformation” research is embarrassingly one-sided, and draws attention to a couple of claims that have been widely circulated in mainstream elite media, which are somewhere between misleading and outright lies.

Good stuff. But then he says, “There’s lots of this going around”.

No! There’s not “lots”. This is absolutely fucking everything you read. All of it. From all sides. All the time. He’s still describing them as if they’re the exception. Everything is exaggerated, nobody is honest. Except him. And me. Sometimes.

It’s the universality of exaggeration and misleading information that makes it impossible to hold anyone responsible.

If what you say is 80% false, because everything you read is misinformation, or if what you say is 85% false, because everything you read is misinformation plus you exaggerated a bit yourself, what’s the difference? Can anyone really blame you?

If someone hears something deliberately misleading, and repeats it in such a way that it is factually false because they believe the thing that was deliberately implied but carefully not said outright, is that their fault? This is the real damage of the situation that we’re in. It’s not that “we” are being consistently lied to by “them” — it’s that everyone including “them” believes a ton of stuff that isn’t true.

I write on the morning after the first 2024 presidential debate. Everyone I read in my ideological bubble, including a few outsiders like Yglesias, is saying that Biden did disastrously badly. I didn’t watch it and am not going to. But many people are saying “they must have known he was like this.” But most of them probably didn’t. They know their opponents lie and exaggerate (they do!). Their friends were telling them it was OK.

I’m inclined to suspect it was always like this, but there are clues that it might not have been. In Britain, before my time, it was spoken of as a rule that a Minister would resign if it was shown he had “misled the house” even once. Something like that, applied not only to politicians but to media too, is the only way to be different, since it is impossible to hold anyone accountable for telling untruths while swimming in an ocean of untruth. And there isn’t a way to get there from here. (Actually my guess is that the rules were always applied selectively, but as I say it was before my time).

The ocean of untruth is what makes it impossible to change, too. You can appear wise and balanced, like Yglesias, by picking one or two things that your side is promoting and pointing out the weaknesses. But if you go through every single thing said, and rule out a third as simply false, and identify the misleading implications and exaggerations of the other two, you are massively harming your side, and your opponents will just pile in gleefully while repeating all their own lies and half-truths.

(Possibly Yglesias knows this, and that is why he is pretending to be naive. My interpretation is that he’s serious, though).

On the Culture War

In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”2 whipping up the culture war for ad clicks, and we need to somehow prevent this.

However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.

It isn’t necessary to be neutral towards culture-war issues to be against the culture war. The key, if you are roused by some event linked to the culture war, is to think, “what can I practically do about this?”

Let’s say, for instance, that I wish that The Rings of Power had been written to be as nearly true to Tolkien’s vision as The Fellowship of the Ring, and not an excuse for a load of left-wing propaganda.

What can I practically do about it?

Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and again I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2 because there isn’t really anything I watch on it any more.

I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.

What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.

An anonymous3 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.

The reason I say it might be counterproductive is that by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, In the case of the drag shows, the only credible motivation behind it that I can imagine is desire to upset the people who are upset by it.4 Yes, there are some perverts who want to actually do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.

Should we just let all this stuff happen then? Mostly, yes. The exception is, again, “what can I practically do about this”. If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: I consider local stories from far away as none of my business and refuse to consider them5. There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.

This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.

From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.

Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.

Better Without AI

How to avert an AI apocalypse… and create a future we would like

Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.

It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.

The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there was only one of them, it would be a big deal, but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.

He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).

All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.

He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you.”

Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.

And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.

The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

Anomaly UK: Defining the Facebook Era

this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

Anomaly UK: Epiphenomena

(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)

Putting these two things together, what you get is:

The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)

AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).

Better Without AI: At war with the machines

This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions6, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.

As I tweeted in 2020, If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time, and as I blogged here back in 2006, Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.7

AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. Like criminals, the money they steal for themselves represents a tiny fraction of the damage they do.

Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.


Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else

We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.

My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can evade whatever technical measure you try to define, and carry on doing the same thing. To be clear, that is probably still an improvement, but it’s a half measure.

In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat. However, my personal view is that it isn’t.

The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:

Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.

Better Without AI: Stop Obstructing Science

I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting but probably not an improvement in the short term.

Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.

The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).

Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.

you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
    not to your representations of it
your beneficent activity arises
    as spontaneous appreciative responsiveness

Better Without: This is About You

It would be nice to end on that note, but I have to shoehorn my own conclusion in:

I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.

This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.

This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well-known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.

Update: more on the culture war

Media bifurcation

Quick summary of some tweets in response to this article about how hard news is moving behind paywalls:

https://www.axios.com/media-startups-subscriptions-elite-401b9309-404e-482b-9e23-718f9daea3a6.html

The tone of the article is that journalism is moving behind paywalls, so the poor underprivileged folks will be denied all this valuable journalism and will suffer as a result.

If the mass population were to be denied access to journalism, that would be about the best thing that could possibly happen, but of course it is not conceivable. They will continue to get what they want to consume; the stuff that is moving behind paywalls is the niche stuff that the profitable mass media no longer sees a reason to subsidise.

Nevertheless, that is significant and could have large effects in the long run. I wrote about some of the issues a decade ago, when I reviewed “Flat Earth News”.

Mass-market news is primarily entertainment. Most people watch news to engage their minds and have something to talk about, not because they actually benefit from the information. (see also: Politics as Entertainment).

There is a long tradition, though never dominant and much reduced in recent decades, of including true information in news media. This was a product of paternalism, idealism, and the fact that actual news was kicking around anyway and was easy to throw in.

There has always been a minority of news consumers who actually need true information from the news for practical reasons. They used to be served by the same media industry as the mass market. (Not necessarily the same publications, but the same organisations and meta-organisations of media).

When the same industry produced facts for the minority and entertainment for the majority, that made it cheap to include facts in entertainment. If it bifurcates, the infotainment side will no longer have access to or focus on true information.

It is not clear that “premium news” of the type described in the Axios piece is the factual news I am discussing, as opposed to just being a market segment of infotainment. It might be, but “business intelligence” services are more obvious candidates.

The “factual news consumers” I am thinking of are primarily business and government. If you want to know what is really going on in the world today, in order to make business decisions, do you read a daily newspaper or watch TV news? I don’t think so — you read specialised industry analyses.

Epiphenomena

Enlightening post from Jason Pargin

The story is interesting in its own right. YouTube observes responses from users, both to videos being listed on their screens and to actually watching the videos, runs some Machine Learning models8 over that feedback information, and selects what to list to them next to keep them watching and engaging. (This is widely understood.)

(In a tiny, tiny fraction of high-profile cases, it then applies human moderation to advance the company’s interests, its political and social biases, and so on. That’s not what I’m writing about today)

As is known, this feedback loop can lead people in some highly unexpected directions. Recreational lock-picking, really? There are also some less mysterious tendencies — any activity is more watchable if it’s being done by attractive young women. But the particular instance Pargin finds — of an innocuous third-world fishing video getting ten times the views if it mildly hints at a tiny bit of indecency that isn’t even really there — would have been very difficult to predict. Note that it’s not as simple as “ten times as many people want to see the videos with the not-quite-upskirt thumbnail”. Because of the feedback, more people get the suggestion to watch that video, and many of them might have equally watched the other ones too, but didn’t get the opportunity. The behaviour of a smaller number of unambitious creeps is driving the behaviour of a (probably) larger number of ordinary viewers.
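That amplification can be caricatured in a few lines of code. This is a toy model, nothing like YouTube’s actual system, and every number in it is hypothetical: a recommender that hands out impressions in proportion to each video’s accumulated clicks turns a modest click-rate edge into a lopsided share of views.

```python
import random

random.seed(0)

# Hypothetical click-through rates: the two videos are identical except
# that B's thumbnail appeals to a small extra segment of viewers.
CTR = {"A": 0.05, "B": 0.07}

# Start each video with a small prior so early noise doesn't dominate.
clicks = {"A": 10.0, "B": 10.0}
views = {"A": 0, "B": 0}

for _ in range(200_000):
    # The recommender surfaces videos in proportion to accumulated clicks,
    # so past success buys future impressions.
    total = clicks["A"] + clicks["B"]
    video = "A" if random.random() < clicks["A"] / total else "B"
    views[video] += 1
    if random.random() < CTR[video]:
        clicks[video] += 1

print("views:", views)
print("view ratio B/A: %.1f" % (views["B"] / views["A"]))
```

The point of the toy model is that the final disparity in views is much larger than the disparity in underlying preference: the minority who click the suggestive thumbnail tilt the allocation, and everyone else is then shown that video more often.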

Pargin makes the wider point that this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

I want to make a wider point still. We can understand, roughly, how this particular mode of media comes to produce some kinds of content and not others. That does not mean that without this particular mode of media, you get “normal, natural” kinds of content. You just get different incentives on producers, and consequently different content.

It’s not just media, either. Different structures of organisation and information flow produce different incentives for participants, and consequently different behaviour. Financing a business by selling equity into a highly liquid public market produces certain specific behaviours in management. Running a sport where teams can prevent players moving between them produces certain behaviours in the players. Organisations may be designed to incentivise certain desired behaviours, but many others will arise spontaneously because the system as a whole unexpectedly rewards them.

This is what Moldbug means when he says “The ideology of the oligarchy is an epiphenomenon of its organic structure.” We do not have woke ideology because a deep centuries-long woke conspiracy has taken over. We do not have it because someone sat down and worked out that a particular structural relationship between civil service, universities, and television would tend to promote ideological shifts of particular kinds. We have it because a structural relationship was created between civil service, universities, and newspapers and it turns out that that structural relationship just happens to result in this kind of insanity. You can trace through all the details — the career path of academics, the social environment of civil servants. You can spot historical parallels — this bit Chris Arnade found on pre-revolutionary French intellectuals. Moldbug attributes this epiphenomenon primarily to the separation of power from responsibility. I’m sure he’s right, but it’s a bit like Jason Pargin saying “yes, the internet really is that horny”. The particular ways in which irresponsibility or horniness express themselves in systems are still somewhat unexpected.

Related:

Defining the Facebook Era

This is just an addendum to the previous post — a few tweets from three years ago

My tweet reads,

Early 20th century politics was organised around printing presses. To be a party, you needed printing equipment. Today’s establishment is the group of people who got control of television. There’s no other worthwhile definition.

An earlier Tweet from Carl Miller said

Whatever the ‘mainstream’ is, it’ll never again have a monopoly on an ability to raising large amounts of money quickly, reaching millions of people, coordinating logistics on the ground. The money, experience and machinery of the political mainstream matters a lot less now.

Half my timeline is now trying to fight to keep that true. I think they’re going to lose.

The End of an Era

Tweetable link: https://t.co/t5qlk2FaZG?amp=1

The Internet began somewhere around 1970

The World Wide Web began somewhere around 1990

Mass participation in the internet was reached a little before 2000

With that, anyone could communicate with anyone else, or with any group, easily and free of charge.

That did not mean that anyone could whip up ordinary people with ordinary interests into political hysteria like Black Lives Matter or QAnon. Ordinary people with ordinary interests would not pay attention to that stuff.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

The Trump presidency was a glorious carnival, but a carnival is all that it was. When the Saturnalia ends the slaves go back to work. I said when he was elected that it was a setback for neoreaction, and it probably was.

I got a lot wrong though. I did not expect the anti-Trump hysteria to endure. Facebook-era politics was too new, then, for me to have understood how it works.

The Facebook era of politics ends today. As with the Trump presidency, I will miss the fun and excitement. I miss eating a packet of biscuits a day too. But man was not meant to eat that much sugar, and democracy was not meant to exist with uncontrolled access to mass media. From the invention of journalism until the twenty-first century, ability to reach the public with your propaganda was power, and power had its say on who could do it. A decade of unconstrained mass media gave us Trump and Brexit and the Gilet Jaunes9, and it also gave us Open Borders, Trans Rights, Russiagate10, BLM, PornHub, and QAnon. It was destroying our society, and it was going to be stopped sooner or later.

We only really had one thing to say to the normies – that democracy was an illusion, and they were not in charge. I don’t think we need Twitter to tell them that any more. The events of the last week have exposed the relationship between government and media much more obviously than weird technical blog posts.

I spent the night bitching about the hypocrisy and dishonesty of the censors. I suppose I had to get it out of my system.

The pogrom will go a bit wider at first, but in the end I don’t think it will do more than roll back to 2005 or so. I do not expect to be censored, because I do not speak to voters. It was the frictionlessness of the Facebook news feed that pulled normies into these games — if you have to go out of your way to find me, then I am doing the regime no harm, and I expect to be ignored, at least if I get through the next few months.

This, of course, is also the system in China. And I admire the Chinese system. When I tried to imagine neoreactionary victory, I struggled a bit with how a monarchical regime could exist in a world of uncensored internet. I don’t have to worry now.

Some practical resilience steps are sensible. Back up everything. Try not to depend on the Silicon Valley giants (GMail is nice, but you’re not the customer, you’re the product). It’s possible that something like RSS could make a comeback if it’s awkward enough to use that the normies aren’t included, but don’t chase after the holy grail of a censorship-resistant mass media, because that’s a coup-complete problem. Keep your head down, keep the channels open. I had this blog working as a Tor hidden service once; I’ll revisit that, but I don’t expect to need it.
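For what it’s worth, putting a blog behind a hidden service takes only a couple of lines of Tor configuration, assuming the blog is already served on a local port. The directory path below is illustrative, not anything I actually use:

```
# /etc/tor/torrc — minimal hidden service sketch.
# Assumes the blog already listens locally on port 80.
# Tor creates the directory and writes the generated
# .onion address into <HiddenServiceDir>/hostname.
HiddenServiceDir /var/lib/tor/blog_hidden_service/
HiddenServicePort 80 127.0.0.1:80
```

After restarting Tor, the onion address appears in the `hostname` file inside that directory, and no inbound ports need to be opened.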

The failure of paedophile campaigners

Back in 2014 I wrote a short piece on the somewhat forgotten fact that when sexual liberation was being pushed in a big way in the 60s and 70s, sex with children was part of the movement, and was supported by mainstream liberal voices — the National Council for Civil Liberties, and so forth.

The question for historians to ask about the 1970s is not, “how could respectable people have supported paedophilia back then?”, rather, it is “how did they not succeed?” My original answer was that as the rebels became the establishment, they were forced to take some small measure of responsibility for keeping society together, and withdrew from a few of their most dangerous demands. That’s no more than a hypothesis really, since I have no particular evidence for it. The truth could possibly be even more interesting.

The question has recently come up again, with this NY Times article, tweeted by Sam Bowman, who thinks, “It’s really fucked up how mainstream paedophilia was during the 1960s and 1970s”

PARIS — The French writer Gabriel Matzneff never hid the fact that he engaged in sex with girls and boys in their early teens or even younger. He wrote countless books detailing his insatiable pursuits and appeared on television boasting about them. “Under 16 Years Old,” was the title of an early book that left no ambiguity.

Still, he never spent a day in jail for his actions or suffered any repercussion. Instead, he won acclaim again and again. Much of France’s literary and journalism elite celebrated him and his work for decades. Now 83, Mr. Matzneff was awarded a major literary prize in 2013 and, just two months ago, one of France’s most prestigious publishing houses published his latest work.

As I said in 2014, the question is not how the cultural revolutionaries who overthrew much of what society had previously thought right or moral could possibly have supported this, it’s how they failed, when they succeeded in so much else. Not only did they fail, but paedophilia inspires a level of opposition and revulsion today that to me always feels a little bit deranged. I’m perfectly happy to say that it’s harmful to young people to have sexual relations with adults and should be illegal. I’m also OK with saying that at least sex with younger children — say 13-year-olds and younger — is not just harmful but perverse (though I’m not clear why that counts for anything in 2020). But I struggle with the aura of evil — and that’s most often the word that’s used — when pretty much nothing else you can think of is today considered evil.

That attitude clearly wasn’t around in the 70s. I think it really dates from the late 80s onwards.

In discussion, though, I came up with a much more boring answer. I think the explanation is that a series of very heavily reported child murders created a strong association in the popular consciousness between paedophiles and murderers, and that’s what caused attitudes to harden so dramatically.

This theory is disproved if there was repeated heavy coverage of child sex murders before the 1970s. The biggest story, in the UK, is the Moors Murders, for which Ian Brady and Myra Hindley were arrested in 1965. If that was the beginning, and I vaguely remember it being a repeating theme through the 70s and 80s, it works as an explanation. (It doesn’t matter whether there actually were murders before Brady, only whether they got the same kind of media treatment.)

It can also be looked at internationally. The USA seems to have followed a similar pattern, of it being naughty stuff done by wild rock stars in the 60s and early 70s, and being the definition of evil from the 90s on. I don’t know the specific cases, but they have the “missing children on milk cartons” thing going, at least from the 80s.

Maybe France hasn’t had that kind of crime, or not the same kind of media treatment, and that explains the softer attitude there.

It also gives clues to the future. Over the years I’ve often seen suggestions that “they” are going to be making paedophilia mainstream next, and I’ve tended to pooh-pooh them on the grounds that “they tried that before and failed”. But if there aren’t murdered kids in the papers, maybe they have a chance. In the UK, the last big media circus was Soham, almost 10 years ago now. Maddie McCann, who disappeared in 2007, is probably still higher in the public consciousness, because nobody knows what happened to her. A few more years might be enough.

Sunk Moral Costs

I don’t understand Syria, and I’m not going to, and I’m OK with that. Trump’s pullout may be bad for America for all I know.

The concrete harmful impact of Russia having a lot of influence in Syria (as it did in the 1980s) isn’t spelled out, instead we just get innuendo.

I tweeted that Kurds will always be allies in destabilising, and always be enemies of peace, because of their situation as a stateless cross-border group. That’s simplistic, but if it’s not true someone needs to explain how. Peace in any of the countries in which they have large populations has to include either (a) they give up their claim to statehood, or (b) they achieve their own state, and I have never heard anyone suggest that (b) is a realistic possibility. There is a chance in any one country that you could get an autonomy-based settlement short of statehood which is beneficial for them, but while the other countries in which they have large populations are unstable, that can’t be a peaceful settlement, because they will still be fighting in the others. As I tweeted, none of this is their fault — it seems they were completely screwed in the 20th Century but this is the position today.

If there’s any coherent view coming from the US establishment, it’s anti-Iran. They may have a good reason for that, but I don’t know what it is. The reason probably has a lot to do with either Israel or Saudi or both, but I don’t expect to ever find an answer I can be sure is true.

Syria has been a bloodbath since the beginning of the Arab Spring attempt to depose Assad. Anyone suddenly upset about the humanitarian impact this week can be dismissed out of hand.

“Kurds were our allies”. How is that, exactly? I asked on Twitter, sarcastically, for links to the announcements of, and debates over, this policy. It was made ad-hoc by the military and civil service. The president never talked to the electorate about it. Quite possibly the president (Obama) never even knew about it. Which is perfectly OK. But there is sleight of hand here. The line we are getting is: “We allied with the Kurds and relied on them, now we need to stand up for them”. The two “we” in there are two different groups. The opaque Washington foreign-policy establishment allied with the Kurds, without input from or notification of the general public. Now the voters are being asked by the media to stand by some implied commitment they played no part in making.

1) So much context has been lost and recent history revised in the coverage of this growing crisis between Turkey and Syria. US always assured Ankara that their support for the YPG was ‘temporary, tactical and transactional’ – a US diplomat quoted here in my new book on Erdogan

@hannahluci https://twitter.com/hannahluci/status/1184012129562775552

From around 14th October, the Kurds have made some kind of arrangement with the Syrian Government, and the narrative has switched from “it’s terrible to abandon the Kurds” to “Now the Russians are winning”. This is utterly disgraceful. It entirely proves that the complaints about the fate of the Kurds in the previous days were insincere. Had the concern really been for the Kurds, then Monday would have been a day of rejoicing at their safety. Instead, the opposition to the withdrawal policy stays the same but the reasons change.

It is because of this sort of thing that I automatically disregard all foreign policy arguments that are made on humanitarian grounds. I don’t even consider the possibility that they might be well-founded. The concept of intervening internationally to protect civilians is 100% discredited in my eyes.

Around 500,000 human beings were killed in Syria while Barack Obama was president and leading for a “political settlement” to that civil war.

Media has been more outraged in the last 72 hours over our Syria policy than they were at any point during 7 years of slaughter.

Ask why

@BuckSexton https://twitter.com/BuckSexton/status/1183812563261382656

Kinda telling that the intensity of Online Outrage expressed by Smart People today over the Kingsman-meme isn’t any perceptibly different than the Online Outrage they were emoting yesterday or the day before over, like, The Kurds being slaughtered
it’s all a video game

@soncharm https://twitter.com/soncharm/status/1183750875321438208

Trump, though I find him amusing, I consider no more trustworthy than the rest of them. I am not able to judge whether his policies are good or bad, but he is the only person who makes arguments for his Syria policy which make sense. The arguments against are always obviously dishonest (like the ABC gun show footage), insincere, or rest on vague unstated assumptions (such as that nothing that Russia wants can be allowed).

The FSA leader who John McCain took a picture with is now part of the invasion of Northern Syria, which the hawks are insisting we must oppose.

@j_arthur_bloom https://twitter.com/j_arthur_bloom/status/1183364011708080128

There’s another related point, more subtle but much more general. Modern thought does not admit of a distinction between crimes of commission and crimes of omission. To a naive rationalist, causing harm and allowing harm to happen are equivalent. But like so many arguments you hear today, the equivalence rests on an entirely unrealistic level of certainty towards the assumptions that are being made about the results of action or inaction. The potential for very large unexpected harmful effects is very much greater in military action than it is in inaction, and the expected benefits of action have to be large enough to outweigh that category of risk. That is equally true whether the harms and benefits in question are political, financial or humanitarian.

Tweet links:

  • https://twitter.com/anomalyuk/status/1183128988803371009
  • https://twitter.com/anomalyuk/status/1183135846226108416
  • https://twitter.com/anomalyuk/status/1183450270585540609
  • https://twitter.com/anomalyuk/status/1184063105669709824