On the Culture War

In my review of David Chapman’s Better Without AI, I fundamentally agreed with his assessment that recommender engine AIs are “deliberately”1 whipping up the culture war for ad clicks, and we need to somehow prevent this.

However, unlike Chapman, I can make no claim to be neutral in the culture war. I am clearly on one side of the divide.

It isn’t necessary to be neutral on culture war issues in order to be against the culture war. The key, if you are roused by some event linked to the culture war, is to ask, “what can I practically do about this?”

Let’s say, for instance, that I wish that The Rings of Power had been written to be as true to Tolkien’s vision as The Fellowship of the Ring was, and not an excuse for a load of left-wing propaganda.

What can I practically do about it?

Pretty obviously, nothing. I can refuse to watch the one they made, except that it never occurred to me to watch it in the first place. I can cancel my Amazon subscription, but that won’t mean much to the people who made the show, and in any case I will probably be doing that anyway once I’ve finished Clarkson’s Farm 2, because there isn’t really anything I watch on it any more.

I could write to Amazon explaining that I don’t find their programming relevant to me any more. That actually makes sense, but the cost/benefit ratio is very high.

What most people do is bitch about it publicly on social media. That is surely completely ineffective, and might even be counterproductive.

An anonymous2 voice shouting in public is not persuasive to anyone. In private, with people I know (whether they are likely to agree or not), I will share my feelings. That might help, and is a natural form of communication anyway. It also doesn’t generate ad clicks for the AI.

The reason I say it might be counterproductive is that, judging by the behaviour of the leftist agitators, stirring up opposition is obviously their aim. As I said a while ago, in the case of the drag shows, the only credible motivation behind it that I can imagine is a desire to upset the people who are upset by it.3 Yes, there are some perverts who actually want to do this stuff, but the public support for it comes from people who want to make it a hot public issue. Getting involved is playing into their hands.

Should we just let all this stuff happen then? Mostly, yes. The exception, again, is “what can I practically do about this?” If this is happening in some school in North London I have no connection with, the answer is nothing. Still more if it is happening in another country. I wrote in 2016: I consider local stories from far away as none of my business and refuse to consider them4. There is no reason anyone who is involved should care about my opinion. On the other hand, if it is happening somewhere I do have a connection, I should act — not to express my feelings, but to have an actual effect. This is similar to the recommendation that Chapman has for companies — not that they should launch into the culture war, but that they should drive it out. “Take this out of my office”.

This isn’t a clear route to victory. Nothing is that easy. But playing the culture war game, screaming in public, is no route to victory either. Take only measures that actually help, and those will generally be private and local.

From my biased viewpoint, the culture war is very asymmetrical. One side is trying to overturn cultural standards, and the other is resisting and retaliating. In that sense, I think even Chapman’s call to drop it is actually a right-wing position. I think without a loud public clamour, most of the recent excesses would be quietly rolled back in private by people who care about reality. Unfortunately, the loud public clamour is not going away any time soon, but playing the recommender AI’s game by joining it, even on the “right” side, is helping to maintain it rather than stop it.

Once you rule out participating in the culture war, the next step is to stop consuming it. A family member watches Tucker Carlson regularly. The presentation of his show disgusts me slightly. Not because I disagree with Carlson; I think he is right about practically everything. But then what? All he is doing is getting people outraged about things they can’t do anything about. What is the effect on them of watching this show? They make noise on social media, which is harmful, and they vote for right-wing politicians, which is the thing that has been proved by experiment to not do any good.

Better Without AI

How to avert an AI apocalypse… and create a future we would like

Just read this from David Chapman. Really excellent, like all his stuff. What follows is going to be a mixture of boasts about how this is what I’ve been saying all along, and quibbles. Read the whole thing. In fact, read the whole thing before reading this.

It draws probably more on Chapman’s current work on meaning than on his previous life as an AI researcher, which is a good thing.

The book starts by discussing the conventional AI Safety agenda. I interpret this as mostly a bait and switch: he is putting this issue up first in order to contrast the generally discussed risks with what he (correctly) sees as the more important areas. That said, he isn’t, or at least isn’t clearly, as dismissive of it as I am. The thing about unquantifiable existential risks that can’t be ruled out is that if there were only one of them, it would be a big deal, but since there are somewhere between hundreds and infinitely many, there’s no way to say that one of them is more worthy of attention than all the others.

He makes the correct point that intelligence is not the danger: power is. As I said in 2018, If we’re talking teleology, the increasing variable that we’re measuring isn’t intelligence or complexity, it’s impact on the universe. This also leads to being dismissive of “alignment” as a concept. A significant proportion of humans are more than adequately motivated to cause catastrophe, given enough power — while completely inhuman goals or motivations are conceivable in AI, they don’t obviously increase the risks beyond those of powerful AI with very comprehensible and mundane human-like goals and motivations. This is one of the most critical points: you don’t need to desire catastrophe to cause a catastrophe. Villains always see themselves as the heroes (though, realistically, more fictional villains should probably see themselves as normal sensible people doing normal sensible things).

All the blather about “real intelligence”, “consciousness” and so on is incoherent and irrelevant to any practical question. Chapman covers this in his other writing better than anyone else I’ve ever read.

He then plays down, or at least draws attention away from, the possibility of “superintelligence”. My own pet theory, expressed here before, is that superintelligence is not a thing. As Chapman puts it: “Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the pointless puzzles IQ tests throw at you.”

Next comes the real meat of the book. The scariest AI scenarios do not involve superintelligence or rogue AIs fighting against humanity, but practical AIs doing fairly reasonable things, much more thoroughly and effectively than before, and those things having very harmful downstream effects.

And while there are no doubt dozens of possible scenarios that meet that description, there is one that is already happening and already doing massive damage, with no clear limit to how much more damage could happen.

The scenario that is actually happening is the collision of two things I have brought up here before, but not explicitly put together as Chapman does.

Facebook hit a billion users a bit after 2010. It is Facebook, Twitter, and YouTube that meant that anyone, if they pitched it just right, could reach a mass audience. And that sent politics insane.

Anomaly UK: Defining the Facebook Era

this same system of user feedback and ML-generated recommendation is shaping the content across all digital media. Whatever you have to do to get the views, those are the rules, even though nobody chose those rules, even though nobody knows what all the rules are, if you are in the business you just have to do your best to learn them.

Anomaly UK: Epiphenomena

(“ML” in my second quote is “Machine Learning”, i.e. today’s AI)

Putting these two things together, what you get is:

The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)

AI has discovered that inciting tribal hatred is among the best ways to sell ads. In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages. Under partial brain control from AIs, we humans create emotion-inducing culture-war messages. The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).

Better Without AI: At war with the machines
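
To make that loop concrete, here is a toy simulation of my own (nothing like it appears in the book, and every number in it is made up): a recommender promotes whichever posts it predicts will engage most, outrage engages more reliably than anything else, and producers imitate whatever got promoted last round.

```python
import random

# Toy model of the feedback loop quoted above (my own illustration,
# not Chapman's): engagement rises with outrage, the recommender
# promotes the most engaging posts, and producers copy the winners.

random.seed(0)

def engagement(outrage):
    """Clicks are noisy, but increase with a post's outrage level (0..1)."""
    return outrage + random.gauss(0, 0.1)

producers = [random.random() * 0.2 for _ in range(100)]  # start mostly mild

for generation in range(10):
    posts = [(engagement(o), o) for o in producers]
    promoted = sorted(posts, reverse=True)[:20]  # recommender picks top 20
    # Producers imitate the promoted style, with a little variation.
    producers = [
        min(1.0, max(0.0, random.choice(promoted)[1] + random.gauss(0, 0.05)))
        for _ in range(100)
    ]
    print(f"generation {generation}: average outrage = {sum(producers) / 100:.2f}")
```

Run it and the average outrage level climbs steadily towards the maximum, even though neither the recommender nor the producers ever chose escalation as a goal. Selection by the ranking function does it on its own, which is exactly the sense in which “nobody chose those rules”.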

This is not an AI gone rogue and seeking to destroy mankind. This is a business function that has existed for what, 150, 200 years: sensationalist media stirring up drama for advertising revenue. But that existing business has been made orders of magnitude more effective by new communications technology and AI. I suspect it would have become very dangerous even without the AI — my “Defining the Facebook Era” did not take AI into account, and the “Epiphenomena” post was six months later — but quite likely I had underestimated the role that AI was already playing two years ago, and in any case it doesn’t matter: as dangerous as social media without AI might be, social media with AI “recommender engines” is, as Chapman argues, vastly more dangerous still. It is quite reasonable to claim that the AI picked the current and previous US presidents, undermined and destroyed the effectiveness of long-established and prestigious institutions5, and has the potential to be far more effective and harmful in the immediate future, without any further “breakthroughs” in AI science.

As I tweeted in 2020, If you think a million people dying of a disease is the worst thing that could ever happen, you should read a history book. Any history book would do … in worst-case competitions, politics beat plagues every time. And as I blogged here back in 2006, Humankind has always faced environmental threats and problems, and has a good and improving record of coping with them. We have no such comforting record in dealing with overreaching government and tyranny.6

AI may have many avenues to inflict damage, but the force multiplier effect of politics means that all other ways of inflicting damage are also-rans. Specifically, the primitive, clunky, unreliable AIs we have today are leveraging media, advertising and democracy to suck in human attention. As with criminals, the money they steal for themselves represents a tiny fraction of the damage they do.

Chapman devotes a lot of attention to just how primitive, clunky and unreliable neural-net based AI is, which is all true, but I wouldn’t dwell on it so much myself, since in this case its limitations are not increasing the damage it does at all, and probably are decreasing it. The biggest worry is not the effects of its errors, but how much more damaging it will be if a way is found to reduce its errors. The situation today is very bad, but there is little reason not to expect it to get worse. The “2026 apocalypse” scenario is not overstated in my view – there is no upper limit to mass insanity.


Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else

We next come to what to do about it: “How to avert an AI apocalypse”. The first thing, reasonably, is to fight against the advertising recommender engines. Block them, don’t follow them, try to ban them.

My only issue there is that, as I said before, AI is only part of the problem. I mean, since the media companies now know that inciting tribal hatred is among the best ways to sell ads, they don’t need AI any more. They can work around whatever technical measure you try to define and carry on doing the same thing. To be clear, that is probably still an improvement, but it’s a half measure.

In fact, the AI that has taken control of politics is exploiting two things: the advertising industry, and democracy. It is not doing anything that has not been done before; rather, it is doing bad things that have long been tolerated, and amplifying them to such a degree that they become (or at least should become) intolerable. The intersection of advertising and democracy inevitably tends towards rollerskating transsexual wombats — without AI amplification that is arguably a manageable threat, though my personal view is that it isn’t manageable even then.

The next chapter of the book is about science. We don’t want AI, so instead let’s just have decent science. Unfortunately, in the 21st century we don’t have decent science. I’ve written about this quite a lot recently, and Chapman’s writing is very much in line with mine:

Under current incentives, researchers have to ensure that everything they do “succeeds,” typically by doing work whose outcome is known in advance, and whose meager results can be stretched out across as many insignificant-but-publishable journal articles as possible. By “wasted,” I mean that often even the researchers doing the work know it’s of little value. Often they can name better things they would do instead, if they could work on what they believe is most important.

Better Without AI: Stop Obstructing Science

I have no idea how to fix this. Classic science was mostly carried out by privileged rich buggers and clergymen, plus the occasional outside genius with a sponsor. State funding of science in the last century initially added vast resources and manpower to the same system, with spectacularly successful results. However, over decades the system inevitably changed its form and nature, producing today’s failure. There is no way back to that “transitional form”. We can go back to rich buggers (we no longer have Victorian clergymen), but that means reducing the size of science probably by 99.9% – it’s tempting, but probably not an improvement in the short term.

Anyway, that chapter is very good but of minor relevance. It does also contain more good arguments about why “superintelligence” is not a major issue.

The last chapter is about having a wider positive vision (though perhaps “vision” is the wrong word).

Mostly it echoes Chapman’s (excellent) other writings: Eschew lofty abstractions, accept uncertainty and nebulosity, avoid tribalism, and look for things that are simply better. Discovering what you like is a never-ending path of opening to possibility.

you do not have an “objective function”
you do not have any “terminal goal”
your activity is not the result of “planning” or “deciding”
you do not have any “ethics”
these are all malign rationalist myths
they make you miserable when you take them seriously
you are reflexively accountable to reality
    not to your representations of it
your beneficent activity arises
    as spontaneous appreciative responsiveness

Better Without AI: This is About You

It would be nice to end on that note, but I have to shoehorn my own conclusion in:

I don’t quite recall seeing it stated explicitly, but I think Chapman’s view is that advertising recommendation engines are only the first widespread practical use of AI, and, not coincidentally, the first form of apocalyptic threat from AI. As other practical uses for AI are found, equal or greater threats will result. That is plausible, but, as I’ve said, I think politics is (by far) the greatest point of vulnerability of our civilisation. If we protect ourselves from politics, we are going a long way to protecting ourselves from AI and from other threats.

This is probably my biggest near-disagreement with the book. Yes, AI is an existential risk that we might not survive. But then, Genetic Engineering is an existential risk that we might not survive. Coal is an existential risk that we might not survive. Heck, Literacy is an existential risk that we might not survive. For better or worse, we don’t survive these risks by suppressing them, but by adapting to them. Current AI is indeed unreliable and over-hyped, but I’m more worried by the prospect of it getting better than by the prospect of it keeping the same limitations. There are many imaginable and unimaginable risks that could come from AI in the future, and one solid one that is present today, that Chapman’s second chapter lays out admirably. If we can save ourselves from that one, we are doing well for today. In any case, I suspect that the next risk will, like this one, take the form of amplifying some harm that already exists to the point that it becomes a danger of a different order.

This risk today is the amplification of politics via media, advertising, and democracy. Democracy was well known for causing catastrophes like the rollerskating transsexual wombats before Leibniz’s calculator or the US Declaration of Independence. The level of democracy we have in the West today is not survivable, with or without AI. For that matter, the level of democracy in China and Russia is dangerously high.

Update: more on the culture war