I've Never Had a Real Adversary
And this is a big problem for thinking about adversarial situations
One can get a bit cocky if one picks up one's strategic sensibilities from normal life and single-player video games.
In the vast majority of situations in my life, I might have at most a counterparty (such as the person trying to sell me donuts). Strictly speaking, my counterparty has interests different from mine. She wants to sell me a lot of donuts, I don’t want very many. She wants to charge me a lot for the donuts, I don’t want to pay a lot. So there’s some tension there, but it would be wrong to call her “my adversary”. In the end, we both want me to buy some donuts, we’re just arguing price. And once I leave the store, with or without donuts, the interaction is over.
Single player video games are a particularly instructive example of this, because they SEEM to have adversaries. For example, there might be an evil emperor who is sending all the monsters to come kill me. But this is not a real adversary either. In most games, the evil emperor can’t really do anything against me while I try to solve that one puzzle in the cave—I have infinite time. And the evil emperor DEFINITELY can’t work against me when the game is off and I’m driving to work.
And in fact, it’s a level less adversarial than that: The evil emperor was created by the game developer to pose challenges for me. I want to be challenged, so the evil emperor and I are actually on the same side, just arguing price there. Is the challenge too hard? Too easy?
Playing chess online might seem more like having an adversary. Indeed, in this case, it’s zero sum—I have to lose for my opponent to win. Ok, maybe that brings it a bit closer, but even in this case, the other player and I share the desire to play a game of chess. We also both want to win, but realistically we accept it isn’t always possible, and we compromise by seeking out another player.
Paul Crowley recently mentioned that we underrate the effect of the Russian IRA (Internet Research Agency) which works full-time on creating discord and anger among Americans online (it was in the context of Robin Hanson’s dismayed surprise at how much racist vitriol a post about Africa had attracted; Eliezer Yudkowsky correctly pointed out the vitriol could be entirely manufactured by bots; Crowley added IRA as a culprit).
It is very hard for a normie like me to really internalize what this means. Here is a group that will bend all their effort, all their human creativity, working 40 hours a week plus overtime, trying to find novel ways to mess with their target. This is simply not something my life has prepared me to deal with strategically. It’s very hard to program yourself to see an angry note in your inbox and pause to reflect that this could be the conscious effort of your enemies—it seems paranoid to even write that!
Yudkowsky in particular has used the concept of Mossad to make this point. Paraphrasing: If Mossad is after you, they are going to beat you (unless you have the entire apparatus of a major power to protect you). It’s easy for a normie like me to imagine—to default to believing—I could somehow escape their attack, but that’s only possible because I’ve been raised and educated and learned about existence in a world where I never encountered a real adversary: someone whose job, whose only job, was to defeat me, and who was good at it.
Yudkowsky has not been shy about using this insight to point out how profoundly dangerous a superhuman intelligence would be, and he’s clearly right about that. But blindness about the dangers of ASI is only one of the ways that never really grappling with an adversary can hurt you. Indeed, for a person without an adversary (i.e. most of us), just taking the idea seriously sounds like paranoid delusion (“what if that flat tire was because of…” “what if the IRS called because…” “what if that woman asked for my number because…”). It’s an anti-meme: you’d be punished for even expressing it.
I’m not sure what the solution here is. In normal life, trying to account for the possible action of a true adversary WOULD be paranoid delusion, or at least would not be distinguishable from it. But in reality, there may, sometimes, actually be adversaries who really are out to get you (if only in the general sense of “American Internet user”, but perhaps in a more targeted way on occasion!).
I have no heuristic for trying to distinguish between these cases or what I would do in a true adversarial environment.
At the very least I recognize that my normal habits and behaviors would be totally inappropriate in that case. But raising concerns about possible enemy action would bear emotional and social costs (not to mention the plain cost in time and attention, such as the time spent writing this note): costs that in most situations I am right to refuse to pay.
So whose job is it to bring up the IRA? Other than Paul, I mean.


Really interesting post!
It might be worth looking into the field of cyber threat intelligence a bit, as it deals with exactly this kind of adversarial thinking and strategic planning (including being the ones that mention the IRA!).
While as an individual you cannot realistically defend against a nation state adversary, organizations and governments do routinely defend against these threats.
While ASI as an adversary (or ASI wielded by one) breaks a lot of CTI's assumptions around scale, motivation, etc., I think the field could be a good starting point for seeing how people think about adversarial risk and try to plan for it.
Also, “The Cuckoo’s Egg” by Cliff Stoll is a great read that shows how the author had to change his mental models once he realized he was dealing with a real adversary rather than system glitches.
There are of course some people who have been stalked, or who have had a very disgruntled former employee or lover actually try to sabotage their lives. By all accounts, it's a horrible experience.