The closing keynote of four days of Noisy Square at OHM was delivered by Eleanor Saitta (Dymaxion). A volunteer effort (Lunar, hdez, kali, KheOps, pabs, zilog and others) transcribed and proofread the full text of her call to arms.
Audio recording — 43 minutes — MP3 — 99 MB
Foreword: I’d like to thank Quinn Norton for many conversations over the years that inspired a lot of what I said here today; much of the credit is hers, all of the blame is mine — Eleanor Saitta
For those of you who don’t know me, my name is Eleanor Saitta. I wear a bunch of different hats: I am technical director at the International Modern Media Institute, located in Iceland. I am Principal Security Engineer at the Open Internet Tools Project. I work on Briar, a decentralized delay-tolerant messaging platform, and the Trike threat modeling tools, and do a bunch of other random things in my spare time.
Today, I am here to talk about the kind of stuff that has been floating in the air around surveillance, and the interaction between surveillance, the state, and people. I want to start by saying that this is actually a lot bigger than our current moment and the worries about “oh, this government is doing this to these people, and what are the technical capabilities involved”, and all the stuff that isn’t part of the story that has now turned into a soap opera at a Russian airport. It’s worth understanding, as we go into this, why the different sides think the way they do, and what the mindsets are that are playing into this battle. That’s where we start seeing the long-term structure. What we are actually looking at right now is what you could call a battle between “good” and “evil”. But it’s not a battle between good and evil in the sense of a titanic clash between the evil spies and the good freedom fighters. What this actually is, is a battle over how we understand humanity. Do we understand people as fundamentally good, or do we understand people as fundamentally evil? This is one of those epic philosophical debates that we normally think of as not having much relevance to the real world, to the way our everyday life is affected. But it turns out to actually have a lot of relevance.
So, suppose you believe in democratic free societies and think that they are very, very important and absolutely must be preserved, that our ability to be humane to each other depends on the existence of a democratic society, but you also believe that people are fundamentally evil. What do you do? There is an entirely consistent and coherent mindset that says: “people are fundamentally evil; democratic society is an amazing and beautiful and scarce and rare flower that must be preserved no matter what. However, democratic societies cannot be exposed to the evils of people in the real world, because if they are, they will fail, they will fall apart, and they will die.” If you believe that, you should get a job at the CIA, because that is the mindset that drives mass surveillance. That drives mass surveillance in the kind of western democratic regimes where there is this understanding that democracy is important. That democracy is too important to be left undefended. Democracy requires rough men doing evil deeds in the night to preserve the space in which it can happen. And the people who are practicing this democracy can never find out what was done in the name of that democracy, or they won’t be able to do the things that they are doing. Their innocence has to be retained, lest they be tainted, lest they be corrupted.
If you think that people are fundamentally good, then your life is in some ways much simpler, because you don’t have to worry about people being tainted by the reality of things. You can simply assume: if you build all these structures that let people coordinate and collaborate, it will probably work out OK.
You know, they may go a bit off course sometimes, but we don’t need that stuff happening in the dark, and we’d probably further say that the democratic structures, our ability to have this kind of peaceful, beautiful flower of democracy or what have you, depend on those things not happening. This is the battle that we are actually looking at right now. Now, let’s look a bit more at what good and evil actually are. I don’t think that anyone who is approaching this, certainly not the folks at the CIA, thinks that the North Koreans are an elemental evil. But what they do believe is this: if you believe that most people are like you (and most people do), and you are selfish and want the people who are more like you to do better in the world, then you assume that everyone else is doing the same thing, and all evil means is “oh, you are someone over there who is coming from a very different position than I am; you want different things that are mutually hostile.” Whereas if you believe that people are fundamentally altruistic, that they don’t just want their own group to do better than everyone else, which is in some ways the same thing as saying people are fundamentally good, then you don’t need to fight and undermine those people.
This kind of economic balance, this economic understanding of altruism and differential benefit, is going to underlie a lot of the things that I have to say today, so I’m going to come back to this theme in a bit. But there’s another reason why these people surveil. It’s not like they just said “well, you know, this has to happen in the world, so therefore I’m going to do surveillance on behalf of the state”. We did actually ask them to do this surveillance. There are budget items that say “oh, please run a surveillance agency, here is however much money to do it”. And the reason why we all collectively ended up asking those people to do it, in many cases many years ago, and it’s just kind of continued, is geopolitics. If you are trying to run a nation state as a nation state, geopolitics is the law of the strong. There is this notion that there is international law and that you can somehow say “oh, you invaded me, so I’m going to sue you.” No, it doesn’t work that way, or at least historically it hasn’t worked that way. This isn’t to say that we can’t have structures that aren’t the law of the strong at the geopolitical level, but we don’t have them right now. So nation states require this kind of surveillance in order to function.
The modern state depends on having an intelligence function. I should be clear that saying this is not to excuse the intelligence function; it’s to understand what the set of power relationships here is. I would actually say that the state depends on having an intelligence function because the state depends on being able to deny others access to territory. Now, if you are in a globalised situation where state borders are porous, where people flow back and forth, where ideas flow back and forth, where trade flows back and forth, you can’t simply build a wall. Certainly before World War 1, and even in the lead-up to World War 2, there was this notion at the diplomatic level that “gentlemen do not open each other’s mail”, that you didn’t need to do that, that you shouldn’t need to do this kind of deep, dirty intelligence work. But as the world has become more globalised, as the world has become more porous, the perceived need, from the state’s side, to do surveillance has grown, because the state doesn’t have any other way of understanding and controlling things. As the state’s structure becomes in some ways flimsier, it has to hang on harder to be able to maintain the same centralising governance structure.
One of the things that I hear a lot as people try to deal, sometimes psychologically but also technically, with the outcome of the massive pile of revelations that has been dumped in our lap (well, revelations and confirmations, which don’t seem likely to end any time soon, thankfully) is “we need to make policy, we need to ensure that these things don’t happen any more without oversight”. Which I think is a great idea; it is, sadly, also kind of ridiculous. Policy doesn’t matter around surveillance, for a few reasons. If you look at the historical record of surveillance structures, we’ve never seen a modern state, without going through a revolution or something similar, roll back deployed and operational technical capabilities.
Once something is fielded by an intelligence-gathering team, and assuming it stays funded, if it is in the field, if it is working, if it is actively producing useful intelligence, it stays there. Pretty much no matter what, as far as we can tell. The NSA did, very politely, in 1975, turn off their telegram surveillance program. It had never in its entire history produced anything useful. So that’s our one example of a technological capability being rolled back. So, so much for history.
Also, if you build a capability and try to limit it, that similarly does not work, because there is this sort of notion of pernicious efficiency, right? If you have a functional system that is deployed and that is useful for one thing: “oh, we can use that for that other thing too; oh, we can use that for that third thing too.” The capability kind of naturally expands over time. It is very difficult to stop, because policy does not have much of a hold on things. As soon as policy weakens for a moment, there is “oh, we need it over here too”. You know, there is always something else you could do with that capability.
Policy relies on political enforcement. If you can’t politically enforce your policy, then there is very little likelihood of it being functional in the long term. Now, I don’t know about GCHQ and the Germans and the Dutch, but the NSA taps every politician’s email, and even anybody’s who might become a politician. They tap their phones, they tap their email. Partially they do this for national security reasons: they need to know if those people have sold out to somebody, or any number of justifications. But it also means that they have all the dirt on all the politicians. Now, how easy do you think it is to keep a politician bought if their career can be destroyed at any moment? How strong do you think their long-term political will is going to be when they know that if they stand up too much they just get destroyed?
Clean politicians don’t win elections. Clean politicians aren’t allowed to win elections. But also, there just aren’t clean people in the world. Everybody has something that is embarrassing enough to ruin their career if the NSA decides they want to ruin someone’s career over it.
And really, policy doesn’t matter because policy isn’t the level at which these decisions are made. These decisions are made in purely military-economic terms. The economics of spying is the structure that controls whether or not spying is done. The notion of return on investment is very germane here: how much intelligence product are you going to get for a given investment? That is what determines which intelligence methods are used.
Compliance with legal structures is a cost in this case. If I decide that I don’t want to comply with a legal structure, I may have to budget so much extra for getting the right politician elected, or so much extra for getting the law changed, or for buying the creation of a secret court, or whatever the thing is that I need to swing my way to do this. But if you are looking at it from a purely economic perspective, it’s really just a cost of rolling out some new spying technology. You can assign a dollar value to how much it’s going to cost to push that through. And if you decide that the cost/benefit analysis is worth it, then maybe you change the societal structure, the legal structure, to allow you to do the piece of intelligence work that you think is necessary.
Political fall-out is, similarly, just a minimizable cost structure over time.
So, your primary metric is intelligence product per dollar. But you are also interested in coverage: having really deep intelligence in one area alone isn’t particularly interesting. If you are an intelligence agency, you need deep intelligence in specific areas, but also an understanding of a broad spectrum. You also need flexibility: you don’t know what your intelligence needs are going to be next week. So this leads you to look very aggressively at things like full-take surveillance, because full-take surveillance gives you the maximum amount of flexibility. You can decide at any point where you want to go deep. You can decide five years later that you need to go deep somewhere else.
Cost is an interesting structure here, because of the risk. There was a mindset shift that happened, at least in the US, and really throughout the black state, throughout the intelligence world, during the Cold War, where all of a sudden intelligence failure became an existential risk. It was a thing that was literally intolerable, because if you failed, everyone in the country might die, and at that level there is no cost which is too great. Now, there is the question of what you can actually afford to spend versus the other things you need to do with that money, but there is never a level at which you have overspent on intelligence if it is useful in preventing that existential failure.
9/11 in the US was another instance of this. It was another case where it suddenly became an existential crisis, and completely reoriented the intelligence community in the US. In the early nineties, the director of the NSA and the director of the CIA refused to talk to each other. They literally could only communicate through intermediaries because the inter-organizational hatred was that deep. That vanished in a few months. This is very interesting if you’re looking at what this balance looks like.
But the fact is that it’s still fundamentally about return on investment. Which is really good for us, it turns out, because we can shape the market of their return on investment, or rather, we can shape their cost structure for different kinds of intelligence work. There are a few reasons why they do take some actions which don’t make any economic sense: “oh, my buddy has a company that does that; oh yeah, we totally need that”. Political stability is an interesting one, even if it is entirely internal: “oh, they are activists, we don’t like what they’re doing, the government might change on us”. If you are the NSA, you don’t want the government to change. Not because it’s necessarily going to be an existential event for the country, but because any kind of political instability is just bad for business. So you end up going after a whole range of dissidents who are maybe not so important for deep intelligence reasons but matter in terms of political stability. And then one of the other non-economic structures is that the NSA’s first priority isn’t the survival of America, it’s the survival of the NSA. Which means that if you look at it purely from the external cost-per-intelligence-product, return-on-investment perspective, they do things that don’t make sense, because those things are actually aimed not at the survival of the US but at the survival of the NSA. The same applies to any other intelligence organization; I’m using the NSA as the example because it’s the most topical one. Regardless, ROI still fundamentally rules.
So, the security community, if you evaluate it from this kind of perspective, has been doing a lot of really weird stuff over the last twenty or thirty years, a lot of which doesn’t make any sense. We have a lot of really great maths, very little of which ends up keeping people secure in the field. There is this sort of truism in the community that “everything is broken”: if you have an Android phone, you cannot secure how the code is delivered to the phone, and SSL is broken, and we know there are all sorts of bugs in the operating system, and the baseband is completely owned, and the SIM toolkit, who knows, and the hardware is manufactured on untrusted assembly lines, so there is really nothing we can do: “everything is owned.” Well, this is not actually that relevant in the real world, because what we actually care about isn’t whether something is theoretically secure; it is: did someone get away with the thing they were trying to do? I spent nine years as a commercial consultant, thinking about security mostly from that mindset of “what is the theoretical structure of ‘can I trust this thing?’”. Then I started to spend more time in the field and very quickly realized: no, I don’t really care what the theory is; I care: did my friend get away from the police? Sometimes it’s “were they able to maintain some sort of long-term subterfuge?”, and sometimes it’s “did they have the five minutes they needed to make it to the airport?”. When you look at the world from this kind of outcome-oriented perspective, it, among other things, plays very well with this understanding of intelligence’s return on investment, because that’s the same game. It’s not “could this dragnet theoretically have picked up some piece of intelligence?”; it’s “did it?”, “was it actually useful?”
So, the things that we need to focus on to shift the structure of intelligence gathering, the things that we can possibly achieve, are: stopping the kind of full-take “we’re just going to surveil everything coming across these cables” surveillance, and protecting the social graph. Those are the two things that are most deleterious to free society, and where we have the most leverage in the current situation. SMTP is a really interesting example, as by default SMTP between two servers is not encrypted. So you may have an encrypted connection from your mail client, whether that’s webmail or a thick client running on your laptop, to your mail server, but then that mail server talks to another mail server, somewhere across the internet, where your friend has their mail service, and that connection across the backbone is completely unencrypted. Which is why the NSA is able to just scoop up most of that mail: it’s just sitting there on the wire.
There are two kinds of encryption you can add to email transport. You can require that connections between two mail servers always be encrypted, which is very difficult to roll out because most mail servers don’t support it right now. Or you can encrypt opportunistically: if everyone starts saying “I’ll ask first if the other guy supports encryption, and if he does we will upgrade to that”, it actually buys you a massive amount. If an intelligence agency, or anyone else who can get between those servers, wants to mount an active attack, they can still get the mail; it’s very easy to just fake that the other server said “no”. But that’s noticeable, and they actually have to do something on the wire: that’s not passive. And revealing active intelligence capability in a context like that is very expensive. You can’t do that quietly.
If the NSA decided to man-in-the-middle every mail connection at once, people would notice. It would be a global diplomatic problem. It’s no longer the kind of quiet surveillance place that they need to be coming from. So, even without providing any additional theoretical security, because of course the connection can be downgraded, we’ve completely shifted the outcome; we’ve completely shifted the return on investment for doing that kind of passive snooping.
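To make the opportunistic model concrete, here is a minimal sketch in Python using the standard library’s smtplib. The helper names are mine, and this is an illustration of the “ask first, upgrade if offered” behaviour, not a hardened mail client:

```python
import smtplib

def should_starttls(ehlo_extensions):
    """Pure decision logic: upgrade if and only if the peer advertised STARTTLS.

    An active attacker can strip the advertisement to force plaintext,
    but doing that means touching traffic on the wire, which is the
    noticeable, expensive act described above.
    """
    return "starttls" in {ext.lower() for ext in ehlo_extensions}

def connect_opportunistic(host, port=25):
    """Connect to a mail server, upgrading to TLS whenever it is offered."""
    smtp = smtplib.SMTP(host, port)
    smtp.ehlo()
    if smtp.has_extn("starttls"):
        smtp.starttls()  # passive taps on the backbone now see only ciphertext
        smtp.ehlo()      # re-identify over the encrypted channel
        return smtp, True
    # Plaintext fallback: worth logging, since a sudden downgrade on a
    # route that used to offer TLS is a sign of an active attack.
    return smtp, False
```

Note the design point: the decision is purely “did the other side say yes?”, so passive surveillance is defeated everywhere the upgrade happens, and defeating it anywhere else requires a visible downgrade.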
There are a lot of different places where we can make that same kind of shift. This is interesting when you’re looking at designing security systems: “did you get caught?”, “could you perform the mission?”, and “did you see the failures and correct for them?” are equally important questions. The only one of those that is directly influenced by the maths of security is “did you get caught?”, and even that is informed just as much by operational security practices: “does the user understand how the tool is supposed to work?”, “can they actually use the tool correctly?”, all of these other questions. The rest of them are entirely usability questions. Security is actually, in many ways, mostly a usability problem. Certainly the usability part of it is at least as hard as all of the rest of it. We’ve concentrated on theory to such a degree that it has completely warped our ability to understand what the actual problems we’re trying to solve are.
Part of this is because the security community doesn’t have any real problems. It’s a bunch of white guys who can afford to spend time hanging out in tents in the middle of the very nice Dutch countryside. Which is great, it is awesome, but it means you don’t understand the position of someone who’s in the field, of someone who’s actually depending on this tool working for whether or not they or their cousin who barely knows how to use a computer survives the night. Once you put yourself in their shoes you start looking at the tools we build in a very different way.
I don’t blame people for the standpoint they come from; that just happens. But it’s time to start expanding our viewpoint on the world. So, I want to try to give you some hope about the future, even though everything is broken.
Encrypting all the things isn’t enough. Encrypting all the things will be hard, but it isn’t actually enough. However, there are things we can do, in addition to encrypting all the things, that will actually make a difference. If we start decentralizing all the things, that makes a real difference. One of the reasons why the NSA has been so successful is that “well, if we can’t break your security, or if it’s going to be too inconvenient to tap this on the wire, we just show up with a letter and now you have to do what we say.” There are lots of other places where this can happen too. We don’t know that much about who else is trying to compel companies like that, but I would guarantee that if the NSA is doing it, then lots of other people are doing it as well.
If you have to physically take a hard drive out of a machine to get at my data, and my data is on a server somewhere in a rack who knows where, run by another company, then you don’t have to talk to me to get at that machine, and I may never find out that you took that hard drive. If I own the rack it’s running in, then maybe you have to talk to me, or maybe you can just talk to the people who run the colo, but still I am a little better off: at least somebody is going to come and ask why that machine went down, so I have some visibility. And if that hard drive is in my pocket, you are going to have a relatively difficult time taking it without my knowledge.
So decentralization, when you do not have the rule of law as a protective structure, which we don’t, is an incredibly, incredibly critical tool. This means that we need to stop using an internet that is built out of services: APIs are kind of counter-revolutionary. It’s over; we need to stop relying on central services. We just can’t do it anymore; it’s impossible to build a free internet that is centralized.
But we can’t just say “well, OK, we are going to build a bunch of protocols and then everybody will decide to use them”, because if that hasn’t worked so far, why should it work now? We do have the moral high ground, but that’s not enough, right?
We are doing a lot better this week than we were a few months ago at being able to say “you know, really guys, it is important, it matters, you need to do this”, but it is still not enough. We don’t just need protocols; we need protocols with business models. We need some way to be able to say “yes, this development pays for itself; we have the money to hire a real UX team, and designers, and a marketing team, and all of the other things that you do in the commercial world if you’re actually serious about building software”. Because it turns out that programming is the easy part. Everything else is the hard part. Adoption is harder than development. Design is harder than feature-completeness. That’s the part that we do not have right now, that we have to learn, and kind of learn in a hurry.
And honestly, the user model is the thing that needs to come first. Because, let’s say I release a webchat client that is really easy to use, and people love it, and the security really sucks. You know what? It turns out that we can fix that; we can totally fix that. And it doesn’t matter where it started, because people are actually fucking using it, and that’s the part that actually matters, and they can use it. If we start with a user model, if we start with “OK, these are the security properties that we’re trying to provide, and we know how to deliver them, and we know how to explain them to the user, we know what the users are interested in, and we can make it cool”, then whatever is missing we can fix later. We are trying to get better at that process (there are a few projects that I am trying to run that I can tell you about later if you’re interested), where we can take something that makes sense as a tool, that can ship and will work in the real world, and get it to the point where it needs to be to be deployed in a high-risk environment. But it has to make sense first. You have to start with design.
I know there are a bunch of people here who work in NGO land. I am one of them, and I am going to speak to a few different groups here. NGO land: we have a problem. We are concerned about building careers, we are concerned about our space, we are concerned about the security scene… Fuck all that! What we’re supposed to be doing is trying to solve the problems. Different organizations get this and different organizations don’t; it’s not universal. But I see a lot of people in the field, which I just kind of walked into, and I am just going to be the bull in the china shop crashing around for a while, and I see a lot of people who are doing things that don’t look like they are trying to solve the problem. Because I know what trying to solve problems looks like, and it’s not “well, you know, this is not really the project that we should be doing, but it’s what we think we can get funded, maybe”, and it’s not “well, I actually don’t know if it is going to have any real…”. Come on!
I mean, yes, I understand that we have to deal with funders, but if the funders will not fund the projects that are going to make a real impact, then let’s fucking talk to the funders and get them to change what the hell they fund, because we can do that. This is ridiculous. We cannot keep wasting time and money on dumb projects that look really cool and get us respect within the community but don’t actually help people on the ground. I don’t want to see helpdesks run without people who have the depth of skill to actually help the people they are helping; that doesn’t do any good. There are a billion other examples; that’s just one off the top of my head.
Security people, hackers: we also have a problem. We have massive fucking egos and they are getting in the way. I don’t care about your egos. My friend Siena, who is trying to keep her seven-year-old daughter from getting the shit beaten out of her by Moroccan riot cops, does not care about your egos. And when I don’t have any tools because I get told “oh yeah, GPG is totally something that you can teach in the field”: fuck you! Egos don’t matter. If you do a cool thing, everybody is going to say “hey, you did a cool thing. Really awesome! Let’s go and do other cool things together.” There are enough of us that if we work together we can do some pretty amazing shit. If we all dick-wave, we can’t do anything useful. I have been around the security community for a while, and I can do the whole “oh, your idea sucks, my idea is better!” just as well as anyone. It’s not useful. Let’s just stop harassing each other and instead have polite technical conversations like professionals do in the real world. You know, this is a crisis, not a career; we don’t have time to play those games anymore.
Another thing, and this is interesting for a lot of projects: I see a lot of stuff getting built without a theory of change. “This is a cool tool that I can build”, without an understanding of “hey, there is a larger battle going on; how does this change the overall landscape, where is this going, what am I actually trying to do?” Here is an interesting little example. Right now we have systems that are incredibly brittle. When they get compromised, they just get owned, because we are very bad at notifying the user. Our systems kind of fail atomically: an entire system fails at once. And because the entire system fails at once, the user generally doesn’t have a way of noticing and understanding what is going on. Maybe they just notice that their phone is acting funny, or maybe they don’t notice anything, because the guy who wrote the malware knew what he was doing for once.
So if you take that as one of the fundamental problems in security, then one of the interesting things is that in most situations you have an incredibly complex and very powerful pattern-matching CPU hooked up to your system that you are not using. It is called the user. Say we’re going to segment this problem. Let’s say you have a phone; instead of it just being a phone, I’m going to make it a wifi dongle and a phone. The wifi dongle has some LEDs that aren’t being used for anything, and I put a firewall and a deep packet inspection tool on that dongle. If it sees anything that it thinks looks weird, it lights up an LED; if it sees something that looks like voice traffic, it lights up another LED. And then you look at your phone and you think: wait, I’m not in a call, why does this thing have the voice traffic LED on? Now I can do something. Maybe I’m still owned, and I don’t know any techies, and I can’t really fix anything, but I can say: my phone is acting weird and I think it’s owned, or my wifi dongle is wrong; either way, something in this set of things is no longer acting correctly. OK, now I can take some kind of corrective action. Maybe I’m going to put my phone in the fridge and have this conversation in another room, or I’m going to leave my phone on the bus so it rides around town for a few hours while I go to the airport, or whatever.
What you actually need to do is put the user in a position where they can affect what the outcome is for them. This is how you look at a security problem, take a theory of change, and drive it all the way through. If you’re not doing that and you’re designing tools, you’re probably wasting everyone’s time.
So, hacker culture is kind of at a crossroads. For a long time it was totally cool to say: you know what, I don’t really want to be political, because I just like to reverse code and it’s a lot of fun, and I don’t really have time for politics because I spend thirteen hours a day looking at shellcode and socialism takes too long. That was great for a while, but we don’t get to be apolitical anymore. Because if you’re doing security work, if you’re doing development work, and you are apolitical, then you are aiding the existing centralizing structure. If you’re doing security work and you are apolitical, you are almost certainly working for an organization that exists in great part to prop up existing companies and existing power structures. Who here has worked for a security consultancy? Not that many people, OK. I don’t know anybody who has worked for a security consultancy where that consultancy has not done work for someone in the defense industry. There are probably a few, and I guarantee you that those consultancies that have done no defense-industry-related work have taken an active political position: we will not touch anything that is remotely fishy. If you’re apolitical, you’re aiding the enemy.
No neutral ground means that we have to have the culture war, that you have to say “you are either with us or against us”. Not “well, I guess that if democracy happens to occur, I’m not against it, but…”. No. No, we’re done. There is this notion, talked about in the context of harassment, that you accept what you walk by; your standard is what you walk by. So, if you see somebody putting somebody down, and it’s not like it’s a big deal, it’s not enough to write them up over, and you walk by it, well, that’s your standard of what you accept. That is your baseline. If that happens everywhere, then that is fine by you. And it will happen everywhere, because the standard is what you will ignore. The standard is where you say: “oh, I don’t really want to have that fight right now, it’s not such a big deal, I’m just going to go and get a beer.” Yes, that’s your standard for what you’ll accept in the world. So, if you don’t wanna accept that, fucking speak up.
This is why I was boycotting. I am still boycotting this event. I was not going to be here, and then I had a free day in Amsterdam and DrWhax said: “I’ll give you a stage and you can show up and rant.” So I am here and I am ranting.
Yes, we have culture wars going on. This isn’t about hacker in-group politics. The culture war is the big culture war: it is the fight for the narrative of what humans are. Do you believe that people are fundamentally altruistic, or do you believe that we will stab each other in the back for a loaf of bread at a moment’s notice, and that if you are not kin and kind, then fuck you, I’ll stick your head on a pike? You get to choose which of those is true. This isn’t just about what is true, but about what we want to be true. You get to build the world you want to live in. That’s what the culture war we’re having right now in the hacker scene is actually about.
This war has real cost. This is not free, it is not easy. It has a cost at many, many levels. It has costs within our community. You are talking about maybe running Noisy Square to become OHM next time. To run the Dutch camp in four years. If that happens it will probably split the Dutch scene for real. There will be two camps, and one of them might or might not happen, depending on how the funding goes. That has a real cost; it means that there will be real fights, there will be people who won’t come. There were a lot of people that I wanted to see this year who decided that they really couldn’t come after the behaviour of the organizing committee. I miss those people, I would have loved to see them here, I would love to have been here the whole time. This has real costs, because it is a real fight. People get hurt. But it matters; we don’t get to say no.
Back to that bit about the power of the state. Back to that bit that surveillance is inevitable as long as you have a state. Which it is, as long as you have the state that wants to centralize all the power, that wants to hold on, that is going to be the single centralizing entity. As long as you have empire, yes, you are going to have surveillance. It is an inherent problem in the Westphalian compromise. You know, for a long time it sort of, kind of, maybe worked, or we could at least pretend it worked a little bit more easily. But the state has been captured by a lot of other centralizing structures. There really is no such thing as an independent state anymore. I mean, the closest you get is North Korea, which is really just a client state of the Chinese anyway. You don’t have independent states because money has gotten into states, because global corporations and the global rich have bought those states. Now, the states also buy the global rich. It is like this mutual cancer of centralizing structures.
In this context surveillance is going to continue to be maximally deployed, and a functional public, a functional democracy, a functional dialogue can’t happen with surveillance. As soon as you start organizing to express an opinion publicly which is unpopular, you get banned, you get jackboots at your door, because they have heard every word you have said. Or you never have the idea of organizing dissent, because you live in Singapore and the brutality is right there out in the open; as long as you stay in the mall it is cool. But if you like independent media… Why would we have an independent media? The media exists to serve the state.
So if we want to have something that resembles democracy, given that the tactics of power and the tactics of the rich and the technology and the typological structures that we exist within have made that impossible, then we have to deal with this centralizing function. As with the Internet, so the world. We have to take it all apart. We have to replace these structures. And this isn’t going to happen overnight; this is a decades-long project. We need to go build something else. We need to go build collective structures for discussion and decision making and governance which don’t rely on centralized power anymore. If we want to have democracy, and I am not even talking about digital democracy, if we want to have democratic states that are actually meaningfully democratic, that is simply a requirement now.
But we cannot build a free world on an unfree Internet. You cannot build functionally decentralized, Internet-centric democratic structures on an unfree Internet. It is like the CIA trying to build a free democracy on a legacy of treachery and murder. It just doesn’t work.
So yeah, let’s fight.