With surveillance and mass information gathering becoming cheaper and easier, laws are struggling to keep pace. Who is fighting for transparency and working to protect your digital rights?
Our guest today is Danny O’Brien. Danny has been an activist for online free speech and privacy for over 20 years. He co-founded the Open Rights Group and has defended reporters from online attacks at the Committee to Protect Journalists. He is now the Director of Strategy at the Electronic Frontier Foundation.
“We need transparency before we will ever see reform.” - Danny O’Brien
Show Notes:
- [0:59] – Danny began working with Electronic Frontier Foundation (EFF) in 2005 but had been interested in them and digital rights overall since 1990 as a journalist.
- [2:18] – In the early days of EFF, the topics they were writing about seemed very theoretical to the everyday person. It became confusing, but a lot of these hypothetical situations were becoming reality in the early 2000s.
- [4:06] – The assistance Danny gave to journalists to keep them safer from online attacks began on a case by case basis.
- [6:23] – Danny explains that they are now seeing a rise in targeted attacks on journalists, often with government connections behind them.
- [7:50] – The tools to conduct spyware-style monitoring of a particular person are now ridiculously cheap, so the attacker could be anyone. In the early days, it always seemed like an attack was government-based or done by professionals.
- [8:42] – Journalists in particular are highly targeted for attacks because they have likely upset someone they’ve reported about.
- [10:49] – When Gmail was hacked in 2009, it became apparent that the people who were targeted in that attack were Tibetan activists.
- [11:42] – There has been a shift into a professionalization of attacks. It is someone’s job to clock on, hack and make someone’s life unpleasant, and clock off.
- [13:10] – One of the key cases of the last several years regarding digital privacy rights is the Apple San Bernardino case, in which the FBI wanted a backdoor into the iPhone of a suspect in a shooting.
- [14:36] – There is a gray area where governments are saying that as long as they have the ability to do these things, they should.
- [17:16] – The globalization of technology has caused confusion and blurred lines on what is legal and illegal in each country.
- [20:25] – Danny gives an example of a loophole in United States law regarding getting geolocation data from phones.
- [23:13] – The process of getting information is very murky, especially in the United States.
- [24:41] – We need transparency before we will ever see reform.
- [26:40] – Google would do something called The Creepy Test where they would demonstrate something they could do internally and determine whether it was something that could be used in a “creepy” way.
- [28:29] – Something may seem like a great idea but wind up causing more harm than good. Danny uses apps for tracking the pandemic as an example.
- [30:20] – As technologists, we are capable of acting very quickly and reaching for a toolkit that we can use.
- [31:19] – Sometimes we have to be careful that the solutions that are the simplest from a technological point of view aren’t just shifting the complexity elsewhere.
- [34:02] – The privacy consequences of simply uploading photos online were very unexpected in the early days of the internet and social media.
- [35:49] – In the 90s, there was a strong fight against encryption. Now, that encryption is what holds entire economies together.
- [36:08] – While encryption is useful, it is also being used by cybercriminals to hide illegal activity, particularly child pornography.
- [39:00] – We used to argue about digital rights but now all rights are digital. Now, all laws are about the internet.
- [41:53] – Danny and Chris discuss the passage of a bill aimed at sex trafficking that had several unintended consequences for sex workers.
- [43:12] – There is a big push right now to undermine encryption, particularly in the name of fighting sex trafficking.
- [44:04] – Technology has unintentionally created a lot of problems that need to be solved.
- [45:06] – Large companies, like Apple, Google, and Amazon, have a lot of control of our personal devices.
- [46:46] – We will start to see a lot of technological compromises between large companies and the government.
- [48:44] – Pick privacy tools and try out different ones to find what works for you. Using them exercises your right to remove trackers and ads from your web experience.
- [50:27] – In order to exercise your rights, you need to know them.
- [52:38] – People wind up being consumers of technology rather than active citizens in this digital community. Education is important.
- [54:54] – Danny shares links to useful material to educate yourself on surveillance (listed in the Links and Resources).
- [57:21] – Sometimes, lawmakers don’t know all about these technological problems, so write to your lawmakers when you have concerns.
- [59:30] – EFF is membership driven and a huge proportion of their funding is from individual members. If you are interested in becoming a member or donating to EFF, visit their website for more information.
“We used to argue about digital rights but now all rights are digital. Now, all laws are about the internet.” - Danny O’Brien
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Open Rights Group Web Page
- Electronic Frontier Foundation Web Page
- Danny O’Brien on Twitter
- Surveillance Self-Defense (SSD at EFF)
- Security Education Companion (SEC at EFF)
Transcript:
Can you tell our audience a little bit about who you are and how you came to be associated with the Electronic Frontier Foundation? Did I get that right?
You did. We always have that discussion; people always put "freedom" in there, which is a perfectly fine place for freedom to be. I joined the EFF in 2005. Before that, I was a journalist writing about the kind of topics that EFF works on, broadly digital rights or civil liberties online.
I've been tracking EFF since its beginning in 1990. EFF has been around since the very beginnings of at least mainstream digital communications and the internet. Of course, the internet precedes it by many years.
The deal was that some of the pioneers of that world began to realize exactly how unprepared the rest of the world was for the kind of effects that technology was going to have. I've been tracking it.
As I say, I was a journalist and wrote about this primarily in the UK, as you might be able to tell from the remnants of my accent. Actually, I sort of returned to the world of journalism, at least for a few years after my first stint at EFF.
For a lot of the first few decades of EFF’s existence, we ended up trying to describe things that seemed very theoretical to people. In the early 90s, we were talking about the importance of encryption when the US government was actually trying to pretty much ban it worldwide. We'd have to throw up these hypotheticals about what would happen if the government had everybody's private keys, and people would be very confused. Around, I think, 2008 or so, a lot of these theoretical issues that we were dealing with became very practical, very quickly. I actually shifted to the Committee to Protect Journalists for a while.
What we were seeing was, at that level, malware and state-sponsored attacks being targeted at journalists and other canaries in the coal mine, human rights defenders, who were the first people to be really individually attacked in that way by state actors.
We spent three years. I wouldn't say I was on the frontlines, but I was definitely talking to people on the frontlines—people in Russia, people in Syria, and of course in the Middle East during the Arab Spring. Then, with that kind of knowledge, I went back to EFF and helped connect the dots a little bit there from our theoretical work, to how we might do some more practical assistance to the people who were really at the cutting edge of the digital revolution, and the revolutions that that digital environment was provoking around the world.
What was some of that practical assistance that you provided for journalists?
When we started, we were looking at these issues on a very individual, case-by-case basis. We would have folks working as foreign correspondents in China and they would say, “I really think there's something odd going on with my computer.” We would end up poring over it or talking to people and seeing the patterns. We weren't entirely sure whether they were state actors, because attribution, as ever, was very difficult. But these were certainly very concerted attempts to target journalists, using knowledge that really only intelligence-service professionals might have.
To separate this from common-or-garden phishing attacks, the one that really stuck in my head and made me start thinking about this as government-level espionage was the Foreign Correspondents’ Club of China getting fake invites. There would be emails saying, “I’m the editor of The Economist, and I'm coming to Beijing; let's meet up.” They weren't being sent to the journalists themselves; they were being sent to their assistants.
Their assistants’ email addresses and names weren't widely known. They were kind of secretaries, the people who sorted out the correspondents' work in Beijing. That's the sort of information you wouldn't expect a common or garden criminal to have. It is, of course, exactly the information that those correspondents' government handlers would have. That was the kind of evidence we were dealing with there.
Since then, what we've seen is the rise of targeted attacks as the cost of zero-days has gone up. They've moved from an environment where anyone could do this kind of thing (sadly, that's still somewhat true) to attacks that you would imagine are being operated at a professional level, if not a governmental level. That was the early shift.
I think now we're just in an environment where a huge chunk of the targeting of journalists that isn't just petty cybercrime does have government connections, or, if not, you can certainly begin to suspect that it's people the journalists have upset.
Criminal enterprises who they've written negatively about or exposed.
Yeah, it's interesting. As I was saying that to you, I was rethinking, on the fly, some of the assumptions that I've always come into this with. There was a time when people's only idea of government espionage and cyber-surveillance was the NSA, and the thing I had to explain to them was that what we were seeing was a democratization of surveillance: the tools to conduct spyware-style monitoring of a particular person were now so ridiculously cheap that you shouldn't just assume it was the NSA, China, or even Russia. It actually could be anyone.
Again, some of the parameters of that: when we were dealing with journalists, we'd sit there and go, "OK, this is strange." This was usually in the denial-of-service attack space, when people would have an independent website and then it would be taken down. Of course, you'd sit with the journalist and say, "Well, who might have a grudge against your website?" They'd be like, "I'm a journalist. Anybody we've written about in the last six months might want to bring down this content."
A great example would be Brian Krebs. Whenever he would post about cyber criminals or whatever, he would be under denial of service attacks. His website would be down. He'd be attacked.
Motive and opportunity. If you're a journalist and you stick your journalistic stick into the beehive of the computer underground, then yes, Krebs gets it in the neck.
What we would see, I guess this was five to six years ago, was that not-particularly-well-budgeted state actors were suddenly in this game. We talk about advanced persistent threats. I think one of the things that people forget is that they're not terribly advanced. The persistence comes from an institutional persistence: there's somebody whose job is to get up in the morning and make somebody's life unpleasant, or monitor and surveil them. That has a particular pattern and a particular penetration that is different from the drive-by petty criminal or someone who is just pissed off with you this week.
Again, in one of the classic moments when we realized that these hacking attacks taking place against journalists were China or China-related, we never knew whether it was really the state. It just happened to be broadly allied with what we would suspect they would do.
For instance, when Gmail was targeted for hacks, I guess in 2009 or 2010, it was the fact that the particular Gmail users being targeted were Tibetan activists that was the tell, rather than, "Oh, yeah, we traced the IP and it was coming out of China's HQ," or whatever.
One of the next big indicators was when people were mapping activity to the 9:00 AM-5:00 PM working day of the IRA, the Internet Research Agency, in Russia. People would clock on, hack and spread misinformation, then clock off. That was the next tell. I think that's what I mean about this professionalization of these attacks.
It's almost corporatization. You've got your Monday through Friday, 9:00AM-5:00PM. This is the time that it's my job to go after journalists. That's what I do. I clock out and I go home.
Literally, in the case of the spyware and malware that's used against dissidents and journalists. Then you have companies like NSO Group who are taking and productizing (which seems a strange word to use) what until now were kind of bespoke systems: people taking a particular environment and doing, well, I was going to say pen testing, but there's no testing in it. It was simply penetration.
Now, we have this thing where there's actually an industry and a business supporting governments, or other well-heeled actors, who want to do that kind of thing.
We've seen lots about that in the news, at least with respect…I’m an Apple geek. I see that a lot with so-and-so has this gray box, which can get into iPhone 6s, iPhone 7s, iPhone 8s, as the number slowly creeps up to the one that you've got.
Sure. Returning to the digital civil liberties space, this was what was so odd about one of the key cases of the last few years in the United States about people's right to have some privacy in their devices: the Apple San Bernardino case, where the FBI began and then dropped a judicial attempt to get Apple to essentially re-engineer their operating system to provide the FBI access to one of the San Bernardino terrorists' devices.
In that situation, what was odd is that they were taking a legal run at this when, as apparently became the case very shortly afterward, a particular company had actually learned how to crack that particular phone. You have this odd thing where, when the tech doesn't work for breaking these things, there's an attempt to make a legal attack on privacy.
What's disturbing about that, I think, is that all of these things are going on in this incredibly gray area where governments are saying, "Well, as long as we're capable of doing these things, we should and can do these things." There really isn't a tight enough legal structure to determine both when they have the right to do those things and what limits there should be on the technology they use to achieve it.
It's funny. I'm not a journalist, so these things don't directly impact me, and I'm not a terrorist. I can see both sides of these arguments. You're like, "Well, gosh, when someone is going to go out and kill a whole bunch of people, I want people to have access to that information. But I don't want them spying on journalists."
Do I trust them with the keys? Do I not trust them with the keys? Even if I trust this guy with the keys, what if he drops the keys and gives them to somebody else?
What we want is a set of clearly defined limits on these things. Definitely, one of the saddest truths about the last 20 years is that a lot of the actual practice of digital interception and surveillance has come not out of our existing judicial standards for this, which is: you get a warrant. If you want to tap somebody, you go to a judge, you ask for a warrant, and you give a very specific reason why you want that.
Instead, it has come from the practices of the intelligence community, which by definition exist outside of the normal rule of law. We have had this structure since the 70s in the United States, and I think this is pretty much the norm internationally: we understand that the spy agencies shouldn't operate domestically, but anywhere else in the world, anything goes.
By definition almost, they break laws in other people's countries. I'm pretty sure that what James Bond does in the movies is not in line with the laws of Russia. That's actually the wrong way to think about it. One of the reasons why all these intelligence services were given such a broad brief is partly out of sight, out of mind, this is happening somewhere else in the world. Also, theoretically, at least, it was only affecting a small circle of people.
Essentially, the brief is that if someone is an agent of a foreign power, you can do these things to them. What the globalization of digital technology has meant is, first of all, that that's a really odd dividing line: we're allowed to do this outside the border, but we can't do it inside. It was unsustainable. What ended up happening, and this is what the EFF has been fighting in the courts since 2007, is that inevitably that surveillance program came home to the United States.
The other part of it is that the numbers grew. The surveillance programs that were taking place became mass surveillance programs. I can imagine a world where people could use targeted surveillance tools, such as the spyware we described, on individual journalists, but bound by the normal process of law where you go to a judge and, touch wood, the judge would say, "No, you can't have a warrant for investigating this journalist just because they said something bad about you. You can have one for this person, because we suspect them of planning a terrorist act."
We've kind of skipped that bit entirely. We've gone straight from what we had before to just spying on everybody and then collecting that data, even applying it to everyday citizens domestically. It's like we've got so much worse at policing and judging surveillance than we ever were in the past. And we weren't great at it in the past, to be honest.
It's what technology seems to do to everything: there's an exponential factor to it. It used to be that if you were a surveillance organization, whether government or domestic or whatever you want to call it, and you wanted to follow somebody, you'd need three shifts and six people on each shift because you don't want them being seen. You're talking about multiple people to watch one individual. We now have one individual that can watch thousands, tens of thousands, millions of people, or even no individual at all because it's all automated. The scale of it has grown exponentially.
When you're trying to think about how these laws, these precedents, and these regulations should work, one of the things that really comes home is how much previous norms and laws lean on the physicality and the basic economics that existed in an analog world. If you just get rid of those, or, as you say, things become exponential or collapse to zero cost, it really changes that dynamic in ways that you're not expecting.
Let me give an example, which is one of the loopholes in US law that we're actually fighting pretty successfully, with some pretty major court victories: how easy it is to get geolocation data from a phone, without getting too much into the details.
It used to be that there were two standards for communications warrants: one for getting the contents of a call, and the other for getting everything else.
The metadata.
The metadata, exactly. The argument was that the metadata wasn't as revealing as the contents of the call. Geodata, which is basically where your mobile phone has been at any point it was on, for months in the past, for as long as the ISPs or phone companies log it, is metadata. It's pretty revealing metadata.
Actually, I would say it's more revealing metadata than the content of any of your phone calls. I can't remember the last time I had a phone call, but my phone knows exactly where I've been. Actually, to be fair, I haven't been anywhere for the last two months either. It could tell if I left.
We've been fighting this quite, quite hard. I think judges, who now have mobile phones themselves, are beginning to realize that this is not an accurate dividing line between serious and non-serious investigations.
One of the things that really transformed that whole process was the police used to go to the phone companies to get this information. Eventually, one of the phone companies said, “Look, let's make this easier and we'll write you an internal website. You can go and make requests from that internal website.” Of course, the number of requests went from a few thousand to millions because it was easier, because suddenly the economic and the bureaucratic cost had dropped.
Those are the parts of the environment that aren't necessarily part of an inevitable digital rationalization of these things and aren't written into the law but have a huge effect on the level of surveillance, who gets surveilled, and what people can do with that surveillance.
Then it gets even murkier. They don't have to go to the phone company to get the data, because the phone companies, and now the app developers, are selling it en masse. You don't even need a warrant for the information; you just go out and buy a subscription. Now you have the data without the warrant.
Again, I'm speaking mostly of the United States here. The laws constrain what the government can get, but they don't really currently constrain, in any useful way, what companies can voluntarily provide. There are some constraints on, as you say, telephone companies.
"The laws constrain what the government can get, but it doesn’t constrain what companies can voluntarily provide." - Danny O’Brien
I thought it was very revealing at the beginning of the pandemic when a lot of companies, in a burst of civic duty, and also for the chance to get out a press release, published the information they had about how people were moving around. It switched pretty quickly from, "Oh, that's fascinating that all these people go to Florida and then fan out across the country," to, "Wait, there's a company that I've never heard of that knows where everybody is at every point in time."
I think with that kind of underbelly, what we need is transparency before we can get to reform. Sadly, we get that in the government space through whistleblowers, and in the commercial space from people who don't realize, when they go public with their product, that everyone is not going to reply with, "Wow, how cool," but instead, "What the hell?"
Look at what happened with Clearview AI: a facial recognition system that just spidered everybody's public pictures and then devised a fairly straightforward system where, if you show it a picture of someone, it can tell you who they are. That sounds great in the lab, but if you ever think about the ethical and political consequences of something like that, it becomes far more serious.
It starts to become a sci-fi movie from a few years ago. In a sense, it’s like, “Oh, now as I walk by a street sign, the display changes and projects an ad specifically to my shopping behavior.”
It's actually amazing how many things that we would have thought of as completely sinister, weird, and reminiscent of Stasi-era East Germany we now take for granted.
Look, I'm literally talking to you with a camera above a screen. That is actually the thing I remember from when I read 1984 for the first time: a TV that has a camera built into it, that's authoritarian. Now we have these systems all around us.
I remember listening to Eric Schmidt speak when he was still the “adult” at Google. I remember him describing to a bunch of somewhat aghast journalists, very brazenly, that often Google would internally demo something and they would have a creepiness test where they would show that they were capable of doing something internally and then go, “That's too creepy. No one's going to like it if we reveal that we can actually do that.”
Actually, pervasive facial recognition was one of those things. I remember discussing that with Google engineers and product folks. They were thinking about introducing it into their tools. We were like, “You should know how this is going to be misused in authoritarian or near-authoritarian settings.” I think that creepiness line is something that is very, very fragile. It disappears almost completely in emergencies.
Again, going back to the beginning of the pandemic, it was amazing how many editorials were written by people. I'm sure a few people listening to this podcast will be nodding along when people are saying, “We've got this pandemic. We need to be able to track everybody. Hey, we have these companies that are already doing that.”
At EFF, we like to actually be correct. We spend a lot of time looking into this. The most fascinating thing for us was talking to epidemiologists and talking to the engineers who were building this stuff at Google and Apple.
I think the important key component was people going, “This might not help as much as people think it will help.” We're mashing two things together here, right? We're mashing the information we collect in order to target advertising and the kind of data that epidemiologists want. It's not the same. We're throwing tech at this problem.
I think six months on, we realized that there weren't going to be any quick answers, that the solutions actually come from a reasonable use of a lot of different strategies, and that no country has really managed to do this.
Google and Apple built their exposure-notification system into the phones, which actually has some pretty good privacy qualities to it. But it wasn't what people needed, or it certainly wasn't sufficient. The sort of stuff that people were proposing right at the beginning would have been so incredibly privacy-invasive and probably wouldn't have worked, which is the worst of all worlds.
You give away your privacy and it doesn't actually solve the problem, which, again, is what we saw after 9/11, right?
I think, in the early days; not that we have a thorough understanding even now of how coronavirus works and how it spreads. We still don't have a great, "Hey, we can tell you exactly how long it stays on surfaces or whether it's airborne." It's getting there. But in the first weeks and months, we were just throwing up our hands going, "Oh, my gosh, we're all going to die."
As technologists, we're very capable of acting very quickly and reaching for a toolkit of things that we can use. A lot of what EFF does, you could throw into two rough camps, which is free speech and privacy.
In the early days of the internet, I viewed those two things as an uphill and a downhill battle. It was really a downhill battle to fight for free expression. It was kind of baked into the technology that free expression was going to increase; it would actually require people to step up and start proactively censoring and blocking it.
With privacy, we had an uphill battle because the technology weakened the protections against your intimate details just leaking out. Of course, the two things are somewhat connected in that way. Sometimes we have to be really careful that the solutions that are simplest from a technological point of view haven't just moved the complexity somewhere else in the system.
We're not actually solving a problem. What we're doing is creating 20 other problems in completely different areas of what we're dealing with.
The law of unintended consequences.
Yeah, absolutely. Like anybody who's programmed, anyone who has built hardware knows that there's the noble dream of like, “I can see it very clearly how to achieve this thing.” Then, everything conspires against that. There are lots of these things that complicate your simple answer. Sometimes those complications are in the area of human rights.
I remember seeing a machine-learning study where they had looked at people's retinal scans correlated with a whole bunch of different health conditions. They got to the point where you can throw your retinal scan in and it can tell you whether you have some particular heart condition.
The initial thought was, "Oh, that's awesome. Now I could just look into a camera." But what if they start correlating other stuff?
Right. I think we're actually pretty bad at predicting—even experts. I can't overstate how much EFF’s work often consists of just looking at technology and not trying to catastrophize it; not going, “Oh, my goodness, this could be very bad if it falls into the wrong hands.”
In many ways, our instincts are kind of the opposite. Our instincts are, "How could this improve the way things are?" And really, even if you spend a lot of time thinking this stuff through, things can take you by surprise.
I just think back historically: almost everybody I know who was involved in the early days, well, in the dot-com boom, uploaded all of their photos to the web. I think people are much more cautious about that now.
Absolutely. They were cutting edge. They understood how to build a gallery program online, understood the nature of image recognition, but they would never have thought, "Oh, I'm contributing to a facial recognition database. I'm contributing to something that will learn what locations look like." Every picture on the public internet is really contributing to a system that would allow you to locate where a photograph was taken.
I'm still happy that all these photos are up there, but those almost immediate consequences were not things that we even, the experts, anticipated.
It's funny that you mention that. There was a program, an app designed to interrupt child sex trafficking. What they were asking people to do was, any time you visit a hotel, take a picture of the room, so that if we ever see those rooms in the background of sex-trafficking images, we can find them. We'd know, "Oh, gosh, this happened in this room, in this hotel," and you could probably even date it, which is an incredibly noble intention. But where could that go wrong?
I think those best intentions often are the things that remove that "Oh, this is a little bit creepy" sense from people. One of the biggest battles we're fighting at the moment, and I referred to it earlier, is that in the 90s there was a really persistent attempt to just prohibit the use of strong encryption. This is the strong encryption that now holds the economics of the world together: the things that protect credit cards, the things that protect your data at rest.
It's just math. It's very simple math. There was a very determined attempt to prohibit, literally, the export of this. People were literally being targeted because they were moving a mathematical formula from one part of the world to the other.
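To give a flavor of what "just math" means here, below is a toy sketch in TypeScript of the RSA idea behind those export fights: modular exponentiation over big integers, using tiny textbook demo values. This is purely illustrative, not the actual code at issue in the 90s and nothing like a secure implementation; real keys are thousands of bits long and use padding schemes.

```typescript
// Toy RSA-style demo: "encryption is just math."
// Textbook parameters: n = 61 * 53 = 3233, public exponent e = 17,
// private exponent d = 2753. Do not use anything like this for real.
function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  // Square-and-multiply modular exponentiation.
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const n = 3233n;
const e = 17n;   // public key
const d = 2753n; // private key

const message = 65n;                        // a small number standing in for data
const ciphertext = modPow(message, e, n);   // "encrypt" with the public key
const recovered = modPow(ciphertext, d, n); // "decrypt" with the private key

console.log(ciphertext, recovered === message); // 2790n true
```

The "munition" that the export rules tried to restrict boils down to arithmetic like this, which anyone can write down.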
It’s ridiculous to us, but the argument was, "This will fight terrorism." Now, what we're seeing with bills like the EARN IT Act in the United States, and really concentrated attempts in Europe, revolves around child sexual imagery. The argument is that we used to be able to monitor everybody, spy on everybody, and scan for these abhorrent images. Now, because people are using encryption more widely, we can't do that anymore; therefore, we need to stop this advance.
Again, the arguments are compelling if you think of them from the point of view of what they're trying to stop. But that impetus hides the huge societal implications: not only requiring every piece of software to be fundamentally insecure, but also that if somebody did invent something that was more secure, we would have to ban and prohibit it.
Again, to give an example, one of the arguments we're having in Europe at the moment, where this is kind of taking the lead just this week, is that this will make things insecure. Also, you're going to have to ban software. You're going to have to create a great firewall around Europe that prevents secure tools from other countries from being used in your collection of nations.
People will have to delete Signal or WhatsApp when they enter. Creating a society that, for the sake of one very, very important rule, re-engineers the whole rest of society is so strange and so hard to explain to people.
I think the whole view is, it's about the children. That's their argument. Before you can push back against that, you have to explain that it's not just about that one situation; there's a much wider, broader implication behind this.
My colleague Cory Doctorow and I used to argue about digital rights. Well, it turns out that all rights are digital rights these days. All laws are laws about the internet.
"All rights are digital rights these days. All laws are laws about the internet." - Danny O’Brien
In the early years of the EFF, the hardest thing was to go to judges and say that this small change in how you regulate the internet is going to have massive ramifications on the next 10 years of society. It was very hard to explain to them, but in the end, the stakes were pretty low for them.
If we convinced them one way, they would go, "Well, I don't know whether this is going to change the world. It doesn't seem that important, but I guess I'll do this." Now, it's kind of flipped. Everybody knows the internet is super important, which means that whenever there's a problem in the world, somebody somewhere decides that problem is the internet's fault or can be fixed by messing around with the internet in some way, without realizing, because they're applying a very specialist viewpoint, just what the ramifications everywhere else are going to be.
We so often have to bring other people into the room, whether it's bankers who come in and go, “Actually if you prohibit encryption in the way that you want to, the entire financial infrastructure will collapse.” In my case, bringing dissidents, activists, or members of diverse and targeted communities into the room to say, “Actually, you might be even doing this in our name, but this is not going to help and it's going to make things worse.”
A good example of this, a few years ago, was the thing called SESTA/FOSTA. It was about sex trafficking. It was meant to be this fix for sex trafficking, because internet companies weren't liable. It's the whole Section 230 question: should companies be liable for the communications of their users?
The argument was, we'll pass this law that will just narrowly make intermediaries liable for the sex trafficking that goes on. It seems like a reasonable argument for a very serious cause. We predicted the consequences before it went through and tried to explain them to everyone, only to have everyone say, "So you want child sex trafficking?"
We said, “Look, what's going to happen is all these sites are going to remove sex work, casual encounters, things that look like prostitution. What that's going to mean is it will push all of that underground. It will just remove the ability for sex workers to communicate with one another, to organize. It will restore what used to be the status quo, which is that every sex worker had to either go on the streets or had to have a pimp and so forth.”
That's exactly what happened. They passed the bill, and sex workers, who are literally the people this bill was trying to help, all came forward and said, "This has made us less safe."
It's hard to predict those things unless you understand the way that the internet and technology connects and influences everything. Like you said, unintended consequences.
What do you see as the future or upcoming threats in terms of digital rights and civil liberties?
Well, I mentioned one of them, which is that there's definitely a big push right now to undermine encryption, using the argument about that abhorrent kind of child imagery.
The ones I worry about are the ones where there is bipartisan support, which is so rare in the United States right now. When you see it, you realize that this is something that may just be accelerated through without due consideration. I have to say, politicians in the US are now so frustrated that they cannot achieve anything that, if they can achieve something, they'll do it. There's a lot of that which is not getting due consideration.
Another one is the tech backlash. As I hope is conveyed by a lot of what I've been describing, I think there are a lot of problems that have been created that we have to work on solving. The problem with the tech backlash is that the solutions are often going to be worse than the problems themselves. SESTA/FOSTA was that sort of attack on intermediary liability protections, and clumsy attacks like that, again, cause problems.
The other one is taking advantage of the power of these big companies. Again, we talked about Apple. Apple got to where it is in many ways by having such complete control over its ecosystem. Facebook and Google now have similar levels of control over software in less hardware-based environments. Amazon is somewhere in between.
Well, the problem is, all of those companies now have a degree of control over our personal devices and our personal information that is unimaginable. As you pointed out, the problem there is that eventually governments will go, "Well, we'll just get these people to do the spying, or get these people to rewrite the software, to turn our personal devices into tracking and spying devices."
That's a line that's very hard to fight, because everyone hates the tech companies and doesn't want them to have power, and the governments really want to strike some kind of deal so they get to share that power.
The real problem is that what governments should be doing is dealing with the problem of these companies having so much power, rather than entering into some sort of power-sharing agreement with them. The thing that strikes me as the biggest worry is that we've set ourselves up in a world where governments and politicians feel that they cannot survive without the continuing existence of these quasi-monopolistic powers.
You can’t actually dismantle them. If we dismantled them, it would be anarchy, because we wouldn't have the ability to censor at the level that they can censor, or to collect data, vital societal data, at the level that they can collect that data.
It's almost as if the government is so dependent on these entities that we can't dismantle them, but we also can't allow them to go forward and continue doing what they're doing.
Yeah. What we're going to see is a big bunch of technology regulation coming down the pike. So much of that technology regulation ends up being a compromise between what the politicians want and what the companies want. The real problem is, there are no users in the room. There's nobody representing what the users want and what rights users should have.
We at EFF try to play that role. Obviously, it helps if other people are in the room too, demanding these things politically and raising awareness of them. When you can, take back control for yourself. If everybody had switched to iPhones, then it would be very hard for anyone to argue that an app store, or locking down the system, isn't an absolute requirement. It's always useful, even if you're in the minority, to be exercising the rights that you have.
I’m trying to think of the right way to phrase this. What do you think are the most fundamental rights that we should be exercising? What are the switches? What are the settings that we should be thinking about? Not necessarily going to this directory, turn this off, and turn that off, what are the things that people should be thinking about in terms of ownership of their rights?
Right. You can take a very individual point of view. I mean, I'm talking to you on GNU/Linux. I try to use software that gives me a degree of that kind of access and freedom. But there's a limit to that kind of individual selection. I don't think everybody's going to end up using Linux.
I think a sensible choice is always to think about your own autonomy. I think we've all been in that situation where we've got really excited about a piece of technology, we've gone into it, and then we've gone, "Not again. I'm locked into this environment." Bearing that in mind, pick privacy-protective tools, or just tools that let you move your data from one place to another, and try out different tools.
We have Privacy Badger, which is an easy plugin that you can download from www.eff.org. The point of it is that it exercises your right to remove trackers and ads from your web experience. Using tools like that is a really excellent way of incrementally improving your own privacy.
Also, you notice when they go away. We're entering into a big fight with Google and Chrome right now, because Chrome has been slowly narrowing the API that extensions can use. It's got to the point now where they're kind of going, "Well, ad blockers need to use a bunch of this API that we're not really interested in maintaining anymore." That's a canary in the coal mine again. That's one of those things where, if you notice it going away, you will kick up a fuss.
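To make that API narrowing concrete, here is a rough sketch in TypeScript of the kind of Manifest V2-style blocking listener that tracker and ad blockers have relied on. The filter list is made up for illustration, and this is only a sketch of the general pattern, not any particular extension's code; under Chrome's newer extension manifest, this kind of arbitrary blocking callback is replaced by a more limited set of pre-declared rules.

```typescript
// Manifest V2-era blocking pattern (requires the "webRequest" and
// "webRequestBlocking" permissions in the extension manifest). The extension
// inspects every outgoing request and decides, in code, whether to cancel it.
const TRACKER_HOSTS = ["tracker.example", "ads.example"]; // hypothetical list

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    const host = new URL(details.url).hostname;
    const isTracker = TRACKER_HOSTS.some(
      (t) => host === t || host.endsWith("." + t)
    );
    return { cancel: isTracker }; // block the request if it matches the list
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```

When that blocking callback goes away, extensions can only pre-register declarative match rules, which is exactly the kind of quiet narrowing worth noticing and kicking up a fuss about.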
Again, in order to preserve these rights, someone has to exercise them. In the case of technology, it's about building something which is privacy-protective, that gives you a lot of autonomy, that lets you not just obey what a company or a government wants you to do, and then building your life a little bit around that. When that goes away, you can go, “OK, alarm bells are ringing. I need to call my congressperson, take to the streets, or install GNU Linux, after all.”
Well, it's finding that balance of, “OK, I need to take steps to make myself aware of these things. I need to take steps to protect myself. What's the most obscure OS that I could possibly install? I know I'm never going to get any support for it.”
Obviously, that's not realistic for most people, but it’s finding that balance between the two things.
I'm not taking a sort of security vegan line about this. What I am saying is that I think everybody listening to this podcast probably is an authority figure on technology, whether they want to or not, like you. They're probably the people that other people come to and say, “What should I be using?”
I think because of that, it sort of falls on us to try and think through consequences. I also sort of think that sometimes—I know some people go completely the other way—we kind of project our threat model onto other people. When somebody says, “Oh, I want something that's really easy to use,” we give them something which locks them down into a world that we personally might not want. Hey, they're not as complicated as we are.
I think that sometimes we overdo that. Sometimes we don't walk people through all the consequences of what they're doing. People end up being the consumers of technology without being active citizens in a technological world. Seems very vague. I try to be specific about this. When you're helping someone install something, give them choices and explain what those choices are.
Think about explaining why you might need an ad-blocker. It's not just about blocking the ads, it's about blocking that malware and explaining a little bit about the economics about how all of these things work. I think it's useful sometimes to explain to people that some of the stuff they're being told about cybersecurity is there to scare them.
We know that; you and I know that. A lot of this stuff is snake oil. Well, a lot of it is written in a way that is designed to make people go, “Oh, that sounds terrible. I need to do something about it. I need to spend money.”
Really, what I've learned in teaching journalists about security is that being scared is the least useful thing. Fear just doesn't help people make reasonable assessments about security or about the things they need to buy. People need to be treated like citizens again, rather than as people to scare into doing something.
I think it's sort of a tragedy that as the unintended consequences of technology begin to play out, we've ended up playing up the fear of the unknown, the fear of technology, to try and get people to do what we want. We try really hard not to do that at EFF.
It's hard sometimes. Pretty much everything we say these days sounds so terrifying. Fundamentally, if you go to sites like Surveillance Self-Defense, ssd.eff.org, and we also do another site called sec.eff.org (I'm sure these will be in the notes), SEC, which is aimed at educators, explains the thinking behind the advice that we give. A lot of that came down to us going, "Oh, we have to learn how not to scare the living daylights out of people. They need to be able to think about this clearly, and fear does not help."
I like that. I think it was on SEC where you create your own security plan. It really helps a person understand the thought process behind what they're doing, not just how to do it. It's like, "OK, what do you want to protect? Who are you trying to protect it from? What are the consequences if that person gets access to it?"
And probably the most important one as well: "How likely is this to happen?" I don't need a 40-foot wall around my house, because the likelihood of someone needing to climb that high is astronomically small.
People always come to us going, “Oh, well, I was going to use this software, but I heard that the NSA can get around it.” I'm like, “If you're being targeted by the NSA, an app is not going to help you.”
The kind of unpicking you have to do there is that they've read about mass surveillance by the NSA, and you're trying to bring them up to speed about what that means. What it means for them is a very minor detail. It's not like being targeted as an individual; it just means that the government knows exactly what everybody is doing all the time. It's a different kind of worry.
People switch, right? People shimmy from being scared to having this sort of privacy nihilism, as we call it, where they just go, "There's nothing I can do."
Why should I bother having any defenses?
It's an even more depressing version of, "Well, I've got nothing to hide," which is, "Even if I did, they'd find it." What you have to get people to is the point where they're worried about this stuff, or they understand it enough to be concerned, but their next act is to talk to somebody else about it, or to write. Like I said, write to your lawmaker.
One of the things that I find fascinating is that lawmakers don't really know about this stuff. When they do, if you catch somebody at the right time, they not only go, "Wow, this is really worrying," they go, "And I could be the person who fights for this." It's an open market. There are so few politicians picking this up and running with it right now.
We don't live in a technologically determined world. There were many decisions that were made that got us to where we are now. There are many decisions we can make to make a better future.
I think that's a great place to end. I could probably talk with you for hours about this stuff. We’ll both be gleefully smiling the whole time.
I know, it's very odd. If you could see the video of us, I'm talking about these dystopian things, but I'm grinning with excitement, because there's something that we can do about it.
For many of us, at least in the United States, if we don't like the way things are, it can change.
It really can. The reason I like technology is that it's an engine of change, and it should be empowering. I think anybody who likes technology has got a taste of that empowerment. I think that people should be able to take that, run with it, and change it, even when it's the technology that you want to fix.
"The reason I like technology is that it’s an engine of change, and it should be empowering." - Danny O’Brien
That's great. If people want to support the Electronic Frontier Foundation, or if they want to follow you, how do they find you? How do they find out more about EFF?
I'm @mala on Twitter and Mastodon, because that's where I roll. The most useful thing you can do is pay for my sandwiches at work and donate to EFF. We're actually membership-driven; people don't realize that. A huge proportion, the majority, of our funding comes from individual members. If you sign up and become a card-carrying EFF member, you're literally enabling us to do the judicial, technological, and activist work through which we're trying to make the world a better place.
For those in the US, are you a tax-deductible, non-profit?
We're a 501(c)(3). I should also mention that I was actually international director for EFF as well, and we do stuff all around the world. There are digital rights groups like us in pretty much every country. If you can't find one, give some money to us, or just write to me and I will find you one. Maybe you can start one.