Technology Regulation is Outdated with Bruce Schneier

Regulators have to invest a considerable amount of time keeping legislation and policy up to date on technology and AI, but it’s not easy. We need floor debates, not for sound bites or political gain, but to move policy forward.

Today’s guest is Bruce Schneier. Bruce is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over a dozen books, including his latest, A Hacker’s Mind. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. He is a fellow at the Berkman Klein Center for Internet & Society at Harvard University, a lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation and Access Now, and an advisory board member of EPIC and VerifiedVoting.org.

Show Notes:

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Bruce, thank you so much for coming on the Easy Prey Podcast today.

Thanks for having me. When was the last time I was here?

You have not been here before. This is your first time. I’m very pleased to have you here. I’ve been doing this for about four years now, and I can’t believe we haven’t had you on yet.

It’s nice to be here.

These days, you’re super passionate about AI and where it intersects with security. Let’s talk about that.

All right. I am ready. I teach here at the Harvard Kennedy School, which is a public policy school; it’s a postgraduate school. Basically, that means I teach cryptography to students who deliberately did not take math as undergraduates, which is entertaining, to say the least. They are all very interested, not in the tech, but in the policy implications of the tech, which I find fascinating. A lot of us in cybersecurity have moved into policy, and it’s interesting teaching students these things.

They are interested in new tech and how it’s going to change things. Some look at it through an arms race lens, US versus China, a national security lens. Some look at it through a competitive lens, or a human rights lens. It is clear to me that the technologies of AI are just going to change a lot of things. I don’t know when.

A lot of what we’re seeing is marketing bullshit, but some of it’s real, and it’ll get better. I’m thinking a lot about the intersection of AI and cybersecurity as a profession; it’s something we really need to start paying attention to.

Maybe we’ll go down this rabbit hole a little bit. You’re talking about policy. Is one of the challenges with tech that policy can’t keep up with it? Policy always seems to be running 10–15 years behind the tech when it comes to how we manage it.

All right. Is that new?

No, I think it’s been going on for decades.

That’s true for pharmaceuticals. Was that true for automobiles? Was that true for aircraft? Yes. We in tech have this story that legislators can’t regulate it because they don’t understand it. Legislators regulate things all the time.

Is it a question of them not understanding it, or of them not being able to understand it? I think there’s a bit of nuance there.

That also might be true for aircraft design.

That’s true, OK.

That might be also true for pharmacology. We in society figure out and have to figure out how to make laws in areas where the lawmakers don’t have the expertise. Tech is one of them.

Now tech is fast-moving, probably faster-moving than aircraft design or pharmacology. I think we need some more agile tools, but the notion that tech can’t be regulated has been very harmful. The lack of regulation is why things are so bad out there.

We need powers that can check each other. If there is no check on corporate power, you get a corporate dystopia. You get monopolies that do whatever they want, that are extractive, and it’s bad for society. I’m with you, but I’m not with you. It is a convenient excuse, but it’s too convenient an excuse.

My point is not that we shouldn’t have regulation. I definitely see the pitfalls of a lack of regulation. But from time to time I wonder, do politicians overshoot, or get overly influenced by money?

In the United States, certainly. We in the US don’t pass laws that money doesn’t like, period. Money runs politics in the US. It’s one of the reasons we’re in such bad shape.

Europe is different. Europe has its own issues. Europe will sometimes overshoot, be over-regulatory, but at least they’re friggin' trying. Comprehensive privacy law came out of Europe. The EU AI Act is European, as is the Digital Markets Act. And their regulators don’t issue fines that are rounding errors. They issue fines that companies notice.

Your goal is to change corporate behavior. You’ve got two options: you can jail executives, which I’m totally in favor of, or you can levy fines that affect their share price and are not, like, one-tenth of what they paid their lawyers.

This is hard. We’ve built a world where corporations have the rights of people, yet they are these immortal hive organizations: sociopathic, single-purpose. They’re not constrained by the psychological constraints that regular humans are, yet they’re handed most of the rights and responsibilities of humans. That’s just not working, but it’s what we’re stuck with.

This is a lot of what I’m thinking about. How do we look at society from a security perspective? What are the incentives? What are the motivations? What are the security controls, and how well do they work? We could talk about software in the same way, but here I’m talking about the set of rules that run the economy instead of the set of rules that run your laptop.

You talked a little bit about companies being able to abuse their power. In what ways are you seeing that happen in security, privacy, and AI?

I think we’re just starting to see the AI abuses. I’m trying to write about this. To me, AI is a power magnification tool. It makes the user more powerful. Will it further empower the already powerful, or will it somehow democratize power? That’s really what we have to deal with. I think that’s true for a lot of tech.

I wrote in my latest book, A Hacker’s Mind, about loophole finding. I’ll pull the book out; it’s got a pretty cover. In it, I’m writing about hacking society: instead of hacking computer code, we’re hacking regulatory code. Think of the tax code. It’s a set of algorithms, and there are bugs. There are vulnerabilities; we call those vulnerabilities tax loopholes.

There are exploits; we call them tax-avoidance strategies. There are hackers; we call them accountants and attorneys. But it’s very, very parallel. Imagine that you or I find a new loophole, a new hack of the tax code. By that I mean something the letter of the rules permits, but that’s unintended or unanticipated. It’s a hack.

It’s not really a cheat. It’s a way to bend the rules. If we find one, we might save a few hundred dollars on our taxes, maybe a few thousand if we found a really great loophole. If Goldman Sachs finds that same loophole, they make millions. They have more raw power to magnify.

I think about this in terms of AI. There’s an essay I wrote, in 2022 maybe, called “The Coming AI Hackers,” where I imagine AIs finding loopholes. Already, there’s a lot of research on AI finding vulnerabilities in source code. They’re not very good at it yet. They’re OK. They’re going to get better, of course. But if you think about it, it’s exactly the kind of thing you give an AI.

Here are a few million lines of code, source code, object code. Go through it all. Find the loopholes. A lot of data, pattern matching. They’re going to get better. They’re going to get really good at it. In cybersecurity, this is really interesting news because it benefits both the attacker and the defender.

With legacy code, the attacker uses it to find all the vulnerabilities and attack systems. The defender, on the other hand, can use it to find the vulnerabilities and fix them. You can imagine a future where this tool is built into the development process, part of the compiler, and all vulnerabilities, or at least all findable vulnerabilities, are removed from code before it’s ever fielded, as a matter of course.
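
As a minimal sketch of what that “part of the compiler” idea could look like as a build-pipeline gate: here, find_vulnerabilities() is a hypothetical stand-in for an AI code scanner, stubbed with a toy pattern match so the example runs on its own; no real product or API is being described.

# Sketch: vulnerability finding as a gate in the build pipeline.
# find_vulnerabilities() is a hypothetical stand-in for an AI code
# scanner; here it is stubbed with a trivial pattern check.

import re
import sys
from pathlib import Path

def find_vulnerabilities(source: str) -> list[str]:
    """Hypothetical AI scanner, stubbed with a toy pattern match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r"\b(eval|exec)\s*\(", line):
            findings.append(f"line {lineno}: dynamic code execution")
    return findings

def build_gate(src_dir: str) -> int:
    """Return a nonzero exit code if any findable vulnerability remains,
    so the build fails before the code is ever fielded."""
    findings = []
    for path in Path(src_dir).rglob("*.py"):
        for f in find_vulnerabilities(path.read_text()):
            findings.append(f"{path}: {f}")
    for finding in findings:
        print(finding)
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(build_gate(sys.argv[1] if len(sys.argv) > 1 else "."))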

That sounds like a utopia.

It does. “Remember the crazy old days when there were vulnerabilities in code? Those were dumb years.” The transition point is dangerous, though; all the legacy stuff is vulnerable. I think this is actually a huge win for the defense in AI and cybersecurity. I think this will happen. But let’s go back to my generalization.

Imagine we train an AI to find vulnerabilities in the tax code. Will it learn that you register your company in Delaware and your ship in Panama? Do you remember the Double Irish with a Dutch Sandwich, that tax-avoidance strategy that Google, Apple, and other companies used to save billions in US taxes? It exploited, I think I’ve got this right, Dutch law, Irish law, and an offshore tax haven in the Caribbean. All those laws together created the loophole.

Humans found that. Could AIs find those? Will they find one, ten, a thousand? How many will they find? It’s a very different world. It’s really going to be weird to watch AIs start doing human cognitive tasks, because they’re going to do them differently, in some cases better, in some cases worse, but differently. A lot of our security systems are set up against humans doing cognitive tasks the way humans do them.

Now imagine a world where a thousand vulnerabilities in the US tax code are suddenly discovered. Revenues drop to zero, Congress is moribund and can’t do anything, and things fall apart. That kind of crisis could happen.

It’s interesting. I had never thought about AI being used to exploit tax law.

It’s going to be harder than exploiting computer code because there’s a lot more context, but I don’t think I’m describing stupid science fiction. If I were the president of Goldman Sachs, I would have a skunkworks in my basement doing this. It wouldn’t be the AI alone. It would be a collaborative process. A lot of the best AI work in these types of systems is human plus AI together.

It’s going to be something like this: the AI combs the planet’s tax codes, and it pops up with, “Here’s something interesting.” Then the human goes and looks at it and says, “That’s not interesting, and here’s why,” and the AI gets better. Or, “Wow, that’s a cool idea,” and the human develops it further. It’s going to be that kind of collaborative process. It’s not AI overlords suddenly inventing tax loopholes. It’s AI plus a very skilled tax attorney, in the same way that AIs assist good programmers more than average programmers. Good programmers know how to use the AI, incorporating what it does into what they are doing. It’s a better collaboration, whereas your average or poor programmer just isn’t able to ask the right questions or to use the AI’s results in the right way.
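
A minimal sketch of that triage loop, with both the AI proposer and the human reviewer stubbed out; every name here is invented for illustration, and a real loop would pair a model scanning actual tax codes with an attorney’s judgment feeding back in:

# Sketch of the collaborative human-plus-AI triage loop described above.
# propose_loophole() stands in for a hypothetical AI scanning tax codes;
# human_review() stands in for the skilled tax attorney.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    description: str
    interesting: Optional[bool] = None  # set by the human reviewer

def propose_loophole(history: list) -> Candidate:
    """Hypothetical AI proposer; a real one would learn from the
    accept/reject feedback accumulated in `history`."""
    return Candidate(f"candidate cross-jurisdiction interaction #{len(history) + 1}")

def human_review(candidate: Candidate) -> bool:
    """Stub for the attorney's judgment call."""
    return len(candidate.description) % 2 == 0  # placeholder decision

def collaborate(rounds: int) -> list:
    reviewed = []
    for _ in range(rounds):
        candidate = propose_loophole(reviewed)           # AI: "here's something interesting"
        candidate.interesting = human_review(candidate)  # human: keep it, or explain why not
        reviewed.append(candidate)                       # feedback improves the next round
    return [c for c in reviewed if c.interesting]

if __name__ == "__main__":
    for keeper in collaborate(rounds=5):
        print("worth developing:", keeper.description)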

Can’t think outside the right box.

Right, and AIs have different boxes, basically.

Not biased by emotion, in theory.

But biased by all sorts of other things, yes. That is a little bit dangerous, because being biased by emotion is very human, and our human systems are built around that. This is where you get the sociopath: human, but not biased by emotion. Sociopaths are a way to hack society. They’re interesting.

If you’re in a medium-sized family group in the East African Highlands in 100,000 BC, one or two sociopaths in your tribe is incredibly valuable. It helps you survive because you’re constantly at risk from others around you, and they will be invaluable for the defense. That is a trait you’re not going to breed out of your species.

In Cambridge, Massachusetts in 2024, it’s a very different world. Those skills aren’t as valuable. It is interesting to think about the role of the sociopath in small-group defense, though. Sorry, complete tangent. I hope it’s what you were expecting.

I’ve had these conversations about narcissists, sociopaths, and psychopaths as well in the past. Not my field of expertise.

So this is well-trodden ground for you. Excellent.

But it’s highly interesting to me how all these things intersect with one another. Do you see AI being used on the policy side, to solve policy issues?

I’m writing a book on AI and democracy, so I’m thinking about this very question. We’ve already had examples of AI writing legislation. If you think about it, AI writes text, and a law is just a piece of text that we vote to adopt. Like every text writer, AI is going to become a collaborative tool.

There’s a story of a city in Brazil where a legislator wanted legislation on water meters. He went to ChatGPT and said, “Give me this legislation.” The AI wrote the legislation, and the human submitted it to the legislature for a vote without changing it. It was debated, voted on, and passed. Then the human said, “Look, hey, the AI wrote that.” That’s not that interesting. The AI didn’t pass the law; the humans passed the law. The AI created the language. That seems perfectly reasonable.

I think we’ll see AI-assisted legislative writing, which then goes into the human process. But here’s where it’s interesting: AIs can write more complex law than humans can, especially in light of the Chevron decision. You might have AI-assisted law being more complex, more detailed, because the AI can do more of that faster.

Now, it’s going to be human-reviewed, but it’ll also be AI-reviewed. What are the loopholes? Again, finding loopholes in legislation: “For this legislation, tell me any unintended consequences.”

Unintended consequences have always been one of my gripes about legislation.

That’s right. They will exist, but I think AIs will be able to find them. They’ll also be able to insert them. This is now a boon for lobbyists as well. Again, AI increases power. Who uses the power?

I think you’re going to see AI in negotiations, again, as an assistant. I’m going to China for a trade negotiation, and I have my three human negotiating assistants and my two AI assistants. They’re going to suggest strategies and analyze what my counterparts are saying.

I will use everybody’s advice, and the AI’s just a member of the team. This seems perfectly reasonable. Of course, China’s going to try to hack my AI. We’re going to try to hack their AI, so there’s a whole level of cybersecurity on top of that.

I think these applications are coming, and they’re coming from the bottom up, not the top down. All it takes is a legislator saying, “I need some help drafting this bill,” opening up a chat window, and suddenly they have help. We don’t need to change laws for that to happen.

I can see AI helping people outside of those with huge financial incentives. If you could train AI on auto accident settlements: as a consumer, I’m in a fender bender and I think my case is worth $500, but the AI says, “Oh, no. These cases normally settle for $10,000.”

A couple of things there. There’s AI as adjudicator: can we as humans agree to it instead of binding arbitration? Let’s say we’re in some kind of contract and we’re partners. Normally, we would agree that if we have a dispute, we go to binding arbitration, which is cheaper than the legal system.

Let’s say we agree to binding arbitration by AI. There’s an AI that does dispute resolution, I’m making this stuff up, and we agree to use it. We could. And you’re right, we’ll have AIs as legal assistants: “Tell me how to best position my case. Tell me what my dispute is worth.” AI as an arbitrator helps two sides come to an agreement, or mediates a dispute between two sides. There’s research here. There are no products yet, but I truly think it’s coming.

To me, I could see some lawyer working with an AI company to help my clients get more money out of their accident settlements.

AI is already being used in the US to help screen potential jurors. Who is the juror, and what are they likely to decide? There are already AI products for that. Anytime there’s human judgment, there will be AI assistance. Some will be good, some will be poor, most will be somewhere in the middle, and they will be used by humans as another input.

How do we make sure they’re not biased towards one side or the other? Let’s say deep pockets are funding it; the implication is it’s going to be skewed in their favor.

And it’s guaranteed that it will be. One of the things I push for is public AI. I want a public AI model, a non-corporate model. You’re right, it will be skewed towards corporations. It might be biased by race, or by gender, or by ethnicity, or by any of the other biases we’re seeing, if the corporations don’t care.

The AI that advises the Republicans will have a different bias than the AI that advises the Democrats. That’s not bad. The other side of bias is values. I’m going to want an AI that reflects my values.

If I’m a bigoted legislator, I probably want an AI that reflects my values as well. We might not want to give it to them, but there will be a company that will. Just like you can’t remove bias from humans, you’re not removing bias from AIs. But I do worry about what you said: AI that serves the corporate monopolies, because right now, that’s what we’ve got, and that’s not going to be great.

How do we protect against that?

I’m a big fan of legislation and regulation. People don’t like to hear that as an answer, but that tends to be the one I give. I think the government really needs to regulate this space.

It is interesting, because when we talk about legislation and regulation, the knee-jerk reaction is to agree that the last thing we need is more legislation. But hey, regulation got us seatbelts. That saves who knows how many lives a year, and most people don’t question it now.

This is a weird belief. It’s very common in the Valley. It’s very libertarian, this idea that the government can’t help. It just makes no sense. Unfettered corporate power is terrible. There are market failures everywhere. Monopolization is rampant, and you need government; otherwise, you get a corporate dystopia. I have very little sympathy for the Silicon Valley libertarian approach. It just makes no sense.

Something we haven’t talked about that I want to bring up before we end: AI and cybersecurity. The question I’m asked often is, “Will AI help the attacker or the defender more?” We’ve talked about vulnerability finding, where it helps both, but it helps the defender more because the defender fixes the vulnerabilities. In the long term, it helps the defender.

In 2016, DARPA held an AI capture-the-flag contest, the Cyber Grand Challenge. You know capture the flag? It’s a staple at hacker conventions: human teams defend their own networks and attack other people’s, and it’s scored. DARPA ran that competition with AIs. There were regional competitions, and the finalists met at DEF CON 2016 for a 10-hour final, the AIs hacking each other’s systems. It was great. You could see a picture of all the servers on stage, doing nothing, for 10 hours.

One of them won; it was a team out of Carnegie Mellon. That was really interesting, because it’s a system where the AIs are both attacking and defending. DARPA never repeated it, which is sad. I think they should have done it every year. But China has: theirs is called the Robot Hacking Games. There were reports from the initial few, but then the Chinese military took over, and we don’t see reports anymore.

It implies that there’s some success.

They are putting a lot of work into both AI attack and AI defense. My feeling is that, near term, AI will benefit the defender more, because we’re already being attacked at computer speeds, and defending at computer speeds will be enormously powerful. The problem, of course, is going to be adoption. If there is an AI cyber defense product, getting it into the networks is always hard.

Isn’t part of it also that we have so much legacy garbage in our corporate closets? It doesn’t matter how good your defender is if you still have something that is inherently vulnerable.

I think about this in general terms: how do I secure the Internet of Things? I’m going to need some kind of overarching system monitoring my home network, watching everything that happens. That refrigerator is now sending emails; stop it. I think AI is going to be good against legacy, because we won’t be able to fix the legacy; we need to surround it somehow. So yes, we’ve got to deal with legacy, but I want some AI defender-like entity in the network, watching all the legacy stuff and seeing what’s going on.
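
As a toy sketch of that “surround the legacy” monitor, assuming invented device names and a deliberately simple baseline of which ports each device normally uses; a real product would model far richer behavior:

# Sketch of a watcher that surrounds legacy devices: learn each
# device's normal destinations, then flag anything it has never done.
# Device names and traffic records are invented for illustration.

from collections import defaultdict

class HomeNetworkWatcher:
    def __init__(self):
        self.baseline = defaultdict(set)  # device -> ports it normally uses

    def learn(self, device: str, port: int):
        """Build a per-device baseline during a trusted period."""
        self.baseline[device].add(port)

    def observe(self, device: str, port: int) -> bool:
        """Return True (alert) when a device does something it never has."""
        if port not in self.baseline[device]:
            print(f"ALERT: {device} is using port {port} for the first time")
            return True
        return False

watcher = HomeNetworkWatcher()
watcher.learn("refrigerator", 443)    # normal: firmware checks over HTTPS
watcher.observe("refrigerator", 443)  # fine, matches the baseline
watcher.observe("refrigerator", 25)   # SMTP: the fridge is sending email, alert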

“Hey, that legacy piece of equipment is doing something weird. We need to stop that.”

“It’s never done that before.” Right. “Maybe we should stop it.”

“The refrigerator is sending emails again.”

Exactly. I think there’s a lot here. There’s really great research.

You see defense benefiting from it in the short term. What do you see in the long term?

Long term, I don’t think we know. It’s an arms race. It’s going to go back and forth and back and forth. I don’t know. I don’t think anyone can predict. In the end, will AI benefit the attacker or defender more? I don’t think we have any clue.

Too far away to see things pointing one way or the other.

And too many interactions. It’s not just that it’s too far away; it’s that tiny things will happen, and the feedback loops will be significant: the second-order, third-order effects. When I talk about AI and democracy, AI helping write legislation means more complex legislation, which will probably change the nature of lobbying, and also of executive-branch rule-making. Those are the effects that matter, and they’re very hard to predict because they’re social, not technical.

Now I’m thinking about AI hacking. Just in my own experience running my websites, 90% of the stuff hitting the edge is not human. Once you start throwing in layers and layers of AI attackers, is our bandwidth just going to be 99.9% noise?

We don’t know. It depends on how good the discrimination, the AI detection, is going to be. I think we do have a problem of AI flooding human channels. We have managed computer networks where the human signals are very small, and it’s been OK. You and I are talking, so it would be really hard for an AI to intercept this.

There are examples of AIs fooling people on Zoom, but in person it’s very hard. If we were talking by text, there would be no human signals; we’d just be taking it for granted that we’re human. I think that’s going to be a problem. There’s going to be a need for proof of humanity.

A company like Facebook or Twitter is not going to care, because it’s more profitable not to, but others will. My hope is that Facebook dies a fiery death because they won’t do it. I think this changes things: the rise of AI conversants changes what we think, and changes what we think others think.

Over the past decade, lots of people mistook what people talk about on Twitter for what people talk about in general. Twitter is not the population. It’s not representative. What AIs on Twitter talk about is totally not representative of what people talk about. We have to be able to make that discrimination.

When do you see the rise of a social media company, or an existing social media company, having AI so sophisticated that it can assure you, “Hey, 99.9% of the interactions you have through our platform are with a human”?

That’s going to be hard. There are two problems to solve. One is the AI bots, and maybe I can do that with proof of humanity. The other is the human accounts where the human has let an AI log in for them. That’s very different, because my proof of humanity will say, “There’s a human behind this account,” while the human is just turning the keyboard over to an AI. That’s a lot harder.

Proving the human created the account, we could do. It’s annoying, but we do it for lots of documents. Humans get driver’s licenses; they have to show up in person at a place and get their photo taken. We have complex proof-of-humanity systems. We could build that.

I’m going to make this up. Here’s a social network, and in order to join, you have to show up at a FedEx, or somewhere else with locations all over the country, and sign up in person. If you can’t do that, we have other backups. We can make this up, but it’s not going to solve the AI standing in for the real human.
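
A minimal sketch separating those two layers, with all names invented: the in-person signup is a one-time attestation that is easy to record, while the per-post check uses a deliberately weak rate heuristic, because deciding whether a human authored this particular action is exactly the unsolved part:

# Sketch of the two layers: proof a human created the account
# (one-time attestation) versus proof a human is behind a given
# post (per-action check, the hard part). All names are invented.

import time
from dataclasses import dataclass, field

@dataclass
class Account:
    holder: str
    created_in_person: bool                 # layer 1: account creation
    post_times: list = field(default_factory=list)

def may_post(account: Account, now: float) -> bool:
    """Layer 2: account usage. A posting-rate check is a weak proxy;
    it cannot tell a human from an AI the human handed the keyboard to."""
    recent = [t for t in account.post_times if now - t < 60]
    return account.created_in_person and len(recent) < 5

alice = Account(holder="alice", created_in_person=True)  # signed up at the counter
if may_post(alice, time.time()):
    alice.post_times.append(time.time())
    print("post accepted")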

Yeah. Two layers to the problem: account creation and account activation.

There’s account creation, and then there’s post creation or the sentence creation.

Utilization.

Yeah, utilization.

You and I are going to create this platform together now. As we wrap up here, what can people do to help move this along?

It’s hard. People ask me that regularly. This has to become a political issue. I need to see floor debates about this, not the kind of stuff we see now. We’re in the aftermath of the CrowdStrike disaster. I guarantee you, George Kurtz is going to be hauled in front of Congress. There will be a hearing, he’s going to be yelled at, he’s going to be contrite, and nothing will happen. We have to get beyond “nothing will happen.” This needs to be something that policymakers care about in a deep way. I don’t know how to do that.

President Biden’s executive order on AI was really good, but it has the limits of an executive order. I need something that has legislative oomph. The CSRB, the Cyber Safety Review Board, which reviews cyber incidents, is a great organization. They put out decent reports, but they have no subpoena power because they weren’t established by an act of Congress. Fix that. I need legislators to get involved, not just the executive.

It has to be beyond the, “I’m here to get a soundbite to get people to donate to my campaign.”

If it’s in the US, it just won’t happen. We just don’t have a functioning legislature in the US. We just don’t. Maybe we can fix that, but it’s not getting better. In a lot of ways, I look to Europe. Europe is the regulatory superpower on the planet. These companies are all international, all global. Maybe we abandon the US and look to Europe to solve these problems.

I know a number of companies that run their business under the most strict jurisdiction in which they operate.

Right. I worked for IBM when GDPR came out. IBM said, and this was fascinating to me, “We’re going to implement GDPR worldwide, because that is easier than figuring out who is European.” A good regulation in a large enough market moves the planet. Yes.

I know someone who works in consumable products. The company she works for takes the most strict FDA equivalent of all the countries they work with: “If we comply with this one, then we’re good everywhere.” It just simplifies everything they do. They don’t have to make six different formulations for six different countries, or worry about someone shipping it from one country to another. Just make it according to the most strict standard.

I’m in favor of that.

OK, that’s it. We’ve decided that the most strict laws win.

As they should. I don’t want the weakest laws against murder to win. I want the strongest laws against murder to win.

That’s true. We just don’t want our laws to be crazy.

I think the craziness comes from lobbying. That’s where you get all the weirdness. Everyone’s trying to tweak it in their favor, and we’re just too good at it now. That kind of thing worked better a couple of hundred years ago, when things were simpler. This gets back to democracy. I think the complexity of our system is such that it’s no longer functioning.

Time for simplification?

I would like to see some of that, but is it possible? Is society too complex? The real question is, what system can handle the complexity of current society? It’s unlikely to be a political system invented in the mid-1700s. That’s not a good way to bet.

We’ll end with this weird tangent.

I’m up for weird tangents.

There was a book I read, probably 30 years ago. Maybe you’ll be able to come up with the title, which I can’t, for the life of me, remember. The universal government got so fast at proposing and passing laws that it created a bureau of sabotage to slow the government down, so that it wouldn’t make decisions so quickly.

This is from a science fiction novel?

It was a science fiction novel.

I don’t remember it. I think science fiction, though, is a really interesting way to explore a lot of these futures. I do a workshop on reimagining democracy, and I bring in science fiction writers who have written about new political systems. They are always fascinating to have.

Science fiction writers bring so much value to us, whether it’s concepts for new technology or the impulse of, “I really want this, so let me figure out how to do it, because I read about it or saw it on Star Trek, and I want to make it a reality.”

Heinlein wrote about it. Right, I get it.

Bruce, thank you so much for coming on the podcast today. If people want to find you, where can they find you?

I am at schneier.com. I don’t do any social media, which makes me a freak, yet wholly productive. There’s a Facebook account that mirrors my blog, and a Twitter account that mirrors my blog. Basically, schneier.com: all my books, all my essays, everything I write is there. You can subscribe to an email newsletter. You can go there, but I am very much not a social media person. I have never TikTok’d, or whatever the hell they call it these days.

That’s why I couldn’t find you on LinkedIn.

I am not on LinkedIn.

Bruce, thanks so much for coming on the podcast today.

Thank you.
