AI: Doomsday vs. A Very Bad Day with Dr. Robert Blumofe

“It’s remarkable what has happened over the last decade since the dawn of deep learning.” - Bobby Blumofe

After a data breach, many criminals are beginning to use deep learning AI to categorize the information they have stolen. They are launching a steady stream of micro-attacks on individuals and businesses rather than just full-scale assaults.

Today’s guest is Robert Blumofe. Bobby joined Akamai in 1999 to lead the company’s first performance team. While serving as one of Akamai’s chief architects, he was instrumental in the design and development of their intelligent edge platform, which now handles trillions of internet requests daily. Bobby’s technical background fuels his passion for machine learning and AI, and he holds a PhD in Computer Science from MIT.

“AI has become a big part of everything we do.” - Bobby Blumofe

Show Notes:

“Generative AI gives criminals the ability to create fake content, to mimic other people, to create misinformation, to launch spear phishing, to launch social engineering attacks… and do it at scale.” - Bobby Blumofe

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Bobby, thank you so much for coming on the podcast today.

Thanks, Chris. Thanks for having me.

Can you give me and the audience a little bit of background about who you are and what you do?

Goodness. I'm currently the chief technology officer at Akamai Technologies. It's a company I've been with for 25 years now, or almost 25 years. That'll be this summer. For those of you who aren't familiar with Akamai, we're the company that I think I can safely claim invented what's now called the content delivery network, which is really a fundamental part of how the Internet, or how the World Wide Web, operates today. It's been a critical part of making the web scale and work the way it does today.

We still have that business. We also have a cyber security business, and we now also have a cloud computing business. Those are the three big sections of Akamai's business. I'm involved in all three, of course, as the company's CTO. As I said, it's a company I've been with nearly since the get-go. The company was founded in the summer of 1998. I joined in the summer of 1999. Yeah, that's me.

Was there something specific about Akamai that you were like, “This is where I want to start”?

The founder of Akamai, Tom Leighton, who's now our CEO and my direct boss, I've known him for a long time because he was a professor at MIT when I was a graduate student. He wasn't actually my PhD advisor, but he was a reader on my thesis, and I took his classes. I TA'd for him. I've known Tom for a long time.

I've told this story a number of times; there's a mini lesson in it. When I joined the company, at the time, I actually didn't know exactly what the company did. What I knew about the company is that Tom was the founder. I knew that one of my best friends from MIT was the head of engineering. I knew half a dozen or more other former colleagues of mine from MIT who were there.

I knew that this is a group of people that I really enjoyed being with. They're motivated, they're exciting, they're smart, and they're funny. They're people that I really always enjoyed being around. I always had a good time with these people. I thought, “Well, just stay with this group of people. Be with Tom, be with Bruce. Be with this group of people and something good will happen.”

It doesn't matter what the company does. What matters is who the people are or who the company is. I've oftentimes said, therefore, that the who is more important than the what. This is a true story that my reason was simply because of the who, not the what. I didn't even know the what.

I absolutely agree that for the companies that I've worked for, it's been more about the people than it has been about the product.

Exactly. Just surround yourself with great people, people you enjoy working with, people who make you laugh. I always add that—people who make you laugh. Something good will come of it, and it certainly worked out for me very well at Akamai.

With Akamai, starting with the CDN, at what point did you guys move into providing and doing cyber security stuff?

Really, as a product line, about 10-ish years ago; I don't have the exact date. We were doing some sort of bespoke security work before that. In fact, it was really a customer ask that got us into that business. We had customers who recognized that we were delivering their content. We could see all the traffic that was going to and from their website. As attacks were starting to proliferate, whether they were attacks trying to exploit app vulnerabilities or simply DDoS (Distributed Denial of Service) attacks, these were the kinds of attacks that we could not only see but potentially block.

We didn't have a business doing that, but customers recognized that this is something that we could maybe do for them. We started doing that for a handful of customers in a bespoke fashion. We really launched these capabilities as a business about 10 years ago, starting with DDoS protection and web app firewall, since then moving into bot management. That's now also a very big line of business for us in cybersecurity.

Most recently is what's called zero trust security, which includes zero trust access and micro-segmentation. Those are big, growing parts of the business for us. By the way, just to fill in the story, the most recent thing we've done is an acquisition that's putting us into the space of API security, which we're very excited about. APIs are now such a large part of any company's attack surface. Protecting APIs is paramount, so we now have an exciting business there.

I remember doing an interview with someone else talking about medical APIs and how scarily vulnerable many of them were to being poked and prodded to reveal stuff that they shouldn't.

Exactly. The interesting thing about APIs is you're exposing business logic. You're not just exposing the consumer interface, you're really into the business logic, and there's so much complexity there. It wasn't that long ago that mostly APIs were hidden beneath a veneer and therefore not visible to the outside. Now, of course, APIs are, by and large, exposed because that's how your apps get access to the business logic, and that's how your partners get access and things like that. The world has changed when it comes to APIs in a fairly short span of time. Therefore, API security is becoming a must-have.

With AI coming on the scene, how has that impacted the cybersecurity business?

It's remarkable what has happened with AI over the last decade or so since the dawn of deep learning, which I'd put around the early 2010s, maybe 2012 or 2013. We've been using deep learning really since about then, because it's a perfect tool for classifying things, for example, traffic.

We started using deep learning, as I said, about 10 years ago in our security products as a way to classify traffic in terms of what's normal versus what's abnormal, what's benign versus what's malicious, what's coming from humans versus what's coming from bots, and across all these different dimensions. We've had some real depth in deep learning as a tool to solve problems for our security products.
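
As a rough illustration of that kind of traffic classification, here is a minimal sketch of a small deep-learning model that labels request streams as human versus bot. The features, toy data, and tiny network are illustrative assumptions, not Akamai's actual models.

```python
# Minimal sketch: a small deep-learning classifier for traffic (human vs. bot).
# Features, data, and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

# Hypothetical per-stream features: requests/minute, avg. inter-request gap (s),
# fraction of 4xx responses, header-anomaly score.
X = torch.tensor([
    [3.0, 18.0, 0.02, 0.1],    # slow, tidy traffic -> likely human
    [400.0, 0.1, 0.45, 0.9],   # fast, error-heavy traffic -> likely bot
], dtype=torch.float32)
y = torch.tensor([[0.0], [1.0]])  # 0 = human, 1 = bot

model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 1),             # logit for "bot"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):              # tiny training loop on the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(torch.sigmoid(model(X)))    # probability that each stream is a bot
```

The same pattern extends to the other dimensions mentioned here (normal versus abnormal, benign versus malicious), just with different labels and far richer features and data.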

Now, of course, more recently, we're doing more with generative AI, using large language models to do things like make it easier to interact with, provision, and set up security products; get insights into the threat landscape and what's happening; and label assets in your environment. A number of things have now opened up through the progress that's been made with generative AI.

We've been major users of AI. We continue to be major users of AI. Now, of course, as we get into cloud computing, AI becomes a workload that we can support on our platform, both training and inference. We can deliver AI workloads. We can protect AI workloads. AI has arguably become a big part of almost everything that we do.

I think more and more individuals and companies are starting to poke at AI, test it, and figure out, “How can I leverage its ability to reduce my overhead?”

Exactly. This is another area. It's just so surprising to me how quickly it's gone from being something that, by and large, didn't affect most enterprises. Obviously, if you were in robotics, it mattered. In certain domains, AI really mattered. But for most enterprises, AI was not an important technology. That changed about 10 years ago with deep learning, and we've now moved into a phase, really only about 10 years in, where AI is a must-have technology in almost every enterprise.

Was some of your ability to do the deep learning on traffic a direct result of not necessarily who your customers were, but because you could just see so much internet traffic, and you're not just seeing what's happening with one customer, you can see hundreds and hundreds of customers?

I think one of the learnings from AI, particularly deep learning, is that the quantity and quality of data is paramount. I've argued that the value in AI really accrues to the data, because the models are just readily available. There are libraries that you can download. The models are readily available, well published, and well understood.

There's really very little differentiation between different enterprises or even providers now based on the model itself. I would make the same argument around the computation resources. We all know that AI takes a large amount of compute, both for training and for inference.

Compute resources are available. Temporarily, you could argue that there's a shortage of high-end GPUs, but that's a temporary thing. Compute is readily available. What really differentiates one company versus another, I think, is the quantity and quality of the data that they have access to.

When it comes to traffic data, we have that. We see an enormous amount of the Internet on a daily basis. That data can be used then to train models that have a pretty high fidelity in their ability to distinguish between what's good, what's bad, what's benign, what's malicious, what's normal, what's abnormal, what's human, what’s bot, and so on.

That's amazing. Are you also starting to see on the cybersecurity side, adversarial AI, that the criminals are now leveraging AI?

Absolutely. I think we're at the very early stages of that. I think we're moving into a new era of cyber criminality with AI. Arguably the last five years or so were really characterized by the rise of the cyber criminal who's in it for the money.

It wasn't that long ago that, outside of nation-states, the biggest threat actors we thought about were hacktivists: organizations that were out there really to make a point, to poke people in the eye, and maybe make a political point or something like that.

That was really the dominant threat actor that we had out there. It's really, I would argue, only around the last five years or so that we've seen the rise of the cyber criminal who's money-motivated. That creates a whole new level of competence, capability, organization, and virulence in these attacks.

Ransomware, for example, is a way to turn cyber activity into money. That's why we saw the rise of ransomware and related types of attacks, really any attack that can be translated into money. That's been the last five years.

The next five years and maybe more, I think, are going to be characterized by those same cyber criminals adopting AI. My view is that AI is essentially a super weapon if you're a criminal. I sometimes joke that if you're a cyber criminal, the greatest day in your professional life was November 30th of 2022, because that was the day that ChatGPT was announced.

While ChatGPT may not be your weapon of choice, that might have been the day, or shortly thereafter, that you learned there's this whole new class of tools that can do pretty amazing things for you if you're a cyber criminal.

I have some thoughts in my head. Why is it such a super tool for the criminals?

This is the ability to create fake content, to mimic other people, to create misinformation, to launch spear phishing attacks, social engineering attacks, and do it at scale. These types of attacks, for example, spear phishing, have always been very effective, but historically, they required a lot of research. You've got to do a lot of work to craft custom lures that appropriately target the person that you're after and things like that. Now, you're in a world where these things can be created en masse.

It's no longer work hard and create one mega attack and hope that that one works. It’s now create millions and millions of micro-attacks. Any one of those attacks, if it fails, it fails. If it produces a small result, it produces a small result. Who cares? You've got millions of them going out because everything is so automated. That's a pretty scary world. The fact that misinformation, targeted attacks, social engineering attacks, can be created at such levels of virulence and scale.

Are there any specific examples of that that you have seen happen?

We have seen the headlines. I think almost everybody has seen a handful of headlines where we've seen deep fakes, whether it's in the political realm or in the business realm, where people have fallen for what sounds or looks like somebody they know asking them to do something. It sounds like my boss, maybe it's even a video call and it looks like my boss, but it's not, and you end up doing something that you really shouldn't have done.

We've seen those headlines. I think it's pretty safe to assume that there's a lot more of that going on than we've read about. Again, it can happen at such a micro level. So much of this can be going on, and it doesn't have to be the high-profile attacks. It can be millions and millions of very small attacks. All it takes is for a handful of them to be successful. If you're a criminal, then that’s success. I think we're only just seeing the beginning of that. We'll see more headlines, but you have to remember that under the headlines, there's a lot more activity going on that you've probably never read about.

What would be an example of the small activity that is not making the news?

Again, it would be fooling somebody into doing something through deep fakery, through social engineering, or something like that. I think there's also, moving from the enterprise to the social level, the misinformation that's out there. It's just so easy to create just massive amounts of misinformation.

Again, any one misinformation campaign might completely fail, might have almost no impact. But if you do enough of these things, each one adds a small amount of impact, and the total impact can be dramatic. It's not necessary for these attackers to launch high-profile attacks that might surface onto the radar and get them detected. Rather, they can stay below the radar with millions of, if you will, micro-attacks.

Let's say on the misinformation space, you could have thousands or tens of thousands of accounts doing small posts with disinformation, and it's not 40 guys in a room pounding away on their keyboards. It's, “Hey AI, generate me 10,000 posts about this misinformation topic.”

Exactly. You can create these things en masse. It's fully automated, and they can be actually targeted. As you've got your list of where you want to go, these misinformation campaigns can be targeted for the recipients to address issues that they're most likely to be receptive to and things like that. It's pernicious and quite worrisome.

How do you protect against that?

On a social level, I think it's hard. I believe that the companies that are selling the technology and the social media companies, for the most part, want to do the right thing. They don't want their platforms being used for misinformation, for attacks, or things like that.

But when you're in a world where, again, we have this scale issue, where there are too many moles to whack, that even as much as these companies want to do the right thing, there's only so much power that they have in the equation when faced with attackers that have access to these tools and can do the kinds of things that they're doing.

I'm not sure there's much that can be done, as good as the intentions are coming from the vendors or the platforms. I don't know that there's all that much that they can do, but they'll try their best. Again, I give them a lot of credit. They want to do the right thing there.

I think it becomes incumbent, then, upon all of us as consumers and as people who read, watch, and listen, to be aware that you can't be sure where this stuff is coming from. You have to take some care, be on guard, and try to understand, “Where is this information coming from? What's the provenance of this information? Do I have any ability to know whether or not this information is coming from a reputable source?” Or if it's a phone call, a video call, “Is there anything that created some form of authentication on the other end? Do I know that this voice that sounds like my son really is my son?”

I think there's a lot of education that can make a big difference. It is an area that my wife talks a lot about. She's on a mission on AI literacy. She talks about this notion that we've moved from a generation of digital natives to a generation now of AI natives. That means there needs to be a level of AI literacy that is broad. People need to have some idea of what AI is, what it's capable of, what it's not capable of, and therefore be informed consumers.

A little bit of a tangent. I remember hearing discussions with generative AI. A number of college campuses and educational organizations have said, “Absolutely, no. We won't talk about it, we won't tell people about it, and we'll tell people under no circumstances are you allowed to use it for your schooling.”

The opposite side of that is the reality of, “Even if we don't tell people about it, they're still going to do it. Maybe we should be teaching them how to use it appropriately and the pitfalls of it.”

Exactly. That's exactly the point with AI literacy. By the way, when I'm giving talks, I oftentimes start with a quote from Arthur C. Clarke, which I learned about through Rod Brooks, a famous artificial intelligence pioneer. Arthur C. Clarke says that any sufficiently advanced technology is indistinguishable from magic, and that's a problem. When we have a technology that is affecting all of our lives and nobody can distinguish it from magic, you have a problem.

You have to go beneath that layer, go behind the magic, reveal the secrets, if you will, of how this stuff actually works so that you have informed consumers. It's not put up a wall and say, “You cannot use this.” I really think it's the opposite. We have to embrace the technology. It has so many good uses. There's so much positive that can come out of it. We want to encourage people to use it, adopt it, and love it, but do it safely. To me, that's really what AI literacy is all about and where we need to be focusing our attention.

I think part of that is understanding its weaknesses. You were talking earlier about the training data. Ingesting the entire internet gives you a lot of garbage training data, in my mind, because there's a lot of garbage out on the Internet. Someone might say, “Hey, this is my website. I'm going to publish a fan fiction, so to speak, about political events.” If the trainer doesn't know that it's fan fiction, it starts to become problematic, because the model's attitude is, “Well, I saw this on the Internet, it must be true.”

I think that's exactly right. As a user of the technology, it's incumbent on you, then, to have a level of understanding and not use it as a magic box, because that is going to get us into a lot of trouble.

We're in an interesting phase with this stuff. Again, we're in a phase with AI, where it is affecting everybody's lives in dramatic ways, and yet it is magic. That's a problem that we've got to solve for. We're also at a phase now where you have this marvelous technology that also is being misused, overhyped. I think there's a swing of the pendulum that's gone a bit too far.

Personally, I have to say I struggle with this because I love AI. LLMs are absolutely remarkable. It's a remarkable invention that we have with LLMs and other forms of generative AI, especially as a technologist. I want to love this stuff, but boy, they make it hard. They make it so hard because there are just so many examples of where this stuff is being misused, of where it's being overhyped, and in many cases, by the way, where it's the wrong tool. People are actually almost applying this tool blindly, thinking that it can solve every problem. That's not the way to think about it.

Anyway, I love the technology. I want to love LLMs, but boy, they're making it hard right now. I'm hopeful that maybe over the next several months or years, the pendulum will swing back to a reasonable place, and we will have LLMs that are being used in the way they should be used. We'll have the appropriate guardrails and protections to get rid of the bad usages. Once the pendulum is back in the middle where it belongs, I can maybe unabashedly get back to loving LLMs, but right now they sure are making it hard.

I remember this from probably six months or so ago. There was a lawyer who wanted a particular outcome for his client. He could not think of a legal precedent for trying to get that outcome.

This lawyer went to one of the generative AI platforms and said, “This is the result I want for my client. What would be the citations, the sources, and the logic that I can use?” It generated what looked like, “Here's the case law, here are the citations.” He brought it into court and used it, and the attorney on the other side said, “Hey, I can't find this citation. In fact, I can't even find this case.” The AI couldn't find what it wanted, so it generated a result that looked like a good result.

You can use the technology, but check the output. I think that there needs to be a human in the middle. When you're interacting with one of these models and it gives you some interesting information, something you can use, check it.

We all know the term hallucination now. I've taken to calling it LLM-splaining, because they oftentimes give you misinformation with tremendous confidence and authority, oftentimes backed up by support information that may also be wrong or incorrect in its logic. You have this interesting dynamic of misinformation being told with great authority. I call that LLM-splaining.

I've always referred to it this way: generative AI is confident, and it's confidently wrong.

Oftentimes, yeah. It's often right, so you don't want to over-rotate and say, well, these things just lie. Use them, but you have to check the answer. Use them in the right context for the right problems. This isn't the tool for every problem.

Let's talk about that because you talked earlier about people trying to overuse AI. What are some examples of people trying to use AI when this is clearly not the application for it?

The meta point I oftentimes make is that maybe AI is the right solution, but maybe some good, old-fashioned deep learning will do. What would today be called a small deep-learning model probably would have been called a big deep-learning model before generative AI. A simple deep-learning model might solve the problem, and you don't necessarily need a massive large language model.

The example that I oftentimes give here is, you've got to remember that these foundational models, these large language models, were trained on a massive amount of data. Some of that is to give the model the basis to be able to form structure and conversation, to be able to form English sentences, paragraphs, documents, or conversations that actually make sense. It's not gibberish; it actually has reasonable structure. They're using words in the right places and things like that. That's part of it.

Another part of it is just a massive amount of just “knowledge.” You have to put the word knowledge in quotes, because it's not really knowledge like it is for a human. It's more connections between words, but I'll use the word anyway. I'll call it knowledge.

In some sense, because of all that training, LLMs have knowledge of every movie that's ever been made, everybody who ever starred in every movie that's ever been made, every television show that's ever been made, every point in history, every president, general, actor, every car that's ever been manufactured. All of that is in the model. That's part of why the models are so big, because they're literally encoding every bit of “knowledge” that's out on the Internet.

Think about a particular enterprise. All of that is very useful, by the way, if you want a general answerer, something that can answer general questions the way we use Google. But if you're trying to solve a particular problem in a particular domain, maybe you're in an insurance company trying to figure out information about actuarial data or understand how to deal with a claim, do you really need an LLM that knows the entire cast of Mork & Mindy to solve that problem?

I've been talking about it as using megawatts to solve a problem that can be solved for milliwatts. That's a problem too. It's not only going to cost you a lot, but we're going to throw tremendous amounts of energy at solving problems that can be solved with much, much less energy with a model that's probably many orders of magnitude smaller than the model that's being used.

Anyway, it becomes the pile driver that can hammer anything. Oftentimes, all you really have is just a little tiny nail, and you could do it with a little tiny hammer. Instead, you're using this massive pile driver. It's a massive waste of cost and a massive waste of energy. There will be a backlash there. Today, I do see examples of people basically solving simple problems with a model that knows the entire cast of The Godfather.
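
To put a rough number on the “megawatts versus milliwatts” point, here is a back-of-envelope sketch. Every figure in it is an illustrative assumption, not a measurement of any particular model or deployment.

```python
# Back-of-envelope sketch of "megawatts vs. milliwatts."
# All figures are illustrative assumptions, not measurements.

# Hypothetical large-LLM inference: a ~700 W accelerator busy for ~2 s per answer.
llm_energy_joules = 700 * 2.0              # ~1,400 J per query

# Hypothetical small domain model: a ~20 W CPU core busy for ~5 ms per answer.
small_model_energy_joules = 20 * 0.005     # ~0.1 J per query

print(f"LLM:         ~{llm_energy_joules:,.0f} J per query")
print(f"Small model: ~{small_model_energy_joules:.2f} J per query")
print(f"Ratio:       ~{llm_energy_joules / small_model_energy_joules:,.0f}x")
```

Even with these made-up numbers, the gap is several orders of magnitude per query, which is the shape of the cost and energy argument being made here.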

I also remember another story where someone went to an airline and was chatting with its AI. A family member had passed away, and he asked the AI what the bereavement policy was.

I've heard this story.

“Hey, here's the bereavement policy, book the flight. Afterwards, give us this information, and we'll reimburse you.” The person goes out and does this. After the flight, they go to the airline and say, “I'd like my reimbursement.” The company goes, “We don't do that.” “But your chatbot said you would.” “Well, it lied to you. We're not responsible for the fact that it lied to you.” “If it was an employee that lied to me, you'd be responsible. Why not the AI?”

Exactly. I hope that's a cautionary tale for companies that are putting up chatbot interfaces. An LLM isn't always the best chatbot interface. We see pretty amazing things. Some of the recent demos that we saw come out of OpenAI and Google, with these conversational interfaces that can use text, voice, video, images, and things like that, are remarkable. It's unbelievable how natural the conversation is.

There's also a danger there, because there's a tendency to ascribe broad intelligence to anything that can carry on a conversation. Just because the entity can carry on a conversation, doesn't mean that it has broad knowledge or authoritative knowledge about things that you might think it has authoritative knowledge about. This is a big challenge.

It's such an interesting topic. Swinging back to cybersecurity, if AI is a super weapon for the villains, can it be a super weapon on the defensive side as well?

It's an important weapon on the defensive side. I do think there's an asymmetry, though. At least as things stand today, the technology favors the attacker. AI is, no question, an important part of any defense. I don't think it's a silver bullet, though. I wouldn't be surprised if we see AI snake oil out there.

Again, leveraging the fact that to most people AI is magic, vendors can make magical claims about a wonderful new AI product that's going to solve all of your problems, whether it's in the domain of cybersecurity or something else. We have to be wary of overclaims that AI is the magic bullet that's going to solve all your cybersecurity problems. That being said, I do think AI is a critical ingredient in any reasonable cyber defense.

Ultimately, I do think when it comes to defense against these kinds of attacks, I oftentimes think of it as back to the basics. The important thing is to do the basics really, really well—strong identity authentication, multi-factor. Also, I think zero trust is an important part of this because zero trust in many ways is about visibility.

I oftentimes think of cybersecurity as a game of visibility. Cybersecurity is all about, what can you do to increase your visibility while denying them visibility? Take visibility away from the attackers while giving more visibility to you. That's the name of the game, and that's what zero trust in many ways is all about. Micro-segmentation, zero trust access, these are technologies that give you visibility while denying visibility to the attackers.

The attackers might have some success with some phishing attacks through AI and things like that, which gives them a beachhead on some part of your infrastructure. But if they can't propagate within your environment, then they can't really do any harm. Preventing that spread is really what zero trust is all about. You take away the visibility, they can't spread. They can't attack what they can't see. Anyway, I think we want to be wary of the snake oil claims and the magic solutions and focus on using AI and other technologies to do the basics really, really well.
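
As a minimal sketch of the default-deny idea behind the micro-segmentation discussed here: nothing talks to anything unless an explicit policy allows it. The service names and allow list are invented for illustration; real zero trust products express policy far more richly.

```python
# Minimal sketch of default-deny micro-segmentation: traffic between services
# is blocked unless an explicit policy allows it. Names are illustrative only.
ALLOWED_FLOWS = {
    ("web-frontend", "orders-api"),
    ("orders-api", "orders-db"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Deny by default; permit only explicitly allowed service-to-service flows."""
    return (src, dst) in ALLOWED_FLOWS

print(is_allowed("web-frontend", "orders-api"))  # True: an allowed path
print(is_allowed("web-frontend", "orders-db"))   # False: a compromised frontend
                                                 # cannot reach the database directly
```

The point of the sketch is the lateral-movement story above: a foothold on one service does not automatically grant reachability to anything else.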

Got you. I know there are well-known people, and I won't name names because it's irrelevant, who say AI is going to be Skynet; let's call it that. Do you see that as a potential doomsday scenario or reality? Or is that still a little too sci-fi?

I think it's way too sci-fi. For one thing, in many ways, it's giving maybe too much credit to LLMs and other forms of generative AI. If you think about one of those scenarios, if you watch the Terminator movie and think about what the AI is doing in that movie, it actually required a fair amount of planning. One of the things that LLMs are actually notoriously bad at is planning.

By the way, they can oftentimes give you answers that appear as planning, just like they can oftentimes give you the answers to math questions. They can't do math, but they can oftentimes give you a correct answer to a math question. Different thing.

When it comes to planning, it's the same thing. They can give you an answer that appears as if there was planning, but they can't actually do robust planning. I don't see how an AI is going to perpetrate a Terminator scenario without robust planning. I think we're pretty far away from that.

What I worry more about is that there's been so much attention given to that doomsday scenario that it's starving the attention that needs to go to the very bad day scenario. The doomsday I worry about is the very bad day that we're living almost every day with these micro-attacks that are happening all the time, attacks that we're going to see more and more of, more virulent and more scalable through AI. I'm not worried about doomsday, but I'm worried that the attention on doomsday is taking the oxygen out of the environment that we need to focus on these other types of attacks.

I don't think we specifically talked about it. How do you deal with a very bad day? If someone's got a great platform, and they can deep fake my wife with a video that looks like her, a voice that sounds like her, some of my life is public, so there's probably enough information to carry on a conversation. The more public you are, the more vulnerable you are to something like that. How do we manage that then?

I think there are a lot of basic things that you can do there. One is to be aware: is this a communication channel that carries some form of authentication? Phone calls, video calls, and things like that are pretty spoofable, so you can't be sure on those.

Then there's the question of, “Well, what's being asked?” If this is just a benign conversation, you're not going to jump through hoops to try and verify anything. But if it's your son claiming he's in jail and needs $10,000 wired to account XYZ, that's a different story. Now you want to raise the level of what you're going to do.

There are a number of simple things you can do. One would be to call back on their phone number to make sure that's the right cell phone. That's one level. Another thing that people oftentimes do is ask secret questions. Yes, there's a lot of publicly available information out there, but there are any number of things you could ask your son that probably aren't available out there, and a right answer would be a pretty good indication that you're dealing with the right person.

There are simple things that you can agree upon. There are out of band ways to verify that you're dealing with the person that you think you're dealing with. Again, you don't want to burden yourself with these kinds of verifications all the time, but you want to raise the bar if it's a request that could be damaging.

My wife and I have had that conversation. We have two phrases that we would use. One is an “Are you really who you're claiming to be?” verification phrase, and then a distress phrase as well: it's verification, but I really am in trouble.

Yeah, I think that's very useful stuff. For enterprises, of course, you need to make sure that you've got the appropriate processes in place to make sure that a phone call isn't going to trigger a funds transfer, trigger a configuration change, or something that could be damaging. You want to make sure that the only way such changes can be triggered is through some mechanism that's strongly authenticated.

Most good enterprises have the appropriate processes in place. They won't issue a payment without the appropriate PO that's been put in the system and that's been checked off by a second person. There are all kinds of checks and balances. You want to make sure you've got all those things in all the critical areas, not just in your accounting or payments function, but in anything that could be critical. I think that's one of the important lessons.
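
As a minimal sketch of that kind of check and balance, here is a hypothetical two-person rule on payments; the data model and names are illustrative, not any particular company's process.

```python
# Minimal sketch of a two-person rule: a payment only goes out if a PO exists
# and a second person, not the requester, has approved it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PurchaseOrder:
    po_id: str
    amount: float
    requested_by: str
    approved_by: Optional[str] = None  # must be a different person

def can_pay(po: PurchaseOrder) -> bool:
    """Require an approval, and require the approver to be a second person."""
    return po.approved_by is not None and po.approved_by != po.requested_by

print(can_pay(PurchaseOrder("PO-1", 9500.0, "alice")))           # False: no approval
print(can_pay(PurchaseOrder("PO-2", 9500.0, "alice", "alice")))  # False: self-approved
print(can_pay(PurchaseOrder("PO-3", 9500.0, "alice", "bob")))    # True: two-person rule met
```

The design point is the same one made in the conversation: a single phone call, however convincing, cannot by itself trigger a damaging action.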

I think the other one is in the training. Take phishing, for example. Most enterprises do phishing training, and the emphasis has been on, “How do you detect fakes? How do you detect phishing lures?” That's worth doing, but I think we have to move from that to also educating people that you can't always tell. Even if those telltale signs aren't there, even if it's perfect, it may still be fake.

Part of the education has to be that you can't always tell what's a fake. Yeah, fine. Learn how to detect the obvious fakes, fine, but don't rely on that. A key lesson of AI literacy is that you can't always tell.

It's interesting because so much of business, at least on the sales and marketing side, is about reducing friction. “How do I make this process as seamless, easy, and the money's just going to slide right into our bank account type of process?”

We're almost talking about the realization that we need to intentionally bring friction into certain experiences in our life to slow things down. I know it's really annoying that when you want me to add a new vendor to the system, you've got to come over and get a wet signature, but that's just the friction that we've decided on so we don't get scammed.

That's right. Friction is a very good thing. You couldn't walk without friction. We all need friction in certain areas. You don't want to put friction where it's not needed. For the benign requests that maybe dominate most of your day, keep the friction low. But when it's a request that involves meaningful amounts of money, significant configuration changes, or information that might be sensitive, all right, well, now it's time for some real friction. It's necessary.

When it comes from the business environment, this is something that really needs to be led from the top, that the CEO really needs to say, “Hey, if we don't have the right checks and balances in place, we need to put the checks and balances in place. We need to have some friction in place, and we need to empower the employees to question stuff.”

So much of it is, “Hey, just do what I tell you to do.” Now we're telling people, “Don't just do what I say. You need to appropriately push back if something seems out of band or outside the normal channel.”

I think at the very top, leadership needs to recognize these things. It's not just the CEO; I would take it all the way up to the board. The board, CEO, and all leadership need to understand the importance of these things, prioritize the right efforts, and make sure that we're not being fooled by the snake oil claims.

I know someone who recently got an email from their boss that was, “Hey, I would like you to go out and buy gift cards for everybody on the team.” They're like, “Aha. This is clearly a scam.” They just totally ignored the request. It turned out that their boss legitimately did want them to buy gift cards for the team.

That's great.

It was, “OK, you got the right idea, saying, ‘Hey, this seems fishy; this seems not the right thing.’” But they didn't go back and check through a different means of communication: “Hey, I just want to make sure, do you really want me to do this?”

That's an example where maybe it's a fairly benign request, and maybe the amount of friction that's needed is small, or maybe none at all. It wouldn't exactly be the worst mistake in the world. But if your boss is asking, “Go out and buy a Bentley for employees,” then it's a different story.

Hey, I want my boss to do that.

Yeah. How about that? Yeah, I'm still waiting.

Where do you see scams and cybersecurity going in the next couple of years? What are the things that aren't happening yet that we need to watch out for in the near future?

I do think that there's this change in the way we educate employees and the way we build the processes within the corporations to account for this new world, to recognize that these tools can do things that simply weren't possible before. I think that does change the way businesses put in place certain processes as well as education.

I've been a big advocate for making some changes, for example, something as simple as adding an extra layer to anti-phishing training that lets people know you can't always detect these things. You need to understand the provenance of the request, whether it came through an authenticated channel, things like that. There's a big education part of it.

I do think that when it comes to cyber security in general, I think we have to simply look through that lens. We have to consider that the attackers, as they get more and more sophisticated, will get better and better at using AI and using deep fakery, mimicry, and things like that, in their attacks in ways that simply weren't possible before. You always have to ask the question, “How do you defend against those types of attacks?” I think that means that certain types of capabilities become must-haves.

As you heard earlier, I'm a big advocate, for example, for zero trust. It's maybe an overused term, but the principle is right on. Zero Trust, again, is all about denying visibility to the attacker, creating more visibility for the defender. That's exactly what you want to be doing. You want to adopt that kind of posture, get very good at doing the basics and doing them very, very well, and be wary of the snake oil salespeople.

AI doesn't solve every problem?

No. In fact, it's going to cause more problems than it solves in the near term, probably.

That's often true with most transformative technologies. We get really excited about them and then we realize, there are unintended consequences. We need to figure that out before we roll it out much wider.

Actually, related to that, I oftentimes talk about the possibility that we may be at a little bit of a plateau right now. It's hard to tell, but there are indications out there that LLMs, for example, might be roughly at a plateau, meaning just throwing more data and throwing bigger models at it isn't going to really make a big difference. I don't know for sure, but it's possible that we're at a bit of a plateau. I would welcome that.

People treat a plateau as a negative thing, because you always want to be continuing to advance. But a plateau just might be the pause that we need to assimilate the technology into the good use cases and find ways to bar or prevent damage from the bad use cases.

Things going too fast can have all these unintended consequences. While I love progress, I love being on the steep part of the curve. We've lived with the Internet for the last 20–30 years. It's been fantastic. This is a case where a pause might actually be a really good thing.

That makes perfect sense. During the pause, it gives people the ability to refine their product, refine their services, and then move forward once the big things are out of the way.

The last couple of years were so steep on AI that maybe we just need a little bit of time to assimilate all that change and do it in the right way. Maybe. I don't know. That's a tough one to predict, but that's a possibility that we have an opportunity now with a little bit of a plateau to do the right thing here.

That's amazing. Any parting advice as we wrap up here that we haven't already passed along?

I hope people will take the time to really dig into AI and learn how it works. Don't be bamboozled by the claims of magic. Take the time to find the right resources, learn how this stuff works, learn what it can do, learn what it can't do, and don't think of AI as simply the solve-all. Apply it where it makes sense, but don't use it where it doesn't make sense. Don't use it where it can cause harm.

Absolutely. Where can people find you online?

I'm on LinkedIn. I try to regularly post things there. I'm on Twitter. I think in both cases, I just use my full name, @robertblumofe as my handle. I think those are the main things that I use as channels online, Twitter and LinkedIn.

We'll make sure to link those in the show notes. Thank you so much for coming on the podcast today.

Thanks for having me. Yeah, great conversation. I really appreciate the opportunity to talk about these topics that I love.

Awesome.
