In cybersecurity, we need to understand the hacking mindset, which is distinct from hacking ethics. We also need to realize that even cybersecurity experts get hacked, so there is no reason to feel embarrassed or ashamed when it happens to us. In today’s show we’re going deep into the world of cybersecurity with one of the industry’s most seasoned experts, Sam Curry. With over 30 years of experience in information security, Sam has been defending against cyber threats, shaping security strategies and mentoring the next generation of cyber professionals.
Currently the Global VP and CISO-in-Residence at Zscaler, Sam has also held leadership roles at companies like RSA, McAfee and Arbor Networks where he helped pioneer innovations in VPN technology and personal firewalls. But cybersecurity isn’t just about firewalls and encryption—it’s about mindset. Sam joins us to talk about the hacker mentality, zero-trust security and why even the best security professionals get hacked.
From his early days in cryptography to mitigating major cyber breaches, Sam shares his insights on how businesses and individuals can defend themselves in a digital world. If you’ve ever wondered how cybercriminals think, how AI is changing the security landscape, or what you can do to stay one step ahead, then this episode is for you.
“The biggest problem in security today isn’t just the attackers—it’s the way businesses build single points of failure in peacetime. We need resilience, not just efficiency.” - Sam Curry

Show Notes:
- [00:55] Sam is Global VP and CISO-in-Residence at Zscaler. For the last 32 years, he's been involved in every part of security at some point.
- [01:23] He teaches cyber and used to run RSA Labs at MIT. He currently teaches at Wentworth Institute of Technology, and he also sits on a few boards.
- [02:41] We learn how Sam ended up working in cybersecurity. He has patents in VPN technology and was one of the co-inventors of the personal firewall, which was sold to McAfee.
- [04:14] There were security principles before 1996.
- [07:38] Sam feels a need and a mission to protect people. It's very personal to him.
- [08:40] He was there for the breach that RSA had. He's also been spear phished.
- [12:47] The Shepard tone is an audio illusion that can make people feel uneasy or even sick because it sounds like it's always increasing in pitch.
- [16:31] Scams are way underreported because people are too embarrassed to report them.
- [19:31] The challenges of keeping security up to speed. In peacetime we have to remember to build resilience and be antifragile.
- [22:10] Zero trust is a strategy and architecture for minimizing functionality.
- [28:14] There are immediate benefits from a security perspective to start creating zero trust.
- [30:17] Problems need to be defined correctly.
- [33:03] Even people who've done incredible research on hacking techniques have gotten hacked. There's no shame in it.
- [34:02] We need the hacker mindset. It's an important part of the human community.
- [36:44] The importance of making things easier to understand.
- [38:18] Advice for people wanting to get into cybersecurity: be just this side of ready and tackle things that are a little too big and a little too scary. Also, find allies and build a network.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Sam Curry on Zscaler
- On The Hook: An InfoSec Podcast
- Sam Curry on LinkedIn
- Sam Curry on Twitter
Transcript:
Sam, thank you so much for coming on the Easy Prey Podcast today.
It’s good to be here. Thanks for having me.
I’m super excited about this conversation. Can you give myself and the audience a little bit of background about who you are and what you do?
My name is Sam and I work for Zscaler. I am Global VP and CISO in Residence at Zscaler. I guess I’ve been in cyber now for 32-odd years. Every part of security at some point—Chief Product Officer, CTO, I’ve run engineering, been a CISO five times.
I’m also a fellow at the National Security Institute. I teach cyber. I used to run RSA Labs at MIT, and I currently teach at Wentworth Institute of Technology and Nichols College. I sit on a few boards as well. I have too much fun with cyber, basically. I also make sure to mentor at least one student every six months.
I love that. What got you into cyber to begin with?
By accident.
Tell me the accident.
I did cryptography and cryptanalysis in the early 90s, and I went back to school. Originally, I wanted to be a sci-fi author. Well, I wanted to be an astronaut, but I was never going to be that good, so I studied physics and I studied literature; what do you do with that as a degree?
I wound up going back into tech, and I wanted to go into biotech. I asked a friend of mine because his brother was investing in biotech in the mid-90s. I said, “Can you help me connect with your brother and find something?” I thought, “Biotech is the way to go.”
He said, “Tell me a bit more about your background,” even though I’d known him for a while. Actually, his first job out of college was working for my father, who was in tech. He has had an amazing career. About an hour into this short conversation, I realized I was being interviewed, and he hired me for a cyber company in ’96 because I’d had some security background before that.
As a result, I wound up with patents, ironically, in VPN technology, and was one of the co-inventors of the personal firewall, which we sold to McAfee. But what really got me stuck in it was that I went from doing documentation, to QA, to engineering, and eventually, when we were bought by McAfee, to product management. We were actually helping people.
We had a mom who called us up trying to buy a personal firewall, and I couldn’t understand why. It was something so technical that we had written really for companies to be able to put a barrier at the front of their PCs—we’re talking ’97 or ’98 by that point—so why was she buying it for home use?
It turned out she had a differently-abled son. I think that’s the correct term for it. Websites were being used to help with his education, and hackers were targeting those websites because the victims didn’t understand what was happening to them. These parents had gotten together and were purchasing personal firewalls, which cost $75 at the time—this was ’97 and ’98.
I was so disgusted that I think that’s when I said, “OK, it’s going to get worse,” and that’s where I found my mission. Then I went to McAfee, Computer Associates, RSA, and so on, and eventually Zscaler.
Nice. You said you worked in security before ’96; was there really much security before ’96?
There sure was. We just didn’t call it cyber. We didn’t even necessarily call it InfoSec, but there were certainly security principles. We absolutely had the CIA triad. There were some really good secure protocols. IPX/SPX was pretty good, actually, if you remember that; it was Novell. But we had security principles that were extended from physical principles. I did cryptanalysis and SIGINT at the time, so it grew out of signals. I’ve always loved puzzles. I’ve always loved riddles.
In my personal early days, I did linguistics. I’ve always loved languages. I speak several languages. I did classical Greek and Latin, and I always loved puzzling through these things. It was a natural extension to try to break codes and do that sort of thing.
One of the most useful tools was using computer science to do so. It just made sense. It’s applied mathematics, really, and for a purpose. Then there’s the question of how do you break it and how do you prevent it. And that absolutely has existed for a very long time, arguably since the ’50s as its own discipline.
Everyone knows about the Captain Crunch whistle and whatnot, but even my dad was saying that in the ’60s and ’70s, with time sharing and some of the old systems, even prior to what we would think of as mainframes, there were fights over sign-on on some of these shared computing systems.
Long before there was a computer science class, he’d say, “If you can hack the system and break the core of these ACF2, RACF […] Top Secret systems, because if you can break that and get as much access as you want, I’ll give you an A in the course.” That was his standing challenge to every class.
Then part of it was because he hated the IBM systems they were given. Sorry, IBM; this was a long time ago. He wanted to buy a DEC machine back then. Of course, he could then go to the dean and say, “See, I told you. You should’ve got the other machine,” which was very smart.
Have you preferred trying to break things or trying to prevent things from being broken?
It’s funny you say that because I’m sitting here with a Flipper. I have a Flipper and I’m charging it at the moment.
I actually just finally got around and got mine about a month ago.
I usually have it by my computer. I don’t have it right now. I usually have lock picks because what I found was I’ve always been able to pick locks, but it was a struggle. What I found was I was on Zooms, and I got very bored. Not that this is boring, but during COVID, you […].
What happened was, below the screen, you’d be looking at people and talking. Before, I would think that the picks were very small and the lock was very small, and I’ve got big hands. I was fumbling, even with transparent locks, which I was using to try to improve my skills. When I dropped it down in my head, the dimensions changed, and I realized the bandwidth you’re capable of through your nervous system is much greater than what you can perceive.
The locks became much bigger in my head. I’m sitting there and I went click and the next one went click and click. I was like, “Wow, I got out of the visual.” It was a huge lesson for me because it was like a leveling up, and it’s a great analogy for a lot of what we do.
Now to answer your question, I feel a mission and a need to protect people. It’s a very personal thing. I’m fascinated and intrigued with the attacker mindset, and I’ve always had it. But my brother once told me it was no shock I wound up in an industry where I protect people and that I have done so consistently. I think I get more satisfaction, and I have a mission of trying to make sure that people are not really hurt at any scale.
I feel a mission and a need to protect people. It’s a very personal thing. I’m fascinated and intrigued with the attacker mindset, and I’ve always had it. -Sam Curry

I like that. In that thought process, and as we always talk about before I hit the record button: have you ever been a victim of a scam, a fraud, or a cybersecurity incident?
Several times, actually. I know I said this really quickly to you when you asked me this, and I was comfortable talking about it. I’m always willing to talk about it. By the way, I have a lot of failures, but I don’t think those are failures. Nobody’s a failure for being put in the crosshairs.
Obviously, first, I ran product and was CTO at RSA. I was there during RSA’s breach, which is now 13 years ago and has been written about quite extensively. That was miserable. We didn’t just have one attacker; we had two attackers, one of whom was attacking the other one, both from the same nation state. It was absolutely savage.
But I personally have been attacked. I’ve been spear phished. I suppose whaled, which isn’t a great thing to feel. But my wife was attacked too. I remember we were sitting in bed. We went to bed late. We had young children at the time and were just settling down. She got a call that she thought was from her bank, and it was very, very well done. We’re not talking sloppy. This would have been nine years ago.
It was specifically going after her with the correct research. She looked at me and said, “That didn’t feel right.” She turned and I said, “OK, what do you mean?” She said, “I think that was a fraud.” She immediately called the bank and cancelled. They had already done something.
Now, this was nine years ago. This was almost automated and very well done even with the human-machine interface, even then. Look, I ran a product set that stopped and helped to mitigate loss to fraud for banking and card-not-present transactions for half-a-billion people. It doesn’t matter how smart you are or how well-versed in security. You can still be the target and still fall to it.
It doesn’t matter how smart you are or how well-versed in security. You can still be the target and still fall to it. -Sam Curry

The numbers may be lower, but that’s not the only time either. It’s happened to me several times. In fact, I’ve been personally targeted with everything from the “We’ve got pictures of you” scandal to “We’ve got your credit card; click here,” and of course, the “You haven’t updated your Netflix” hits.
I’m really good at not clicking on anything now. In fact, I think my own security department hates the fact that I even take things from home and forward them, just in case. It’s so bad when marketing departments mess up and send you things crafted like they’re phishing.
We went through all this trouble to train people across the business world and say, “Here’s what phishing looks like,” and then the marketing department in some company goes and uses those same practices. They don’t use their logo correctly or they go and use somebody else’s URLs. Why did you do that?
I know someone who got in trouble at his company because he got something, an internal company document or process or whatever, and it looked fishy to him so he just deleted it thinking, “Hey, I’ve done the right thing.” Then two weeks later, they’re like, “Hey, that project, we need it.” He’s like, “What are you talking about?” “Oh, we emailed that to you two weeks ago, and we said, dah-dah-dah-dah.” He’s like, “Oh, I thought that was a phishing attempt, so I just deleted it.”
Yeah, that’s got to feel… You can’t just lob it over the wall and be like, “Well, the action item’s on his plate now.” That’s a bad culture. I hope that didn’t stick to him.
No. It was one of those, everyone—
Project manager: “Did you receive it?”
Annoying at the moment, but ultimately it’s like, “Yeah, we look back at it. Yeah, it did seem a little bit…we see how you could have thought that.” It was no harm, no foul, but still an uncomfortable experience to be had, probably on both sides.
Absolutely.
With your wife getting the phone call from the bank, 10 years ago seems pretty early on in terms of good spear phishing.
Yeah, it’s funny, but there is a recency bias. I remember in 2003 talking about phishing. I remember in 2008, I was talking about spyware. But there’s always this sense of, “Well, it’s just happened. It just happened.” We used the term spear phishing back then as well, though compared to today’s levels it felt primitive. Have you ever heard of the Shepard tone, by the way?
No.
People can google this and take a look at it. The Shepard tone is an audio illusion. Some people can feel a sense of warning from it; some people can feel sick from it. It’s like one of those old Star Trek episodes where somebody’s screaming as this thing on the wall, a torture device, is turned on.
What it does is it sounds like it’s always increasing in frequency. Of course it isn’t, but the way that it loops, your ear thinks it’s continually going up, and this can cause you quite a bit of anxiety.
I love listening to it because it’s like, OK, after a while, you can hear how it’s looping. But it’s that sense of it’s always worse than it was before. It can’t always be worse than it was before. That would be a bubble. But to some degree, it is much worse than it used to be. But even then, we felt like it was heavy at the time.
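For anyone curious how the illusion Sam describes actually works, here is a minimal Python sketch of a Shepard-Risset glissando. It is not something discussed in the episode, and the sample rate, base frequency, octave count, and loop length are arbitrary choices. The idea is to stack octave-spaced sine tones under a bell-shaped loudness curve, glide the whole stack upward, and let it wrap around so the pitch seems to rise forever.

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44_100      # samples per second
DURATION = 12.0           # total seconds of audio to synthesize
LOOP_SECONDS = 4.0        # how long one perceived "rise" takes before wrapping
BASE_FREQ = 32.7          # lowest partial in Hz (roughly C1); arbitrary choice
NUM_OCTAVES = 8           # number of octave-spaced partials stacked together

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
loop = (t % LOOP_SECONDS) / LOOP_SECONDS   # 0..1 position within the current loop

signal = np.zeros_like(t)
for k in range(NUM_OCTAVES):
    # Each partial glides up one octave per loop, then snaps back down;
    # the snap is masked because a neighbouring partial takes its place.
    octave_pos = k + loop
    freq = BASE_FREQ * 2.0 ** octave_pos
    # Bell-shaped loudness envelope: partials fade in near the bottom of the
    # spectrum and fade out near the top, hiding the wrap-around.
    amp = np.exp(-0.5 * ((octave_pos - NUM_OCTAVES / 2.0) / 1.5) ** 2)
    # Integrate the instantaneous frequency to get phase, since it varies in time.
    phase = 2.0 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    signal += amp * np.sin(phase)

signal /= np.abs(signal).max()   # normalize to [-1, 1]
wavfile.write("shepard.wav", SAMPLE_RATE, (signal * 32767).astype(np.int16))
```

Play the resulting shepard.wav on repeat and the endless-rise effect should be audible, even though the file wraps every few seconds.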
Let’s face it. If you go back and look at the size of breaches back in the 2010 to 2015 time range, we had big breaches happening. That’s when the Target breach happened. That’s when Equifax happened. That’s when Sony happened. They weren’t recent, and there were phishing components in some of those as well.
It’s interesting that you mentioned the recency bias on that because I started thinking when you said that I’m like, “Maybe it’s just that we feel like things are getting worse.”
Maybe they are. Maybe we feel they are, and maybe they are in localized ways. It’s like the second law of thermodynamics, that things tend towards a more entropic state. But it’s not true locally. That’s how you can get an increase of information in one area. That’s how you can get life created, effectively. Otherwise, everything would just go to the cold energy death of the universe in a sprint.
This is true. It can be getting worse locally. I’ll tell you, it’s a lot worse in Ukraine right now than it was five years ago. It can get worse, but overall, things are generally roughly the same. The absolute numbers are going up, but compared to GDP, I don’t know. I haven’t done that number.
I’ve heard ridiculous numbers of tens of trillions. I mean, I’ve heard numbers that make it seem like it’s three or four times the Canadian GNP. Don’t know if that’s true. But I think it’s worth comparing dollars to dollars and comparing it to GNP.
I know, for instance, that in most reports the ratios are usually correct year over year, even if the absolute numbers are wrong. Usually, if the FBI report that comes out every year says ransomware is up 30%, that’s correct, even if the instrumentation of ransomware is poor in both years.
No slam on the FBI, by the way. A lot of people call it self-reporting, and it’s a consistent read and all the rest.
Yeah. I mean, I talk about scams, and inevitably, after an episode, I’ll get emails, and I talk to people who have been through it. I think it was Debbie Johnson; she lost a million dollars to a romance scam. Every time she gets up on stage and talks about it, even years later, she’s emotionally dealing with all of that again. But then it’s just constant droves of people saying, “Yeah, it happened to me too, and I was too ashamed to report it.”
Whatever numbers that we have on these things, whether they’re ransomware, whether they’re romance scams, crypto scams, or rug pulls, they’re way underreported, and we probably don’t even begin to see the tip of the iceberg of the tragedy in people’s lives.
Completely right.
Now I entirely forget what we planned on talking about. We’ve jumped off this deep end of tragedy here.
There is change, and sometimes change is disruptive. Sometimes when a change hits, it truly is different, and people talk about a singularity; that’s come up a few times.
I think […] was one of the ones who said it first. He was a sci-fi writer, but others have too; I think von Neumann did as well. Werner said it would happen because of genetics, robotics, information technology, and nanotechnology.
When those sorts of curves approach vertical asymptotically, we wouldn’t know what we would look like afterward. But we don’t have to have anything quite so dramatic to have a massive, disruptive change in how we do things. The internal combustion engine, simply, was one of those moments.
But we look back at it historically and we say we see what came out of the other side of this. Therefore, it was a good thing, or it was a productive thing, or maybe it was a good thing. I think we have a little bit of a bias there.
Cory Doctorow, who’s a Canadian sci-fi writer, actually said recently that there’s a bubble effect going on around AI. He said, “But that’s not a bad thing.” He said that anything interesting has a bubble effect. I’m paraphrasing him. The question is, when the bubble bursts, does it leave value behind? When the telecom bubble burst, it left infrastructure around. When Enron’s bubble burst, it did not.
There are other companies in particular, when there’s a lot of fraud involved and no value created, those are the bubbles that are horrible. But there’s always a bubble effect when something interesting is happening because capital rushes to it.
What I can say is that that’s one of the things you see when there’s a truly disruptive technology: it actually changes how we invest and what we invest in, in many, many ways, both as investors and within our businesses when we choose to build things like R&D.
Now, having said that, I guess what we’ve got are not quite black swans but incremental changes. Is it really that different? It can go quite far in some cases, and sometimes it’s just like the Shepard tone. But then you’ve got the really disruptive things, and now the potential for truly disruptive things, and what happens in that world is security has to catch up, and privacy has to catch up. It can be very, very different.
Do you see that as a particular challenge, getting security to keep up?
Well, yeah. There are a few problems. I used to call it the investor paradox. It is that in peacetime, we tend to create single points of failure. What does your CFO want to do? Well, they want cheaper things, and they want fewer suppliers. Let’s move to fewer, more strategic providers who can give us more stuff.
What do you do in wartime? Well, if you really look at the Second World War or the First World War, when there’s any significant ability for your opponent to damage your logistics and supply chain, you build redundancy. You go to lots of sources so that if anybody takes out your fuel depot, you have other fuel depots. Wartime tends to change some things.
One thing is the investor paradox. In peacetime, we build infrastructure that’s not very resilient. That’s interesting. We have to remember, especially in peacetime, to build resilience and to be anti-fragile.
This is a difficult thing to do because it puts security at odds with IT. It says, “Hey, we need to really think about disaster recovery and business continuity, and we need two of everything at least. And we need to test this stuff. We have to do the disaster scenarios.” And everyone’s going, “I’m sorry, we’re trying to really get efficient in our P&L. Could you please just back off?”
That’s just one problem. The biggest problem is still alignment with the business for most of us, because people haven’t realized that when done correctly, you can embrace new change.
But the other thing is that there’s this incrementalism, in a first-order chaos world where the enemy is really nature, in the pursuit of a better infrastructure with closer to five nines, which is what IT does. You get a little better, a little better, a little better. It also means you never go back and tear out the studs and re-architect.
What do we get? Well, as I said, in ’96 I filed my first VPN patent. We’re still using VPNs. Why are we still connecting networks to networks and struggling with micro-segmentation? We do, when in fact we’re fully capable of moving up the stack, turning the network into a commodity, and doing logical segmentation, app targeting, and so on.
User groups can do this as a software-defined provisioning and authorization. We don’t because we’re still saying, “Let’s not do away with the investment of 25-plus years, nearly 30 years. Let’s not do away with that.” But in wartime or when a disruptor comes along, you go, “Wait a minute. I have to deal with this topography, this attack surface?”
One of the things I would suggest is that zero trust is in fact a strategy and an architecture for minimizing functionality. Least function and least privilege in an infrastructure such that you have the most flexibility, adaptability, and ability to service and provide high quality to the business so you can support complexity without being complicated. But that requires going back and saying, “OK, that thing we just spent 26 years building, 27 years, maybe we should think about doing it differently,” and people get very scared of that.
IT has become custodians of infrastructure rather than brokers, which is where they should be, where they want to be when you really ask them, “Do you want to be keeping the lights on, or do you want to work at app provisioning?” They know what they want to do, but that’s a big jump. That’s a move your cheese moment.
It’s a hard decision to make. I think for those who’ve been around computers a while: at Apple, when they went from OS 8 or OS 9 to OS X, they basically said no backwards compatibility. “We’re throwing it all out, and we’re starting from scratch.” That’s a risky move to make.
It is, and I remember having products like that. I had this with a SIEM product back in 2009 when I was at RSA; we had this product called enVision. I had the same thing with an access control product at CA. Same problem.
If you don’t provide an easier upgrade path and movement path, it becomes a do-I-even-want-you moment for the customer, and that’s fair. It takes courage to do that. It takes leadership to do that. But you’ve got to be customer-centric when you do it. You don’t have to roll over and do everything that the customer asks. They will come with you in a vision if you articulate it right. That’s just my experience.
But now we’re actually in the cloud age and truly deeply in it. The turnaround with Agile development is so much quicker now than it was back in those shipping-box days that you can be closer to the need. You can observe in close-to-real time if you really use and see that.
Companies have changed things like pricing, and they’ve changed things like delivery mechanisms. They’ve gone from products to services, and they kept their customer base. This is now known how to do it. Of course, some people insist on relearning it all the time, and it can be quite painful.
But I think if you take the zero trust principle to heart, you actually become far less vulnerable. What do I mean by that? I have an analogy that I told you I was going to bring up, but I haven’t told you the analogy yet, which is Go, the game of Go.
When AI was first playing Go—DeepMind did this—it was playing some interesting games of Go. DeepMind gave us 59 new openings for the game of Go. Now, the grandmasters, because mastery in the game works like a martial art, looked at these openings and reacted.
There’s an article in The Atlantic that said it was like watching an alien play, like a gift from the far future in how to play a game they were masters of, 8th and 9th dan-type masters. With the openings, they couldn’t fathom why the stones were placed on any of the 361 intersections the way they were.
When you’re that good at Go (it’s a territorial game), you win or lose by a few intersections. But they were getting trounced. Absolutely trounced. First of all, the AI didn’t learn or develop the game as a form of narrative, which is how the human mind works, and it was finding entirely new vectors of attack.
OK, so let’s go back to the security world. In the security world, that translates into vulnerabilities being found in places we never expected. In Go, there’s something called jōseki: when somebody makes an opening, what is the optimal response to it? There were no jōseki for these moves. The equivalent for us is that there’s no ability to patch that.
It’s funny. There’s still Heartbleed out there. Some of those things are never patchable. You have to find a way to get in the way of it. It will be like that. If you don’t have a zero-trust approach where you say, “If I don’t need the listener, it shouldn’t be out there. If I don’t need to be provisioning and putting somebody onto a network, I shouldn’t do it.”
How do I get the least privileged, least function accessibility? I’ve got a coffee pot, why does it have an FTP server on it? Come on, is it because the COTS OS had it? That’s ridiculous.
This is, at the heart of it, the way that we beat the AIs that are coming, or at least get more resistant to them ahead of time, and, by the way, more resistant to other disruptors, because there’s more than just AI. It’s by minimizing the number of open intersections they can choose from. Change the rules of the game, effectively.
By the way, I say others because we’re on a track of AI development. More things are coming. Some of them will be more disruptive, things like agentic AI, AGI, and so on. But there are other tracks of disruptors too, like synthetic biology. There’s robotics. There’s nanotechnology. We’ve got drones. There’s quantum. There are a lot of other things happening at the same time.
You don’t want to be rethinking a classic architecture that’s 26 years old, trying to figure out how to change all your libraries because you’re no longer effectively encrypting anything (that’s an exaggeration; you’re not sure if you can prove that you are, or what might be stolen), and also having to deal with an entirely new form of AI that is just whistling through vulnerabilities you never expected. Could you imagine doing that?
But I’m not interested in FUD. The way to get ahead of it, quite constructively, is to take these principles, and it’s not a binary, zero-or-one game. You can start to take trust out of an environment, which has immediate benefits from a security and risk perspective, and you can approach simplification of IT, which saves you money, makes for a better user experience, and frankly, is easier to support if you do it right.
Nice. You were talking earlier about lock picks, and I was just thinking about that as we were talking about other vulnerabilities. I was having a conversation with somebody about how they bought a new lock for their door. They were really proud, like, “It’s going to take a thief half an hour or 45 minutes. Yeah, you can pick it. But it’s going to take you 45 minutes, so therefore no one’s ever going to get in my house.”
I looked at it, there was a rock on the ground, and I said, “You know you’ve got a window right there. Sure, you’ve got a lock that’s going to take me 45 minutes to pick, but I could throw that rock through that window in two seconds.”
I would have been impressed if he’d somehow wired an electric current to the rock and zapped the person who picked it up like that. I’d be like, “Come on, dude.”
Then he’d have actually thought about it.
Well, probably because you suggested it. How do you booby trap the path of least resistance? That’s a deception practice.
But again, it comes down to the biases we were talking about. We get biased into thinking, “I have to protect my door because that’s where I come and go from. Therefore, that’s what I have to protect.”
That’s not actually the problem.
The problem is that I can pop open your window faster than anyone can pick a lock. I don’t need to pick the lock. There’s another option.
Yeah. The problem has to be defined right. This is a fundamental thing I learned when I was at Arbor Networks. We were part of Danaher at that time, and they had this amazing system for problem definition.
In business, they used to say a problem is defined as a target, a miss, and a trend. Then you go find the root cause by going to the actual data around it. This was so useful, because the problem for your house is not how do I make the door stronger. It’s how do I prevent people from getting in. That’s the problem.
As soon as you start fixating on a path of least resistance, you’re squeezing the balloon. It’s going somewhere else. I’m mixing metaphors. Sorry about that. But you’re not defining the problem correctly. You’re getting distracted with the beauty of a solution. The same thing can happen in business, and it can certainly happen in security.
Have you found that to be a common pattern in security, being so focused on preventing X, Y, or Z?
Oh, yeah. This is the main problem with GRC. GRC is wonderful. It catalyzes behavior. We get some regulation. It says, “You now have to pay attention. Congratulations.” Then the problem is now not defined correctly. I’ll give you an example. Why do we say that you need to log six failed logins?
Because you want to detect a pattern.
Something like brute force, yes. But bad guys don’t do that now unless they’re trying to distract you or fill up your logs. They don’t. They turn up and they get the password right. The last thing they want to do is get it wrong. They don’t brute force the door anymore because everybody has that rule now. Arguably, if you didn’t have that rule, maybe they wouldn’t do it. But techniques have evolved to the point where that rule is useless and is filling up logs.
You know what that rule does? It actually tells you who’s struggling in your user base. We have countless examples of that. The actual things that you care about, from finding bad guys and improving incident response perspectives, are not the same as the things you’re required to collect.
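As a rough illustration of the control Sam is describing, here is a minimal sketch of the classic failed-login rule. None of this is from the episode; the log format, usernames, and six-attempt threshold are hypothetical stand-ins.

```python
from collections import Counter

# Hypothetical auth log: "timestamp username outcome"
LOG_LINES = [
    "2024-05-01T09:00:01Z alice FAIL",
    "2024-05-01T09:00:05Z alice FAIL",
    "2024-05-01T09:00:09Z alice FAIL",
    "2024-05-01T09:01:12Z bob SUCCESS",
    "2024-05-01T09:02:44Z alice FAIL",
    "2024-05-01T09:03:01Z alice FAIL",
    "2024-05-01T09:03:20Z alice FAIL",
    "2024-05-01T09:05:00Z mallory SUCCESS",  # attacker with a valid stolen password
]

THRESHOLD = 6  # the classic "six failed logins" control

failed = Counter(
    line.split()[1] for line in LOG_LINES if line.split()[2] == "FAIL"
)

for user, count in failed.items():
    if count >= THRESHOLD:
        # In practice this fires mostly on legitimate users who forgot a
        # password; an attacker who shows up with valid credentials never
        # trips it at all.
        print(f"ALERT: {user} has {count} failed logins (threshold {THRESHOLD})")
```

Running it, the only alert is for a user who mistyped a password six times, while the hypothetical attacker who logged in successfully with stolen credentials never appears, which is exactly the gap Sam is pointing at.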
As an example (I realize somebody is now chewing on their keyboard listening to this, but I imagine a few other people are nodding as well), I have a good friend, Josh Corman, who says this about HIPAA: he challenges people to say why they still need it. That’s bold. I hope I didn’t just call him out on that, but I have tremendous respect for him for saying it. He said it publicly on a podcast, so it’s OK.
You have to define the problem correctly. Then you have to keep your eye on the problem. You have to keep making sure you’re still defining it correctly. The thing about human biases is that you’re still human.
At the end of the day, it doesn’t matter how smart you are. You can still get hacked. None of us are immune. I know some really brilliant people who’ve written with incredible insight on hacking techniques and hacking research who get hacked, and there’s no shame in it.
At the end of the day, it doesn’t matter how smart you are. You can still get hacked. None of us are immune. -Sam Curry

The fact is that one of the techniques the bad guys use is, in fact, shame, to keep us from sharing it. To hell with that. I’ll always share it, even if it’s mortally embarrassing. I know my peers will be like, “No, dude. We got your back.” Because otherwise, this wouldn’t be our industry.
It always comes back to we have to figure out how to think differently. Not that I’m trying to promote an Apple slogan here.
They may have that as a marketing slogan, but they don’t have a monopoly on the idea. I think it’s really important, and you and I have spoken about this. It’s really important that in any population, you have those who think differently. We need the rebel mindset. We need the rebel archetype. We need hackers; they’re an important part of the human species. Take psychopaths, actually.
I’m not a psychologist. I don’t do clinical psychology. Feel free to comment or whatever if anybody is a clinical psychologist. From what I understand, there’s a fairly consistent rate of sociopathy and psychopathy in most populations, which means it serves a purpose or did at some point in human demographics and populations.
The same is true of other supposed aberrant or statistically marginal groups. They’re part of human society, and they’re important for our health as a species and our resilience.
The same is true of hackers. It’s a mindset. We need it to make sure that the person who’s always annoying and testing the boundaries, well, they’re the ones that made sure your hut wasn’t going to collapse, by testing it to see if it would collapse on a few of them. I’m imagining prehistoric times, so we’re out of science and into my opinion, but there you go. I believe that hacking is a mindset that’s important to us as a species.
I think even prior to computers, or before people had computers to play with, there were always people who just had a mindset for trying to break things.
Insurance people did that.
There’s a difference between malicious and not malicious. Let’s take the malicious versus non-malicious.
A hacking mindset is one thing. Hacking ethics and what you do with it is another. But our own intelligence capability as Western nations certainly came out of the insurance industry, because people had actuarial data; they knew exactly what was insured, they had thought about weaknesses, and then they could tell the airplanes where to send bombs.
That mindset of, “How do I break stuff? How do I poke at stuff? What happens when I do the unexpected?” Like a great personality for someone who’s doing QA. “Let me do all the things that people just don’t do. Let me just not follow the rules.”
The funny thing is I think we make it really hard for the younger generations coming in because we speak a different language and we have acronyms everywhere. They emerged slowly and arbitrarily. The actual ideas and concepts are very accessible.
I would implore anyone who’s listening to this—and this is one of the reasons that I mentor students. I would implore you to make it easier to understand. If you can’t make something easy to understand, you don’t know it well enough.
When I hear somebody pontificate with 50-cent words in long sentences, effectively gatekeeping another generation, I’m unimpressed. But if someone can take something like, “Here’s zero trust. What is it? It’s making sure that only the things that need to be accessed can be accessed, for as long as they need it. It’s a very simple definition with some big implications. Let’s go look at them.”
That’s probably the best definition I’ve heard.
Oh, thank you. See, I work at making it simpler—not too simple—and I look for analogies, because then you can start to get people in the mindset to go into the details. If you just throw 50-cent words at people in textbook sentences, you just want to impress your peers. I’d implore anyone listening to make it easier, because we shouldn’t be gatekeeping. That’s the effect, even if it’s not your goal. Just solve the problem.
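To make that definition concrete, here is a small, self-contained sketch of a default-deny, time-bounded access check. It is an editorial illustration rather than anything from the episode or from any vendor’s product; the group names, applications, and expiry dates are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Grant:
    user_group: str       # who may connect
    application: str      # the one thing they may reach
    expires_at: datetime  # and only until this moment

# Hypothetical grants; everything not listed here simply does not exist
# from the user's point of view -- there is no network to wander around.
GRANTS = [
    Grant("finance", "erp-reporting", datetime(2026, 1, 31, tzinfo=timezone.utc)),
    Grant("engineering", "ci-pipeline", datetime(2026, 6, 30, tzinfo=timezone.utc)),
]

def is_allowed(user_group: str, application: str) -> bool:
    """Default deny: allow only an explicit, unexpired grant."""
    now = datetime.now(timezone.utc)
    return any(
        g.user_group == user_group
        and g.application == application
        and now < g.expires_at
        for g in GRANTS
    )

print(is_allowed("finance", "erp-reporting"))  # True, until the grant expires
print(is_allowed("finance", "ci-pipeline"))    # False: no grant, no access
```

The design choice that matters is the default: anything without an explicit, unexpired grant is denied, so the question is never "what should I block?" but "what did I deliberately allow, and for how long?"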
As we wrap up here, for anyone who wants to get into cybersecurity, what pathway or recommendations do you have for them?
My biggest advice is this, and I realize it is really hard for some people; they look at me and they see a white man, and for some it’s especially difficult. But for everyone: we need you, and you should never be ready for the next job. You should be just this side of ready. You should be tackling something that’s a little too big and a little bit scary.
The biggest thing you can do is find allies and find a network. I put this out there, and it’s amazing; I never get overwhelmed. I always say contact me if you want to get in. I’ll even put up with the terrible sales pitches.
I know three things about a salesperson, by the way. I know that they don’t know me, I know that they think their product is necessary for me, and I know they just want to give me a PowerPoint that they can trap me with.
If a salesperson breaks that mold and comes to me legitimately to make a human connection, I will always listen. But if anyone ever comes to me and says, “I want to get into cyber. Help me,” I will do anything for them. Anything. I know that’s true of just about everyone I know who’s a peer, except for a few people that I don’t really like, and they know it.
Honestly, I think this space has got so much room for innovation and different ways of thinking. Don’t wait to be ready. Don’t think you need to go the formal path of a certification, for instance. Do it by all means, but don’t wait to start. And work on making friends and allies because we want you.
Don’t wait to be ready. Don’t think you need to go the formal path of a certification, for instance. Do it by all means, but don’t wait to start. And work on making friends and allies because we want you. -Sam Curry

I love it. Perfect segue. If people want to get ahold of you, how can they find you?
LinkedIn is probably the easiest. If you want to send me an email, it’s pretty obvious. It’s just [email protected], my first initial and last name. If people start spamming me, then I’ll take care of that. That would be easy enough to do anyway. The best way is probably LinkedIn. Just send me a message and connect with me.
It is not a community of trust, which means I will accept invites, but if people then prove to me that they’re just doing it for malicious reasons, they will be blocked with prejudice. But if someone says, “I just want to get to know you,” or, “Can you give me some advice for my career?” Boom. I will answer you. It’s coming.
I love that. Sam, thank you so much for coming on the podcast today.
Thanks for having me. It’s been an absolute pleasure, and I look forward to hearing it.