Security risks are dynamic. Projects start and end, employees come and go, and tools and configurations change constantly. Many companies hire pen testers on an annual basis, but systems are revised far faster than that, so you may need threat emulation for regular, ongoing testing.
Today’s guest is Andrew Costis. Andrew is the Chapter Lead of the Adversary Research Team at AttackIQ. He has over 22 years of professional industry experience and previously worked in the Threat Analysis Unit at VMware Carbon Black and at LogRhythm Labs, performing security research, reverse engineering malware, and tracking and discovering new campaigns and threats. Andrew has delivered talks at DEF CON, Adversary Village, Black Hat, BSides, CyberRisk Alliance, Security Weekly, ITPro, BrightTALK, SC Magazine, and others.
“Typically, once they have access, then the intent is to get their hands on as much sensitive information as possible because everything has a resale value.” – Andrew Costis
Show Notes:
- [1:14] – Andrew shares his background and what he currently does in his career at Attack IQ.
- [3:49] – At the time of this recording, there has been a major global security panic.
- [6:06] – There are many programs that we use on a regular basis that we don’t always consider the security of.
- [8:09] – Historically, companies would pay for an external pen test. Andrew describes the purpose of this and how they usually went.
- [9:33] – Pen tests and threat emulation do not need to be limited to just once a year.
- [10:45] – Andrew’s team is in the business of testing post-compromise behaviors. They preach an “assume breach” mindset.
- [11:55] – Attackers are lazy in the sense that they will reuse the same strategies over and over again.
- [14:13] – Many programs we use may be caught in the crosshairs of attacks and vulnerabilities in other companies.
- [16:41] – Andrew discusses the frequency of really critical CVEs.
- [19:01] – What do attackers go after when they’ve breached a system?
- [21:04] – The priority for attackers is to get in quickly and make the victim’s data unavailable.
- [22:24] – A lot of people are still in the vulnerability-scanner mindset. “Fire and forget” is not a beneficial approach.
- [24:56] – If we run every test, the amount of data will be overwhelming.
- [27:03] – In his experience, some client environments have been alarmingly easy to breach.
- [29:07] – There are also organizations that have done a fantastic job. But vulnerabilities will still be found.
- [30:18] – The red team is not going to be able to cover your entire organization.
- [32:15] – Threat emulation and adversary emulation are essentially the same thing. Andrew explains how he sees the difference.
- [33:50] – How are vulnerabilities and tests prioritized?
- [36:19] – Andrew describes the things his team works on and their objectives for customers and clients.
- [38:34] – The outage at the time of this recording had a big impact. It gave a really good idea of what could happen if it were a real security breach.
- [41:37] – There are a ton of free resources out there. The primary resource from AttackIQ is the free AttackIQ Academy.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Andrew Costis at Attack IQ
Transcript:
Andrew, thank you so much for coming on the Easy Prey Podcast today.
Thank you so much, Chris, for having me. It's a real pleasure to be here.
You're very welcome. Can you give myself and the audience a little bit of background about who you are and what you do?
Yeah, sure. My name is Andrew Costis. Many of my coworkers call me AC. As you can probably tell by the funny accent, I'm based in the UK. I currently head up the Adversary Research Team at AttackIQ. I'm the chapter or team lead of the Adversary Research Team. I've been here for just over three years now. I was previously at VMware Carbon Black working in threat research, doing lots of malware analysis and tracking of all the bad guys and things. Prior to that, I was working at LogRhythm doing similar kind of work.
How in the world did you get into the field? Was it something that you always wanted to do since you were a kid, or did someone tap you on the shoulder at some point and said, “Hey, you've got some skill, you should do this”?
That's a great question. I was straight out of school, straight into the workplace, doing field engineer type work, hardware break/fix, completely self-taught. I've got a few certs along the way, and then just gradually worked my way up to sysadmin, network admin, wearing all the hats, that kind of stuff. Eventually, an opportunity came about. I'd already expressed a lot of interest in breaking into security and played a long waiting game, but eventually a position opened up at a company that I was working at. That brings me to the present day where the last three or four job roles have been solely focused around CTI, malware analysis, reverse engineering, and all the good stuff.
Has your focus been hardware, software? Which angle is your background?
I guess when I was brand new to the industry 20-plus years ago, it was definitely originally hardware but only for a couple of years, and then eventually moved into more of the software side. Certainly over recent years, it's been very heavily focused on software, reverse engineering, and all that kind of stuff.
That's cool. Let's get this out of the way at the beginning of the conversation. Today is Friday, July 19th, and there's been chaos. I woke up to airlines are down, don't even bother showing up at the airport, worldwide crisis. What's going on? What did that make your day look like?
Yeah, certainly. It actually almost brings back PTSD from the WannaCry week, as I call it, back in 2017. This morning when I logged in, I obviously caught up on messages, newsfeeds, and whatnot. It had a very similar feel to the global problem that WannaCry introduced. Yeah, it started off with a bit of panic, but I got to the facts pretty quickly.
There was obviously a lot of speculation that this could be a cyber attack. As it turns out, there was a software bug introduced clearly by accident by CrowdStrike. They would clearly not have intentionally wanted to cause such an outcome for so many people today, especially on a Friday, but it just so happens that, given the nature of the bug, there was no simple, quick way of automatically reverting back to the pre-patch state. That in itself causes a whole load of issues.
Then you also have the added complexity of having BitLocker enabled on certain Windows machines. It adds to the complexity of recovering BitLocker keys and then being able to boot back into normal mode. Yeah, it's certainly been an interesting day so far.
In general, how long into an incident does it take to figure out, “Is this an attack or is this a bug?”
I think with this particular example, CrowdStrike, they're not a small company. They have thousands of customers, millions of endpoints that they're protecting. For this particular example, people were quick to pick up on the root cause of what instigated this domino effect. That helps because there are just so many people that use CrowdStrike, and not solely CrowdStrike; EDR, XDR, and certain security tools are now almost expected. Antivirus years ago was one of the only key security products that were used, and now EDR is also one of those.
You're talking about the domino effect. With large installations and systems like this, does getting them back online become almost as much of a problem, in terms of having to take more stuff down and then bring it up in a staged, strategic way, as opposed to just rebooting that machine or applying a patch and hitting reboot, and then magically as soon as it's up and running, everything is perfect?
Yeah, I think so. There's much more of a forensic approach to this, solely because it's not just Windows desktops, laptops, and PCs that are impacted. There are Windows servers, and there are multiple versions of Windows at multiple different patch levels. That adds complexity.
Obviously, most organizations use Active Directory. There have been reports on Reddit and other places that Active Directory domain controllers are also impacted if they happened to have the CrowdStrike sensor installed. Those types of situations would definitely cause a lot more headaches than, if there is such a thing, a standard configuration.
All the fun of dealing with an incident in real time.
When you talk about threat emulation and what you do, what is threat emulation and how do you go about performing it?
Before I do, let me take just a minute to look back at what happened before threat emulation came about. Historically, many companies out there would pay for a pentest, typically an external pentest, anywhere from one to five or however many times per year, and they would have a specific target with a very specific set of rules, a specific scope of what they can and can't do, and certain rules of engagement. That in itself would happen typically over a five-day period. Then a report would be given back with the various findings and the various recommendations for how to remediate and mitigate those particular gaps.
That's been how things have been for a long time. At some point, threat emulation, sometimes also called adversary emulation, came around. This is the more systematic, iterative, repeatable approach, whereby customers and clients are able to test their internal assets, their security controls, as well as their people, processes, and technology all year round. It doesn't have to be just once a year. They can test a single technique, which is typically aligned to MITRE ATT&CK, a really common, widely adopted taxonomy and framework used throughout the industry.
They can use real-world adversary behaviors aligned to the ATT&CK framework and put those to the test every single day of the year if they wanted to, or once a month, once a week, et cetera, and start to build almost a blueprint of how their network and security controls are performing, where their weaknesses and gaps are, what they are good at in terms of visibility, and what their response time is like in terms of their SOC and their IR processes. Then they can start to use threat emulation or adversary emulation to start improving, start to strengthen their security programs, start to improve their overall security defenses. In a nutshell, there's lots of other components to it, but that's hopefully a good summary.
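For readers who want to see the idea concretely, here is a minimal sketch of what a single ATT&CK-aligned atomic test could look like. The technique ID is a real MITRE ATT&CK identifier, but the scenario structure, runner, and detection fields are hypothetical stand-ins for illustration, not AttackIQ's actual tooling.

```python
# Minimal sketch of an ATT&CK-aligned atomic test record (illustrative only).
import datetime
import subprocess
from dataclasses import dataclass

@dataclass
class Scenario:
    technique_id: str          # MITRE ATT&CK technique, e.g. "T1059"
    name: str                  # human-readable label for the emulated behavior
    command: list[str]         # benign command that emulates the behavior

@dataclass
class Result:
    scenario: Scenario
    ran_at: str
    exit_code: int
    prevented: bool            # would be filled in from your control's telemetry
    detected: bool

def run_scenario(scenario: Scenario) -> Result:
    """Execute one atomic scenario and record the outcome for later review."""
    proc = subprocess.run(scenario.command, capture_output=True)
    return Result(
        scenario=scenario,
        ran_at=datetime.datetime.utcnow().isoformat(),
        exit_code=proc.returncode,
        prevented=False,   # in practice, query your EDR/SIEM for a block event
        detected=False,    # ...and for an alert tied to this execution
    )

if __name__ == "__main__":
    # Emulate a harmless instance of "Command and Scripting Interpreter".
    discovery = Scenario("T1059", "Benign discovery command", ["whoami"])
    print(run_scenario(discovery))
```

Running the same scenarios on a schedule is what turns one-off tests into the "blueprint" Andrew describes.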
For you and your company, do you focus specifically on the technical side, or do you also work on the social engineering front?
We are very much in the business of testing, or enabling our customers to test, post-compromise TTPs. We very much preach the “assume breach” mindset. As we know, every day is a zero day, and there is no getting away from that. Yes, sometimes you might get slightly quieter weeks where there haven't been too many critical CVEs, but we've now come to expect that at some point, every single week or every single month, there's going to be one or more very critical CVEs, and quite possibly exploits that lead to in-the-wild exploitation, that need to be addressed urgently.
The thing is that these critical zero days or N-days are not going away. The attackers are lazy. They don't necessarily create their own custom tooling for every single attack. Even nation-states don't do that. There's a common misconception that nation-states, which formally used to be called APTs—Advanced Persistent Threats—must be super advanced. That's not untrue, but the reality is that they are equally as lazy as cyber criminals.
A little bit of, why should I invest the time and resources to invent something new when there are plenty of tools out there that will give me a good chance?
Exactly. In its simplest form, it's just reusing code and tools in the same way that people in the legitimate workplace reuse code, tools, and scripts. Why start from the beginning if there's something that already exists? People, for legitimate reasons, do that every single day as part of their normal job. Why would threat actors not do the same?
Yeah. Let's take a step back. What is a CVE? And then I've got two follow-up questions about that. I know what they are. I forget what mailing list I was on. I'd get 40 or 50 of them every day. After a while, I was like, “Oh, gosh, this is concerning.” This is not actionable for me personally, so I stopped it. But for the audience, what is a CVE?
Common Vulnerabilities and Exposures. These are typically initial access type vectors. Not always, but they typically target web-connected devices. These are vulnerabilities that are found and given a severity rating, from very, very high down to very, very low. They could lead to things like remote code execution, local privilege escalation, man-in-the-middle attacks, or side-channel attacks.
There are so many categories. We've seen in the last year a flurry of critical CVEs targeting other security vendors like Palo Alto, Ivanti, Cisco Secure, and Citrix Gateway. Those types of devices that are very prevalent in customer environments are very much in the crosshairs of the attackers.
When these CVEs are first raised to the vendors, there's a very small window for the vendor to address and hopefully patch the vulnerability that was reported by the researcher or team who originally found it, before the threat actors and cyber criminals start to weaponize and then exploit that particular vulnerability in the wild. They can and often do that very rapidly.
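As a small, hedged illustration of how that severity scale is tracked in practice, the sketch below looks up a CVE's CVSS score from NVD's public API. The endpoint follows the NVD API v2.0 pattern, but the exact response fields may differ from what is shown, so treat it as illustrative rather than production code.

```python
# Rough sketch of looking up a CVE's CVSS severity from NVD's public API.
import json
import urllib.request

def cvss_severity(cve_id: str) -> str:
    url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    cve = data["vulnerabilities"][0]["cve"]
    # Prefer CVSS v3.1 metrics when present; fall back to whatever is listed.
    metrics = cve.get("metrics", {})
    for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        if key in metrics:
            m = metrics[key][0]["cvssData"]
            return f"{cve_id}: score {m['baseScore']} ({m.get('baseSeverity', 'n/a')})"
    return f"{cve_id}: no CVSS metrics published yet"

if __name__ == "__main__":
    print(cvss_severity("CVE-2021-44228"))  # Log4Shell, a well-known critical CVE
```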
With the CVEs, are they issued upon responsible disclosure, before the disclosure happens? Where in the responsible disclosure cycle does that happen?
Typically, the responsible researcher or oftentimes, other security vendors report CVEs through the formal channels, and they'll obviously notify the vendor as well. At that point, it's up to the vendor to provide a response not only to the researcher or third-party company, but also to their customer base. There's obviously a mixture of commercial as well as open-source tooling.
Some software is ubiquitous. Apache, for example—the Apache web server—has an open-source community that is probably responsible for looking into those types of vulnerabilities. Whereas with a commercial vendor, there will be a specific engineering team looking into it. Typically, one would hope that it would all follow the same kind of process. The thing that differs is the response time; the speed of some vendors versus others obviously differs quite substantially at times.
Yeah, I assume there are some vendors out there that, when things get reported, just don't say anything. There's no public disclosure of it. Maybe it does get patched, maybe it doesn't, but it's one of those, “We don't comment on pending vulnerabilities.”
Yeah.
When you say they're classified as critical down to trivial or non-critical, what's the frequency of critical CVEs?
Looking at just this year alone, around January/February time, there was definitely a noticeable flurry of really critical CVEs affecting highly prevalent security controls and gateway devices, things that are externally facing, because attackers are trying to leverage those as an initial access point. That's probably because in recent years, Microsoft have clamped down on some of the malicious attachment vectors that have always been very common. Those have probably reduced a little bit just because Microsoft have upped their game on security, so attackers have adjusted. They seemingly, at least in the last year or so, have started to get a bit more creative.
To my earlier point, once the attackers are in, the post-compromise techniques and tooling that they use is quite often the same or at least very similar. My point is that every single day, a new CVE, a new vulnerability, a new exploit will always be found by a researcher or a company. These will often be quite unique. However, that is only opening the front door. What they do once they're inside, they typically follow a very similar playbook from nation state actors down to your typical e-crime cyber criminals.
Once someone has broken into a platform or a system, what are the goods that they go after?
Typically, again, it can vary. We see this time and time again with ransomware, for example. They typically gain the initial foothold. They do discovery. They perform privilege escalation to a higher privilege user account. That gives them access to additional machines, where they can further entrench into the network, move laterally. Eventually, once they discover certain crown jewel assets that might have good, juicy information, they will then typically package it up, compress it up, and then send it back.
Some actors will bring C2 frameworks and rely on them to maintain persistence. Others might use offensive security tooling, typically open source, often used for post-compromise exploitation, to automate a lot of what I've just said. Typically, once they have access, the intent is often just to try and get their hands on as much sensitive information as possible, because everything has a resale value. It's not solely about ransom payments, necessarily, when it comes to ransomware; other details can also be resold on dark web forums and places like that.
Got you. Maybe this is a little inside baseball. Do attackers often close up the holes behind them so no one else can follow them in?
I don't do so much incident response and SOC work these days, but at least from my experience, I've not personally witnessed that. I still have plenty of friends that do work in SOC and IR, and I've not heard of those types of stories.
Hackers are, as I said, generally lazy. They carry out their mission and then they're off. They don't really care. Yes, there have been known to be certain actors and groups that are a little bit more covert. They're a little bit more stealthy. They're a little bit more careful at trying to remain undetected for longer periods of time.
For example, ransomware, there is not that priority for them. The priority is to get in quick, steal information quickly, encrypt very important file servers and systems, make the victim's data unavailable, and then pressure them through extortion tactics to try and claw back some kind of payment for their minimal amount of efforts.
Let's switch back to your side then for the threat emulation. Someone says, “Hey, we've got this platform, this system, we want you to test.” Is the first thing that you do to pull out all the script kiddie toys and let those things go first, or do you have your own built stuff that you work through? Or is each opportunity unique, and you try to actually find new things?
Yeah, that's a really good question. The typical approach is, “I want to just fire everything at the production network and all of our assets to get results.” The reason for that is there's this almost-misconception behind adversary emulation. A lot of people are still in the mindset of vulnerability scanners. They pick their network range and they pick a certain operating system type, for example, or certain servers that are running certain operating systems or certain software, and then they just fire and forget, and then they get back a lot of results. That's the mindset that they're used to.
I guess the benefits of doing threat emulation is that there's not necessarily a one size fits all. We definitely don't recommend to run everything. In our library alone, we have probably over 5,000 unique scenarios, if not more. The thing is that once you generate results, from an audit perspective, you cannot unsee those results.
AttackIQ's recommendation is to start small. Threat emulation isn't just solely about using a piece of software. This is actually about building a security program and repeatable practice that relies on communication and collaboration between your blue team, your red team, your stakeholders, right from the top level of CISO all the way down to the boots on the ground. For that reason, that's why we recommend start small. Start with just a couple of techniques. Just get into the mindset, the rhythm; build that muscle memory first.
Even just by testing one atomic technique, you can get technical findings, holes in systems or controls, or even process-related findings. Maybe you did detect something, but maybe your SOC didn't know about that detection until the following day. That in itself is a finding. It may not necessarily be a direct finding. I personally call it an indirect finding, but indirect findings are just as useful as knowing, say, whether you were able to exfiltrate data to a cloud instance.
I've been through one or two pentests, and one of them was absolutely overwhelming. Here's this vast dump of stuff with little-to-no prioritization in terms of severity, and it's going to take us forever just to read through all this and try to figure out, “Is this something that we should address first or last?” Is part of the process of what you do helping to prioritize and determine severity?
Not necessarily with the decision making of the prioritization, because that's down to each client and what their priorities are, but we certainly share our advice from our own experiences in terms of what we would deem more critical. Again, criticality can be subjective because if I can compromise a dev machine in a cloud instance, to some companies that might be a huge deal. Maybe they could get onto a dev machine and use a local compiler to compile some malware, in which case it's a very big deal. Maybe to other companies, it's not.
We do, certainly. With every single scenario, whether it's a single atomic scenario or what we call an attack flow, which is a chaining together of multiple techniques or scenarios, and regardless of how you run those assessments using our software, we bring to the surface what we consider to be the bare minimum of mitigation advice, detection advice, Sigma rules, all that good stuff, as well as the IOCs, as well as the ability to log in directly to those security controls in your estate and try to piece together how the behaviors compare to how you might respond and how they might be presented by your third-party tooling, and so on.
Got you. In some of your tests, have you ever done work for a client and been like, “Oh, my gosh, that was way too easy. We basically got access to everything without even trying”?
Yeah is the short answer. That certainly is the sad truth. However, on the flip side, although I don't do so much customer-facing work in this particular role, what I love is hearing all the success stories from customers. Like everything, you have a new security product and everyone's skeptical, and it's called threat emulation or adversary emulation, or breach and attack simulation. It's introduced and it goes through this infancy where no one's sure whether it was worth the investment.
To your point, it's not a case of if; a hundred percent of the time it finds and shines a light on gaping holes in all of these security controls that these customers often have. The last count I heard, I think the average is now about 40 or 50 security controls in a typical organization, and it could be higher. The misconception is the more tools, the better. When they find these gaping holes just through doing threat emulation and adversary emulation, it is eye-opening. It is very rewarding.
As sad as it is, it's also very rewarding to see the light bulb go off and the realization come: “We're actually getting value, not just from our products; we're also starting to get value from all of our investments across all of our security controls.”
Have you run across the opposite, where you had a client or customer that you thought, “Oh, this is going to be a piece of cake”? Their reputation of irresponsibility precedes them. Then when you do a test, it's like, “Oh, things are actually pretty good or way better than we thought.”
Yeah, mainly from colleagues, I've heard a few stories. There are definitely some companies out there that are doing an exceptional job. They still find stuff, though. No organization is perfect, and no organization is going to be bulletproof 365 days of the year. Things change, configurations change, staff churn, people come and go, new projects start up and shut down, tooling changes. Companies go into bake-offs between EDR providers, for example. Things are changing all the time. It's a very dynamic space, securing any organization, but there are companies that do a pretty good job, for sure.
Is one of the biggest challenges from the defensive perspective that just because you got it right today doesn't mean you're right tomorrow?
This is a really good point because, again, I'll go back to my example of pentesting and even red teaming. Even if you're fortunate enough to have an internal red team and you are able to perform semi-regular red team engagements, maybe once a quarter or even once a month, the point is that the red team is not going to be able to cover your entire estate. They're just not going to, and the same goes for a pentest. Even if you throw all the money in the world at it, they probably still wouldn't be able to cover your entire organization.
This is the power of adversary emulation. You can test all year round. Why is that important? Because it gives you that visibility when things regress. We don't really talk too much about regression because people are usually just so horrified about the initial findings that they might take a while just to absorb the shock. But once they've reached that point, then there's the topic of regression.
You may have a customer or a client that is very proactive when it comes to mitigation. They might close out that firewall port, protocol, or whatever to enable a more secure configuration. But maybe in two months’ time, a patch update comes out, and it might regress that particular configuration. How would anyone know? Who's monitoring the firewall configuration? Maybe there are people, but is anyone likely to be monitoring it all year round? Probably not.
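To make the regression point concrete, here is a hedged sketch of re-running the same emulation scenarios on a schedule and flagging any control that previously blocked a behavior but no longer does. The scenario names and results are hypothetical placeholders for whatever tooling is actually in use.

```python
# Sketch of catching regressions between assessment runs (illustrative only).
from typing import Dict, List

# Prior results, e.g. loaded from last month's assessment (True = was prevented).
baseline: Dict[str, bool] = {
    "T1048 exfil over alternative protocol": True,
    "T1021.001 RDP lateral movement": True,
}

def run_scenario(name: str) -> bool:
    """Placeholder: invoke your emulation tooling here and return True if the
    behavior was prevented. Hardcoded values stand in for a real run."""
    simulated_current_run = {
        "T1048 exfil over alternative protocol": False,  # regressed after a patch
        "T1021.001 RDP lateral movement": True,
    }
    return simulated_current_run[name]

def find_regressions(baseline: Dict[str, bool]) -> List[str]:
    regressions = []
    for name, was_prevented in baseline.items():
        prevented_now = run_scenario(name)
        if was_prevented and not prevented_now:
            # A patch, config change, or staff change may have reopened this gap.
            regressions.append(name)
    return regressions

if __name__ == "__main__":
    print("Regressed scenarios:", find_regressions(baseline))
```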
Yeah. People aren't saying, “Oh, we applied a patch, so now we have to go retest everything that we think was secure. We need to retest it all now because we applied a patch that was supposed to make security better, not worse.” What's the difference between threat emulation and adversary emulation, or is it the same thing?
I think it's the same thing. The way I view it is threat emulation was probably the original term that came about, and it was there to describe singular testing of individual techniques or subtechniques and behaviors. You can think of tools like Atomic Red Team, and I think they were probably one of the first to come up with an open-source tool that any company of any size can use to individually test techniques. Whereas I view adversary emulation as the more end-to-end approach, from post initial access to the end goal.
Did they execute some malware? If so, what happened next? Did they set up persistence? Did they then move laterally, et cetera? Attack flows—we internally call them attack graphs—but I think MITRE calls them attack flows. Adversary emulation, to me, is an attack flow, whereas threat emulation, to me, is an atomic singular test. Who knows? Maybe I'm wrong.
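A minimal sketch of that distinction, with hypothetical step names: a threat emulation is one atomic step, while an adversary emulation chains steps into an attack flow where each stage only proceeds if the previous one succeeded.

```python
# Illustrative attack flow: chained ATT&CK-aligned steps (stand-ins, not real payloads).
from typing import Callable, List, Tuple

Step = Tuple[str, Callable[[], bool]]  # (technique label, emulation action)

def run_attack_flow(steps: List[Step]) -> None:
    for label, action in steps:
        succeeded = action()
        print(f"{label}: {'succeeded' if succeeded else 'blocked'}")
        if not succeeded:
            # A blocked step ends the chain, which is itself a useful finding
            # about where the kill chain was interrupted.
            break

flow: List[Step] = [
    ("T1059 execute payload",       lambda: True),   # stand-ins for real scenarios
    ("T1547 establish persistence", lambda: True),
    ("T1021 move laterally",        lambda: False),  # e.g. blocked by segmentation
    ("T1041 exfiltrate over C2",    lambda: True),   # never reached in this run
]

if __name__ == "__main__":
    run_attack_flow(flow)
```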
As long as it results in people tightening their security and getting it right, I guess it doesn't really matter what the terminology is.
Exactly.
When someone hires someone to come in and review their security, what kind of findings do you guys produce, and what do they do with them? How do they sort through what we should and shouldn't prioritize in addressing?
It can vary drastically from one company to the next. Adversary emulation or threat emulation is typically very CTI-driven. The priorities will vary based on what is considered a priority to that company. For example, it could be that a company is more focused on their endpoint security, their end user compute as opposed to their server environment, or another company could be focused more on cloud assets because maybe they don't have that many on-prem assets, or maybe they don't have many endpoint devices.
Some customers might be more focused on traffic-based testing, testing content filtering or next-gen firewalls, IDS, IPS, that type of thing. They're more interested in directionality of traffic, ingress, egress, what's allowed, what's blocked, and that kind of stuff. It really can vary. When I say CTI driven, again, it comes down to what industry, what vertical are they operating in?
Obviously, ransomware has been a hot topic for multiple years now. Some organizations are more curious about certain actors and groups compared to other customers that are working or operating in different industries and sectors. There's definitely a lot of variables and unfortunately, no straightforward answer. I'll give the classic answer of, it depends.
I like your “it depends,” with the scope, magnitude, the industry, and all that. I think that all makes sense. Are there scenarios that keep you awake at night, like, “Oh, gosh, what if we forgot about this?” Today's patch, and airlines going down. Are there things that keep you awake at night, even if they're not necessarily threat emulation oriented, where one little patch takes down huge things?
Not really. I'm really blessed. I work with a great team of people. We work around the clock. One of our main goals as a team is to respond to CISA—the Cybersecurity and Infrastructure Security Agency. They release cybersecurity advisories and alerts, and one of our main SLA objectives is to respond to those and create content that our customers can then test in production. Not only that, but obviously we're continuously tracking multiple threat actors and groups all year round.
We have some very highly skilled people on the team. That gives me reassurance. Obviously every atomic scenario, every adversary emulation attack flow that we create and release to our customers, it is designed to run in production safely at scale continuously all year round without breaking anything. We go through a very involved QA process for every single thing that we release.
We're not spinning up a virtual machine or some virtual instance, doing all the tests, and then disposing of it. We are actually testing everything in production, whether it's on a server, an endpoint, a web application, or API requests, the various different ways that you can test your assets and your applications. Everything that we build is designed to run safely.
From that side of things, that doesn't keep me up at night. Like I say, there's always a big cyber attack. We hear about it all the time. I've been doing this kind of work for several years now, and you build up a tolerance to it. Like we were saying earlier, you wake up to today's headlines and you think, “Is today going to be another WannaCry day, or is it going to be worse? Who knows?”
What would the worst situation be, another WannaCry?
Honestly, I think we're seeing it right now. With the outage that's affected all of these industries and all of these organizations today, I would expect there to be a similar kind of impact. Perhaps the impact would be worse, but I think today has given a very good sampling of what could come. It definitely gives us a bit of foresight into maybe withdrawing some cash, because maybe today is the day you go to the shops and you can't use your card, for example.
In a sense, the intent behind what caused the failure is irrelevant to people who aren't running the platforms. If I'm an airline passenger, it doesn't really matter whether someone intentionally took down the airline's computers, or it was a patch that went awry that took them down. The result is the same thing to me: I'm not getting to my destination.
Yup, exactly. A quick analogy, but I'm a big Formula One fan. There was a recent race the other day and two drivers collided. There was this ongoing discussion for the next week of who was at fault. As you say, what does it matter? The outcome is still the same. Both drivers are out.
To the technical audience, yeah, of course we care. We want to know the juicy details of, was it a cyber attack? If so, who was behind it? If not, what caused some patch to do all this damage? The average non-technical family member or friend doesn't really care. They just want access to Netflix, Xbox, or whatever, because it's a Friday night. They want access to their things.
With remediation, there's no guarantee either way. Whether the downing of a platform was intentional or unintentional does not necessarily determine whether it's quicker or easier to get the platform back up and running. An unintentional downing of a platform can be just as bad as an intentional one.
Yeah.
There was the automotive support system platform that was ransomwared. That was clearly intentional, and it was probably down for longer because it was intentional, as opposed to just a platform issue.
Yeah.
If people want to learn more about the industry or to learn more about what you do, what kind of resources are available?
Yeah, absolutely. There's a ton of free resources out there, not just from us, but from the wider community as well. I'd say the primary resource from AttackIQ is the AttackIQ Academy. It's free and widely available, and it's open to everyone. You only need to create an account, and it opens up lots of free training videos, courses, and webinars. You can actually earn CPE credits along the way. It doesn't just talk about breach and attack simulation, purple teaming, and threat-informed defense; it covers many other topics such as AI and lots of other stuff as well. There are also expert speakers on there who give their perspectives in a general sense.
Obviously, check out the Academy. Obviously, MITRE as well. They've got some great free tools and projects. The MITRE ATT&CK Navigator, for example, is a great way of overlaying techniques, mitigations, detections over the heat map, and building custom heat maps and stuff. There's the ATT&CK Workbench tool, which is almost like your offline version of MITRE ATT&CK, where you can annotate the ATT&CK framework and collaborate and share notes with colleagues and peers.
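As a small illustration of the heat-map idea mentioned above, the sketch below generates an ATT&CK Navigator layer file from a set of technique scores. The fields follow the general Navigator layer format, but the scores are made up and the exact schema version should be checked against the Navigator documentation before relying on specific field names.

```python
# Sketch: build an ATT&CK Navigator layer (heat map) from emulation results.
import json

# Hypothetical scores from a round of atomic tests: 2 = prevented and detected,
# 1 = detected only, 0 = neither (a gap worth prioritizing).
tested_techniques = {
    "T1059": 2,
    "T1021": 1,
    "T1048": 0,
}

layer = {
    "name": "Emulation coverage (example)",
    "domain": "enterprise-attack",
    "description": "Scores from a hypothetical round of atomic tests",
    "techniques": [
        {"techniqueID": tid, "score": score}
        for tid, score in tested_techniques.items()
    ],
}

with open("emulation-coverage-layer.json", "w") as f:
    json.dump(layer, f, indent=2)  # load this file into ATT&CK Navigator
```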
Atomic Red Team as well. I know I probably shouldn't be plugging them, but they're a free resource and they've been around for a long time. I personally regard them very highly, and I know many of my colleagues do. It's a good starting point for sure. Another one is VECTR (vectr.io), which, again, is just a free platform for organizing your assessments and starting that process of collaboration and knowledge sharing. There are probably many others I could mention, but those are the ones that come to the top of mind.
Have you found our educational institutions catching up with training people in this field, or is it still pretty much learn on the job, figure it out yourself?
What, for my role in particular?
Just being involved in threat research and whatnot, and mitigation.
Yeah. Cyber security as an industry is very much sink or swim for the most part. For threat emulation more specifically, there definitely is a lack of awareness and a lack of knowledge around what it is and what the benefits are. I think part of the issue is that customers and employees are probably burned out with just hearing about all the shiny new toys that all of these vendors are releasing. I've been saying for years, there is no shortage of security tools out there. What there is a shortage of, in my opinion, is proactively addressing these gaping holes that threat emulation and adversary emulation can uncover and that other tools cannot. Simple as that.
Sounds good. If people want to be able to connect with you, where can they find you?
They can hit me up on LinkedIn. I used to tweet, but I just lurk these days. For sure, LinkedIn is good.
Awesome. Andrew, thank you so much for coming on the Easy Prey Podcast today.
Cool. Thanks for having me, Chris. It's been a pleasure. Thank you.