With the increase in targeted cyber attacks, it's more important than ever for organizations to quickly identify and respond to threats. AI is helping security teams by acting as virtual analysts, handling much of the investigation work. However, human oversight is still essential for the final steps and judgment.
Today's guest is Michael Lyborg. Michael is the Chief Information Security Officer at Swimlane. Before taking his current role, Michael was Global Vice President of Advisory Services, where he was a highly sought-after expert advising the world's largest Fortune 500 companies and global government agencies on the creation and operation of industry-leading security operations.
In this episode Michael shares his experience and wisdom on today’s cybersecurity challenges. We talk about the balance of automation and human oversight, the risks and rewards of putting AI into security operations, and defense in depth strategies. Michael also covers how military style threat assessments can help with cybersecurity, how AI is evolving for threat prioritization and analysis, and the need for continuous testing and monitoring to prevent automation failures. If you want to know how to stay ahead in a complex cyber world, this episode is full of practical advice.
“For every automation you introduce into your personal or professional life, test it and then build in controls to make sure that it’s running.” - Michael Lyborg
Show Notes:
- [01:06] Michael has been with Swimlane for about 7 years mainly focusing on larger enterprises, government clients, and partners. He's helping with the automation journey and experience. He also built security programs for other companies and was a Marine.
- [02:07] Prior to the Marines, he did IT and network security. Michael is originally from Sweden.
- [04:22] Michael explains operational risk management and conducting limited threat assessments. He's always thinking like a hacker and looking for gaps in security.
- [06:29] Michael tells a story about his wife's recent experience with a cybersecurity scam.
- [12:11] How a company decides what level of friction is appropriate to implement proper security.
- [13:59] Michael talks about balancing what is and isn’t automated.
- [16:16] Michael shares the story about his early days of automation.
- [17:23] Continuously review and monitor your automations.
- [18:41] Starting with documentation is a good first step.
- [21:45] Michael talks about how awesome it is being able to work in security and automation and help businesses grow and achieve outcomes. He believes in automating the mundane tasks.
- [22:26] We learn about AI being involved in the defensive side of cybersecurity.
- [24:50] AI can also bridge the gap between the security team and non-technical people.
- [26:33] We discuss places where AI probably shouldn't be used.
- [27:58] Find where AI works for you and then think about incorporating it in your security services.
- [31:01] The importance of having controls in place when using AI whether it's for security or data analysis.
- [33:00] Risk can be reduced by training on specific tasks.
- [34:18] Michael shares the value of mixing human and artificial intelligence through Swimlane.
- [39:08] The importance of bridging gaps and getting rid of silos.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Michael Lyborg on Swimlane
- Michael Lyborg on LinkedIn
Transcript:
Michael, thank you for coming on the Easy Prey Podcast today.
Thank you, Chris. Happy to be here.
Can you give myself and the audience a little bit of background about who you are and what you do?
I’ve been with Swimlane for about seven years now, mainly focused on larger enterprises, government clients and partners trying to help with their automation journey and experience overall.
Prior to that, I built a couple of security programs for some public companies, and way before that, I worked for the US government in the Department of Defense, specifically the United States Marine Corps. Once a Marine, always a Marine. That’s still me.
People can’t see me right now, but I still keep my hair fairly short. What happened before that? Before the Marine Corps, I did IT and network security, so more on the NOC side, the early days of routing and switching, the Cisco days.
Way before that, if people are having problems understanding what’s coming out of my mouth, I emigrated from Sweden; I was originally born and raised there. That is who I am, and hopefully we’ll go more into what I’m currently doing, or what we’re currently doing, here at Swimlane.
What got you interested in IT in the beginning?
I think it was because I broke my dad’s computer in the early 90s, and he’s like, “Well, you’re going to have to fix it.” And that set the path. I did a couple of internships then in Sweden and here in the US as I was going through school, and then finally landed a full-blown job. I’m fixing monitors, printers, and computers, and then shifting away focus to remote access and network services. 9/11 then shifted that to more kinetic operations; I think that’s a friendly word.
As we continue to advance around the globe with our strategic goals and missions, operationally and tactically, technology becomes a bigger piece. It was always, “Hey, tell me who here has experience with HF, VHF, UHF, IP protocol, routing, packets, and other fun things.” I did what you should never do: I volunteered a lot.
As I wrapped up my tenure with the Marine Corps, I had a choice to make. Do I go the contracting route or work for the US government to continue on that path? Or do I take all this very mixed-and-matched technology, kinetic force protection, security in general, and try to go back to network and cyber information security programs? I picked the latter, and it has not been a dull day since.
I’m really curious because you have—we’ll stick with the phrase—a kinetic background. How has that impacted what you do in the virtual world?
There’s almost a curse word in the US military called operational risk management. Everything we do and everywhere we go, you would conduct a limited threat assessment, or sometimes a really large-scale one if you’re doing a major operation.
In that threat assessment, and in risk overall, we calculated the most likely course of action that we were going to take and that the enemy was going to take, and what it would probably look like. The whole war gaming, sandboxing, table-topping: that hasn’t really changed.
Some of the TTPs, obviously from a defender’s point of view, but always thinking like an attacker has helped out in the public and private sectors here on the cyber and info side.
It’s the skillset of what would I do if I were trying to do this?
Absolutely. Full gap analysis: looking at our fields of fire, what we can protect, what our center of gravity is, and what we do if the primary fails. Do we fall to the secondary or the tertiary? Always having contingency and fallback plans when it comes to defense in depth.
That’s interesting that the military experience would actually benefit in the IT space.
Yeah, and I think it’s also right when we do threat assessments, whether third party or internal. If you’re able to get through one system, then you see, OK, what’s the most likely or probable course of action now, follow that down, see where you can pivot, and then do new risk exposure and new risk assessments. It’s continuous. Whether you’re doing it from an attacker’s or a defender’s perspective, it’s helped me a lot, at least.
That’s really neat. Before we dig into the meat of this, I want to ask the question that I asked probably most of my guests. Have you been a victim of a cybersecurity incident or fraud or scam either personally or professionally? And can you talk about that?
Yeah. I’ll do a recent one. My wife, like all of us, does a lot of online shopping, and I’m still a bit of a hobbyist; I have advanced network firewalls here at the house and over the WiFi network. I was in Europe and received an alert from the firewalls, from the IPS, which actually ended up blocking something. My wife was at an online shopping store, I’ll leave out the name, and had attempted to put in and submit a payment.
The systems actually worked and blocked that transaction, so that’s a success story. I think we all have lots of failures, and we can do a million things right, but if we do one thing less than ideal, then we have other outcomes.
Phishing is still really real, whether it’s personal or social media. We always see that. I’ve still fallen for it. You click on a link, or you review a document, or you sandbox something and then assess and see what they’re trying to do. Those have still happened from a corporate perspective and a personal perspective. But I’ve been very, very fortunate, as far as I know, not to have any major exposure. Most of it is from public breaches, etc.
There’s not much the public can do about breaches that they have no control over.
And I’m hoping there’s a little bit stronger legislation to that moving forward where people are held accountable for some of those control failures.
Right now, at least in the US, we’re still pretty loosey goosey with what happens after a data breach. It’s obligatory, “Hey, we’ll give you a year of identity theft protection.” “Great; thanks.”
But then the burden is on the individuals of rotating and rolling all their credentials and trying to figure out if there’s been any secondary impact on their personal or financial status.
All very scary things. Do you think your professional background has helped you be safer personally in terms of cybersecurity?
Yeah. We should probably throw my wife on here because she doesn’t obviously enjoy most of this, but the paranoia and really having a zero-trust mindset has been pretty helpful for us, I think, as a family. You don’t need to access everything in the world and we should inspect what we expect, so I think to answer your question, very briefly is just yes, it helps out.
If you can’t answer this question, that’s fine. Has your cybersecurity profession helped? Do you have kids and has it helped them?
They’re really young. I think so, but it’s also really healthy to see the board of education here where we live. It’s very proactive when communicating and everything else. Trying to socialize and educate at an earlier age, I think that’s awesome.
Then if they can then also tap in or we as professionals can go and provide advice and assist. It’s like, “Hey, have you thought about this?” Or, “Here’s a couple of public resources that you can disseminate to the rest of the parents or the student body of dos and don’ts.” I think that’s super helpful. We all come together and just help each other out a bit.
It’s been interesting to hear different parents’ interactions with their kids from the realization that my cybersecurity paranoia is a little too much for my five-year-old. I need to figure out how to tone it down and put it on a plane that they can understand without frightening them about life.
And I like that too, as far as whether it’s personal or professional. It’s easy to implement technical controls to restrict and try to help mitigate some of the risks. But every control that you put in often introduces a level of friction, whether that’s your wife, your kids, or the company you’re working with or for.
There’s still risk and you can’t go full speed ahead and just stop everything because if productivity goes down, then that obviously might be even more of a risk to the business or to your personal livelihood that you want to deal with right then and there.
Measured approach, small incremental improvements, I think, is really huge, then coupling that with education. But just education without any technical controls, I think we’ve all seen that doesn’t really work. So it has to be a good hybrid mesh of the both.
How does a company figure out what level of friction is acceptable for their environment? A bank might decide, “Hey, we’re OK with tons of friction for our employees.” But if you’re a manufacturing company and your employees have to spend 20 minutes verifying stuff before they can sit down at their desks, that’s probably not productive for the environment. How does a company figure out where they should be in that model?
I think it’s important to manage those relationships in the business. So interview stakeholders and all the way down to the bottom of the front line. Whether that’s your security analyst, your IT team, your inside-outside salespeople to people on the floor in a manufacturing facility.
Looking at that human behavior when you test something, hopefully with a smaller test bed or population, to see their reactions, receive feedback, and enumerate across the security space to make sure that you’re managing those technical controls.
Above all, people are great, but we’re human and we often take the shortest path, so it’s easy to say, “We don’t like this.” “OK, but why? How would you like to receive the change?” Rinse and repeat. It’s a bit of a plug here, but automation obviously helps out. We’re doing things at a higher speed and pace. Remove the noise.
What are some of the risks of automation? From my own life, I’ve realized that if you over-automate, in some cases it can introduce its own risks, and if something in the automation fails, it becomes more devastating. How do you balance what is and isn’t automated when it comes to cybersecurity?
We try to focus a lot on what has the biggest impact: the highest volume and something that has low risk. If you take care of that first, you get back what you can’t buy, which is time. But for every automation you introduce into your personal or professional life, test it, and then build in controls to make sure that it’s running.
What happens if it’s failing? How do you get notified? Who is supposed to fix it? Those sound basic, but they can become pretty complex tasks of routing them through your organization. But mainly focus on the high volume first, and test a lot of it or test all of it for an extended period.
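The control Michael describes here, knowing when an automation fails, how you get notified, and who owns the fix, can be sketched as a simple heartbeat check. The automation names, cadences, and ownership routing below are illustrative assumptions, not anything from Swimlane's product:

```python
import logging
from datetime import datetime, timedelta, timezone

# Hypothetical registry mapping each automation to its expected run cadence.
AUTOMATIONS = {
    "phishing-triage": timedelta(minutes=15),
    "firewall-log-ingest": timedelta(hours=1),
}

def check_heartbeats(last_runs, now=None):
    """Return the automations that have missed their expected cadence.

    last_runs maps automation name -> datetime of the last successful run.
    Anything overdue should be routed to a named owner, not silently dropped.
    """
    now = now or datetime.now(timezone.utc)
    overdue = []
    for name, cadence in AUTOMATIONS.items():
        last = last_runs.get(name)
        if last is None or now - last > cadence:
            overdue.append(name)
            logging.warning("automation %s is overdue; notify its owner", name)
    return overdue
```

The point of the sketch is the shape of the control, not the details: every automation gets an expected cadence, and a missed heartbeat becomes someone's explicit responsibility rather than a silent failure.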
I was chuckling to myself because a long time ago in my career, I had built, let’s call it, an automated anti-fraud risk platform. At some point, we started noticing stuff slipping through that shouldn’t have been, and I had never built anything to make sure the automation stayed running.
At some point, it had failed without alerting anybody, and it definitely did not alert me until we started seeing the results. It’s like, “Oh, OK. I’ve got to make sure the automation is running. It doesn’t help if it doesn’t exist.”
Absolutely. To couple with automation in general, an example of bad automation is that garbage in often equals garbage out. As part of zero trust, don’t trust everything; test everything, and make sure that the signals you’re getting match the output. Then you reverse-map everything.
A good example is when I worked with someone many years ago. Keep in mind that in the early days of automation, it was Bash, PowerShell, and Python scripts that somebody had to feed and maintain. It was very complex, whether you ran on a cron schedule once a day or once a month to help out with some reporting. Or if it was trigger-based: OK, what’s the trigger? What happens if that trigger fails? Compare that to now, where it’s low code and we can do it visually.
But going back to the example, someone thought it was a good idea to bring in a bunch of low-fidelity IP blocks, and they were truly just automating drops between their VPCs, or virtual networks, across private, public, and hybrid clouds. They ended up cutting off one of their own network segments. Really good, high-quality information in will let you automate more effectively without those undesirable outcomes.
Is there a schedule on which people should review their automations?
I would say continuously. For everything that we have automated, we’ve actually automated monitoring, like synthetics, to then say, “Well, this has changed. What are you going to do about it?” Or maybe you haven’t received all the firewall logs in the last hour, so that means something is wrong either with your SIEM or with your signal sources. Then have those journeys mapped out and know who you can bring in, from a collaboration perspective, to help fix it.
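The "no firewall logs in the last hour" synthetic mentioned above is a check on the signal source itself rather than on an automation. A minimal sketch, with the one-hour window as an assumed threshold:

```python
from datetime import datetime, timedelta, timezone

def source_went_quiet(event_times, window=timedelta(hours=1), now=None):
    """Synthetic check for a signal source that has gone quiet.

    event_times holds the timestamps of recently received events (for
    example, firewall logs). If none of them fall inside the window,
    either the source or the pipeline feeding the SIEM is broken and
    someone should be paged.
    """
    now = now or datetime.now(timezone.utc)
    return not any(now - ts <= window for ts in event_times)
```

In practice the window would be tuned per source, since an hour of silence is normal for some feeds and an emergency for others.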
Then I imagine documentation of those automations has got to be of super-high importance.
That’s probably one of the best things that we do. A lot of organizations have pretty decent, let’s just call them run books—if this, then that. So if we start with that, even if it’s a, “Hey, here’s a tier-one triage task. Here are the 19 things you’re going to do if you get something like this.”
To pick that, digitize it, automate it, orchestrate it, whether that’s 50%, 70%, 90%, or 100%, now you already have the backing documentation to support what you’re then automating in those playbooks.
That’s a good first step. Start with documentation: interview the people it impacts, figure out how they do it, and then automate and orchestrate it.
Yeah, because reverse engineering documentation is always a nightmare.
Whiteboards are great.
We’ve got the process. Now we need to figure out how to document what it actually has been doing and how long it’s been doing it.
And a lot of the tooling that we have allows you to export your configuration, so that it’s easier to transform what you have in your system and then go from there.
What are some of the processes that absolutely, or at least currently, should be retained in the human space?
We use something like risk-based prioritization. We’re very fortunate with the signal sources that we have. We know if it’s a VIP user or high-value asset or customer data and then what type of data.
We started with the data first, and then we mapped our enterprise and cloud ecosystems. Anything that’s customer data is obviously confidential, and it can move up to restricted for financial and other data stores. That made it easy to contextualize the priority of these different places or sources of information.
I’m trying not to be too long-winded, but if anything comes in that doesn’t match a certain set of parameters or something that you know, don’t try to automate your way around it at first. Build baselines based on what you know, then take that from an automated case, maybe escalate it to an incident, and force people to have eyes on it. Especially when it comes to those examples I mentioned earlier, like service accounts, API keys, and rotating authentication and authorization methods. If you mess that up, the downstream effects can be pretty costly.
But at the same time, if you’re not doing them, not automating 70%, 80%, or 90%, your teams waste all their effort on very mundane, repetitive tasks, so it’s definitely a balancing act.
It’s the thing that, after there’s been a cybersecurity incident, there’s probably more human time that needs to be spent reevaluating what was missed. What did the automation not catch?
Yeah, the after actions, the post blast, those things. People often think they’re not fun, but I always say that what we do for a living is awesome. We get to utilize some of the best technologies in the world to meet our corporate goals and the outcomes we need to help the business grow and secure our customers and internal enterprise. That’s all fun. When you do an investigation, that should still be fun.
Coming to work on Monday is fun. Leaving work on Friday should also be fun. Everything in between should not feel like you’re just managing tickets and not enjoying it.
Those are the things that we generally start first with. What do you not like about your job? If you have a hundred people doing the same a hundred tasks a day, and they each take 30 seconds, that adds up to money. So start there and then work your way out.
Where do you see AI being involved in cybersecurity on the defensive side, let’s say?
It’s a great segue, I think. We’re fortunate. Our co-founder, Cody, said something really interesting a while back, and it’s changed how I look at it. We used to look a lot internally: how did we build our AI? How do we segregate it? How do we separate it? How do we protect it from the wider Internet? What are the gateways? How have we trained the data sets? How are we maintaining it? All of those things are important.
But that’s also a very singular focus. Look at one AI tool, like where we’re recording this now, where we can transcribe it and use generative AI to provide a summary statement automatically. That’s probably a good use of it. Not necessarily security-focused, but then let’s flip that.
From a security team, one of the costlier things that we do is when we transfer the responsibility or an investigation between language barriers, possibly a time zone, then using AI to translate what you’ve done, what the next steps are, recommended actions, and observations, super powerful.
I think most of the tools that we have in our stack have a level of AI/ML in them. It helps do most of the heavy lifting on the signal source side so that when we get it into our fusion center, it already has most of the enrichment and context. Now we can ask, “OK, so what?”
Let’s look at the lessons learned the past 100 times that we’ve done these investigations. What did we do there? So that if we have a junior level analyst triage one of these cases, or maybe escalate it to an actual incident, a lot of that work and lessons learned can then be rolled up, summarized, and help those individuals out as well.
Do you also see AI being helpful in bridging the gap between the security team and people who are not technically inclined in writing reports and analysis? “Hey, take this incident and explain it in a way that a nontechnical person would understand.”
Absolutely. If you ask me what I didn’t like doing the last couple of years, most of it obviously prior to Swimlane, it was exactly your point: taking CSV data, or structured and unstructured data, from 30 different sources, making sense of it, spending 4 to 8 hours a week turning it into pivot tables, and then presenting that to the board or the rest of the leadership team.
That’s not a good use of time, but we didn’t really often ask, “Well, how would you like to consume this information?” Now with AI you can say, “Hey, my name is Mike. I’m the CISO at Swimlane. Please provide a nontechnical summary of the observables quarter over quarter, month over month, and year over year, and then tell people whether we’re improving or declining, or whether mean time to detect and respond is increasing. What are these KPIs?” That is super awesome.
Getting a summary paragraph that makes sense. Obviously, test it and make sure the data is correct, but that has opened up a lot of new doors in how we think about the problem, instead of translating very technical, security-related KPIs for people who aren’t necessarily security people.
Do you see places where AI should absolutely not currently be used?
There’s an interesting place that we’re working on quite a bit, so I’m not going to say it should not be used.
Not ready for primetime usage, how about that?
A good one: if we go to 100 or 1,000 different customers of ours and ask them, “How do you do phishing triage, or how do you do EDR or SIEM triage?” we’ll get a baseline of steps one through five, then generally a huge deviation in the middle of the business process, and then a lot at the end, the last mile.
What I think is going to be interesting in the coming year is how we use AI to help guide people in building out that automation, because right now it may capture 70% of what those playbooks should or would look like, but expecting it to do 100% of everything that matches your business process, that’s not real right now.
If someone is promising you that, that’s a bad thing; you should run away.
I definitely agree with that. Anyhow, whether the AI is part of your security stack, your general IT, or something else, figure out what it works really well for and then see how you can incorporate it, maybe even into your security program, like translation services and other things.
Where do you see AI impacting security, let’s say, five years down the road?
I think we’re going to continue. I just came from Black Hat Europe in London, and it’s hard to walk 10 feet without seeing automation and AI under literally every logo. Then it’s seeing, well, how are they using it? How should it be adopted? How can we help educate our peers, our customers, and our partners on where it works really well?
As an example, one thing where we’ve seen huge advancement is what I’ll call virtual analysts. It doesn’t mean that you should auto-close everything and just trust AI 100%. But they can do everything up to the last mile, or maybe even the last 30 seconds, of an investigation.
Those are where I think we’re going to continue to see the biggest leaps and bounds, where depending on how everybody’s training the data, we should be able to do a lot more automation, a lot more context-based, risk-based, priority-based.
Really the prioritization and closing, and figuring out, if anomaly X equals Y, then go off and do these other things. But just having AI on its own isn’t really going to solve the problem. You’ve got to couple it with human intelligence to match your internal business processes, and then push it all the way to the right: how do we enable those actual human analysts via the virtual analysts?
I’m maybe grasping at straws here. Is there a supply chain risk with AI being used in the security space, in that, “Hey, if I’m a nation-state, if I can manipulate or introduce bad data into somebody’s AI learning, I can get it to do things that it shouldn’t be doing, or it’s going to leave holes where it shouldn’t be leaving holes.”
From my perspective, that goes beyond AI as well. It’s literally every library or package that we import. OK, what automated processes can we build to make sure that we’ve tested and vetted it, and then look at the full supply chain and SBOM in bulk to make sense of everything? All the ingredients in your secret sauce, or whatever it is that you’re building.
It’s not just for AI, but you’re absolutely spot on. Trusting AI blindly without having the controls in place would be just as bad as doing that in your regular CI/CD or software development life cycle.
I hadn’t thought about supply chain issues with AI or even large language models, like how would I go about feeding it bad data? We see the results in the legal space, where AI seems to hallucinate quite specifically. It makes me wonder how easy it is to put content out there that AI will pick up and inappropriately ingest.
I think we see it every day. It’s just a matter of curating the data, building out those kinds of controls and tests around that to see what happens when that gets ingested, then figure out how we make sure that this doesn’t happen again.
That’s for other minds other than you and I to figure out.
Yeah, I mean, I think it’s super interesting, but I don’t have that 40-pound brain that a lot of the people at our company, Swimlane, have so I’ll leave that to them.
We provide advice and assistance, saying, “Hey, we saw this on another model. What are we doing?” Then also be specific. If you’re building a security-related tool for your customers or consumers, and I always joke about this, I don’t have it trained on how to cook a filet mignon. That’s not relevant to the data set.
You can reduce a lot of the risk by training it on the specific tasks you want it to respond to or generate information about, then test that automatically, and throw all the wrenches you can at those systems to see what happens when things don’t go as expected.
Do you or your team get to do that as part of the job, the “let’s try to break the stuff that we’ve built”?
Yeah. Whenever we see things in the wild, it’s really working with our AI/ML team and the data scientists: “OK, let’s see what happens when we do this.” I just saw this weekend that Amazon has published some really good tooling as well on how to test and automate a lot of these.
Fun. Is there anything that we’ve missed about what Swimlane does, who your target audience is, and why they should use you?
No, we haven’t missed anything, though this could be a four-hour or eight-hour talk. The biggest thing where we see real value, whether through human intelligence or human intelligence mixed with artificial intelligence, is really the unification. You can buy all the best platforms in the world and try to solve for what we call good enough, but that good enough may not be good enough.
It’s easier to manage one massive platform that does everything for you. Where I think we’re uniquely placed in the market is we work with anyone and anything. It doesn’t even necessarily have to have an API.
We can reach in and get data from almost any data source, and we can affect and impact IT, fraud, and all these different ecosystems to get the desired outcomes and automate those via orchestration engines that execute tasks on behalf of a human. Then marry and merge all of that into, ultimately, at the end of the day, as a business, what are we trying to do? We’re trying to manage risk. We’re managing risk, and we have all these regulators saying, “Show me evidence of all of this.”
Well, if you’ve orchestrated everything really well, and now you’re talking to these bespoke identity providers or cloud ecosystems, MSPs, MDRs, and EDRs, I could list a very long set of acronyms, but let’s just call them signal sources. Then you have the ability to unify them so that you don’t have to train your teams, from an IT, security, or risk perspective, to know 30 different tools.
Train them to know your tool. So if you do replace your EDR, is there even a user experience change? Or is it just, we received a signal: Mike visited a website with a poor reputation and clicked on a link to download a payload, and we isolated his endpoint, as an example.
That doesn’t change from vendors X, Y, and Z. So abstracting all of that and allowing them to do more with less, that’s really the real value, I think, of orchestration automation.
I could add more to that, but the unification of all of these is really where the most fun is, because now we can draw business intelligence data out of that.
Like year over year, if you did replace a SIEM vendor or an EDR or something like that, did our efficiencies increase? OK, then why is that? Did we reduce OpEx or CapEx? Now we can take those good things, present them to the rest of the leadership team, the board, and investors, and go from there, saying we’re making real progress here.
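The vendor abstraction described in this answer, where replacing an EDR or SIEM doesn't change what analysts see, can be sketched as a thin normalization layer. The vendor names and field names below are made up for illustration and are not Swimlane's actual schema:

```python
# Map vendor-specific event fields onto one unified signal schema, so
# playbooks and analysts see the same shape regardless of which EDR or
# SIEM is behind it. Vendor and field names are hypothetical.

FIELD_MAPS = {
    "vendor_x": {"user": "account", "url": "target_url", "action": "response"},
    "vendor_y": {"user": "subject_user", "url": "uri", "action": "verdict"},
}

def normalize(vendor, raw_event):
    """Translate a raw vendor event into the unified signal schema."""
    mapping = FIELD_MAPS[vendor]
    return {unified: raw_event[native] for unified, native in mapping.items()}
```

Swapping vendors then means updating one field map, while every downstream playbook and report keeps consuming the same unified keys.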
That discussion made me think of a question. Because you’re involved with multiple customers, do you see any interesting trends in the data of we’re seeing a lot of this that we haven’t seen before?
No, I think it’s just a steady increase in pretty much everything, and I know that’s a horrible answer. Obviously CSPM, vulnerability management; it’s hard to say this is the most important thing you should focus on. But we see a lot of targeted attacks as well. I think it’s more about being able to build out some attribution, because then you can figure out what they’re really trying to get to.
The TTPs. You may not get them if you have an EDR team, an identity team, and other teams that don’t work together, and you don’t have the luxury of running something like Swimlane. Then you have all these really smart, brilliant teams working in silos.
By unifying them together, now you can pull all those needles out of many different haystacks, then make sense of them all, and maybe learn something that you wouldn’t see from a singular plane.
The analogy I’m thinking of is a word search, one of those games where you’re trying to find words in the little boxes. Once you’ve been staring at it and can’t find any more words, someone comes over, looks over your shoulder, and goes, “I see this one, this one, and this one,” because they haven’t been looking at the same data you have, and magically they see things you’ve missed.
And it’s the same thing with IT and OT: bridging all these gaps and getting rid of the silos. We have a use case where we run this. All the RFID key cards for our office, secure rooms, and facilities actually feed up to our fusion center as well. Now you can actually track a maintenance person coming in.
If you then see that they tried to gain access, or did gain access, to a room they obviously shouldn’t have, and then you start seeing network and identity anomalies, now you can trace back from the physical perspective all the way through, and that helps out with the investigation.
You don’t have to then go and pivot and submit a ticket to IT and say like, “Hey, give me the last 90 days of all the access and audit logs from our physical security system,” because it’s already part of all the information that’s been normalized and enriched into your investigation.
“That new contractor, Bob, walked into the server room and all of a sudden, lots of things started happening that shouldn’t have happened.”
“Yeah, they unplugged that red cable.”
“We need to go have a conversation with Bob.”
If people want to reach out and have a conversation with you, where can they find you and Swimlane online?
The best bet is swimlane.com. We have some excellent blog posts and we can share some of these links with you here in a bit. There’s so much content. It’s a good thing to use our swimlane.com front end to narrow down the search on what you’re interested in learning more about, whether it’s just IT operations, security operations, OT, fraud, and all the other vulnerability management use cases. Or just risk and compliance as well, like how can we help you automate what you don’t like doing?
Let’s have a conversation afterward. There’s lots of stuff in my life I don’t like doing.
There you go.
Michael, thank you so much for coming on the podcast today. I really appreciate your insights and what you bring to the table.
Thank you, Chris. It’s been my pleasure.