
5 Common Uses of Synthetic IDs with Stuart Wells

“I love figuring out how the fraudsters work and then reverse engineering how they do it to prevent it.” - Stuart Wells

“It’s a very simple process for the end user, but behind the scenes there’s a tremendous amount of machine learning and artificial intelligence used to do the verification or fraud assessment.” - Stuart Wells

There are entire communities set up to fight fraud involving synthetic IDs, as well as scammer gangs advancing ever more creative ways to use these documents. The landscape changes regularly as biometric authentication continues to advance.

Today’s guest is Stuart Wells. Stuart is the CTO of Jumio and is responsible for all aspects of Jumio’s innovation, machine learning, and engineering. He is an industry veteran with more than 30 years of tech experience. He was previously the Chief Product and Technology Officer at FICO, and held executive positions at Avaya and Sun Microsystems.

“Biometrics as a tool to use against fraudsters has grown globally.” - Stuart Wells

Show Notes:


Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. 

Links and Resources:

Transcript:

Stuart, thank you so much for coming on the Easy Prey Podcast today.

My pleasure, Chris.

Can you give me and the audience a little bit of background about what you do?

I'm the Chief Technology Officer here at Jumio. I'm responsible for all of our software development, cloud services, and machine learning, the majority of which goes toward verifying documents from over 200 countries—thousands of document types, approaching 7,000 and counting. It's our job here to verify legitimate documents and to detect and prevent fraud on a daily basis, no matter where it's happening in the world.

Is there something that got you interested in being in this field?

In the area of financial services, my involvement goes back almost 30 years, whether it was with the big banks of Wall Street, where I worked for a few years, or globally. I've worked on different types of fraud solutions: online credit card fraud, credit/debit, ATM, account takeover. With Jumio, it's basically identity theft and identity verification.

Just on a daily basis, it's like being a miniature Sherlock Holmes. I see attacks, and some of them are nothing more than a sticky on top of a document, and then others are far more sophisticated, like deep fake. Generally, I love to try and figure out how the fraudsters work and then reverse engineer how they do it and prevent it.

I like the idea that you're trying to figure out what they're doing and not just looking at the results of what they've done.

That's what makes it scientifically interesting and incredibly intellectually stimulating, actually.

What is the history of fraudulent documents, and how has it changed in the last couple of years? You think back to the movies from 20 years ago: take a photo, print it out, cut it up, slip it under a piece of plastic, and you've got a new passport.

Actually, it goes back a lot further than that. We're talking about banknotes from 100 years ago. The science of document security features is literally decades old. Initially, it was to prevent people from counterfeiting banknotes, and today a lot of that technology—it's incredible technology, actually—is used to prevent fraud.

For example, whether it's a UK passport, a US passport, or a South African one, there are actually 14 different security features on one page of the passport. It's everything from tactile text, where you can actually feel the text, to laser perforations, to holograms, to a polycarbonate covering, which is the same material that's used in bulletproof glass. There's incredible science in just a passport, including the chip embedded in it too.

I'm probably going to jump way ahead. There's probably a fundamental difference between a document like a passport that you're trying to use to enter a country versus a passport that you're submitting for some online verification.

No, they're exactly the same passports. The whole idea of the user journey is to make it as frictionless as possible. Whether it's Jumio or another vendor, what we're trying to do is have the user, using a mobile phone or their laptop, take a picture of the main page of the passport and then take a selfie. What we do is match the images.

It's far more than that, because we try to assess whether the age is the correct age. We try to assess whether the holder image in the document has been tampered with or not. It's a very simple process for the end user. But behind the scenes, there's a tremendous amount of machine learning and artificial intelligence used to actually do that verification or fraud assessment.
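To make the document-plus-selfie match a bit more concrete, here is a minimal sketch of the kind of comparison such a pipeline might perform. The `embed_face` function is a hypothetical stand-in for a trained face-embedding model, and the threshold is purely illustrative—nothing here reflects Jumio's actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-embedding model (normally a trained CNN).
    A fixed random projection keeps the sketch runnable and deterministic."""
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((image.size, 128))
    return image.reshape(-1) @ projection

def faces_match(document_crop: np.ndarray, selfie: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Compare the face cropped from the ID against the live selfie."""
    similarity = cosine_similarity(embed_face(document_crop), embed_face(selfie))
    return similarity >= threshold

# Toy usage with two 112x112 grayscale "images" of matching size.
doc_face = np.random.rand(112, 112)
selfie = np.random.rand(112, 112)
print(faces_match(doc_face, selfie))
```

The comparison step is the simple part; as Stuart notes, the heavy machine learning sits in the embedding model, the liveness checks, and the tamper analysis of the document itself.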

I've got all sorts of interesting questions brewing in my head here about how people do some of these things. Let's talk about the practical applications of synthetic IDs, and we'll talk a little bit about how they're created. Where are people predominantly trying to use synthetic IDs?

Pretty much in every walk of life, whether it's getting on an aircraft to travel internally or overseas, whether it's trying to rent or lease accommodation, whether you're using it to get employment, or betting/gambling. They're all examples of opportunities for fraudsters to use synthetic identities.

Recently for the Super Bowl, there were a lot of scam activities. Again, using that process, you can actually verify a user or help identify a fraudster behind the scenes. Anyway, the opportunity for fraudsters is immense. It's not limited to one industry. We see it whether it's hospitality, healthcare, banking, et cetera.

Do you see a lot more on the side of someone who is actually trying to commit criminal activity? I'm trying to think of the proper terms. If you have someone who is in a country illegally, let’s say, they're just trying to open a bank account, they're not trying to steal money from anybody, and they're not trying to commit a crime with that identity other than creating the account, is there a larger percentage that's being used for purely fraudulent purposes versus someone who's just trying to establish an identity for themselves?

I think there are large volumes of both. Until very recently, you could buy an extremely high-quality synthetic identity for between $400 and $1,000. Today, for $20, you can buy one. The barriers to obtaining high-quality documents are very low today. Whether it's being used by fraudsters in mass attacks or by an individual, I would say it's both.

I think the more challenging thing is the automation that fraudsters use. They can obtain a thousand documents on the dark web. Then they can use different techniques, like face morphs or deep fakes: take the image that's on the document, morph it with a selfie, and attack hundreds of times in a day. On both sides—whether it's an individual user just trying to get employment or establish residency, or whether it's mass attacks on the other side—we see everything.

Are you seeing a lot more businesses, even outside of financial services or identity-type services, turning to products like Jumio in order to authenticate their customers?

Yes, and there are multiple reasons for that: not only to protect the financial assets of their business, whatever business they're in, but also the reputational damage, which can be tremendous.

“Studies say that 65% of people are aware of deep fakes and are concerned about deep fakes, but the growth in deep fakes is well over 700% recently.” - Stuart Wells

We briefly touched on it earlier—deep fakes. Studies say that 65% of people are aware of deep fakes and are concerned about deep fakes, but the growth in deep fakes is well over 700% recently. I would say that there's a tremendous amount of awareness. The fraudsters are becoming ever more sophisticated; we've got to become more sophisticated to prevent them. Because of the awareness, because of the press, more and more businesses are using Jumio and Jumio-like services.

I know from my own scenario dealing with attempted identity theft, so to speak. There was a vendor that I used. I won't say who the vendor is or what I use them for because that's not relevant, but I got a message from them saying that someone had accessed my account. I knew immediately that I hadn't accessed the account.

I was able to get into the account and clean up what had been done. I ended up calling the company. They said, “Let's transfer you to the fraud department.” I got transferred to the fraud department, and it turned out someone had created a fake passport with my name on it, plus a number of other documents, to prove that it was me who had gotten locked out of the account, and they were trying to get back into it.

I start to wonder about companies with more valuable accounts. This wasn't financial or anything like that, and I had never provided them a photo ID when setting up the account; most companies don't ask you to do stuff like that. Are we going to start seeing more companies asking for identity documents when creating accounts so that they can do a better job of preventing fraudsters from getting into our accounts?

Most definitely. Biometrics as a tool to use against fraudsters has grown globally. I think there was a recent study showing that, of the top 100 countries in the world, only one doesn't use biometrics. Biometrics are used to get on a train in Tokyo or in Moscow.

Joking apart, for a while there the Chinese were experimenting with using biometrics to limit the number of sheets of toilet paper. Generally, biometrics are used either by themselves or as part of a two-factor or multi-factor authentication scheme. They're becoming more prevalent.

Interesting identity story, I suppose, maybe. My wife and I had been traveling overseas. We came back to the US. We're used to, “OK, here's my passport, here's this, here's that, stamp this. Where have you been? What were you doing?”

We came back in through Los Angeles International Airport, and we're going down the pathway and they're just like, “Just look at that screen over there.” We looked at it and they're like, “OK, go ahead.” I'm like, “You didn't even want any document from me.” I was almost alarmed with the ease that I got through.

That happened to me. I was coming back from Dublin on Saturday and using Global Entry. I just walked straight through, took the picture, and I just walked through security. They never looked at my passport. It's very convenient; it's frictionless. As long as the quality of the facial biometric is high enough, I think it's a very good mechanism for doing authentication.

That's slightly different from when it's a selfie, because there's a human being that can actually do a second verification. They're looking at the captured image, they're looking at you, they're making sure the two things match.

When you're doing a remote selfie, you've got to be a lot more careful because the fraudsters can inject the image into the camera or into the laptop. They're bypassing the camera. Even though you're receiving, you think, a selfie, you are not. What you're receiving is a digitally created image, and that's the real challenge today.

But it's both sides. As you pointed out, there's the convenience and speed of using facial matching, but then, depending on the communication vehicle, it can also open up all sorts of problems or attack vectors.

With respect to deep fakes, their usage increasing, and people being generally aware of them, have we gotten to the point now where, using free software, I can layer a different face over mine, do a FaceTime with someone, and look like someone else—maybe not someone who's well-known, but not me?

Yeah. There's a story that was published not two weeks ago, where a $25 million transaction took place. It was basically based on a deep fake. The person thought they were looking at work colleagues in a video communication. They were looking at, basically, images which had been morphed onto the face. It was such high quality that that person believed that they were dealing with their colleagues and made a $25 million mistake.

What people don't realize is that fraudsters buy a thousand IDs of different types, take the legitimate holder image that's on the ID, then take the face of the fraudster and do a morph. Now the fraudster has an image that looks very like the holder on a legitimate ID, and that will be matched and therefore verified. That's a real challenge for businesses. It requires some very sophisticated modeling to catch it.

That's alarming from both the business side and from the consumer side. With the rate that AI is growing and computational power is growing, this seems like it is a race that is not going in a direction that we can manage.

Today, you need machine learning to detect fraud that's based on machine learning. It used to be that you could have an army of well-trained humans who could look at the selfie, look at the document, and say, “Yes, fraud is taking place,” but that's no longer the case. The quality of the documents being generated by adversarial networks, which are a type of machine learning, and of course the deep fakes, is so high that not even someone who's trained can spot them these days.

There are some things that make deep fakes, at least today, discoverable by human beings. For example, blinking: it turns out blinking is difficult for a deep fake to reproduce. It also turns out the synchronization of your facial muscles with your voice is difficult for deep fake models to produce. So synchronization and natural human movements.

The voice: it turns out that, at least for you and me, we won't be able to detect a deep fake-created voice. But a machine learning model will spot the fact that it's been created by a deep fake model, because our vocal tracts have all these irregularities, and the model will be able to pick that up. There are natural movements that models still have difficulty reproducing today. Their improvement will continue, and we'll just have to continue to find innovative ways to discover them.
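As an illustration of the blinking cue Stuart describes, here is a minimal sketch of a blink-rate check using the classic eye-aspect-ratio (EAR) heuristic. The landmark ordering, thresholds, and "suspicious" cutoff are assumptions for illustration, not a production liveness detector.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, ordered corner-to-corner.
    Classic EAR: vertical eye openings divided by the horizontal width."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_per_frame: list[float], fps: float,
               closed_thresh: float = 0.2) -> float:
    """Blinks per minute, counting each dip of EAR below the threshold."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(ear_per_frame: list[float], fps: float) -> bool:
    """People typically blink many times a minute; near-zero is a red flag."""
    return blink_rate(ear_per_frame, fps) < 2.0
```

In a real system this would be one weak signal among many, fused with lip-sync and voice analysis rather than used on its own.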

What is the impact of all this on consumers?

It can influence their buying behavior. It can put them in great difficulty financially because their reputation could be destroyed. Their financial well-being could be destroyed. Their relationships with their family members could be destroyed. Without being too alarmist, it can have significant impacts on an individual or a community with some of the deep fake attacks.

I don't know if you are familiar with the Zelensky video, but if the Ukrainian troops had believed that the video was real, it might have caused a different outcome early in the war between Russia and Ukraine. I think it's a real threat to our society.

Clearly, there are things that businesses can do, because they have the resources that they can choose to use to protect their company, but how can I, as a consumer, know if I'm seeing a video, whether it's a deep fake or not?

Most of us have good intuition. I think if somebody's trying to sell you a bill of goods, your spider senses should go up. I think the first thing is listen to your inner voice and go check a second source before you do anything such as transfer money. I think there are technologies that you can also apply.

“Most of us have good intuition. I think if somebody's trying to sell you a bill of goods, your spider senses should go up. I think the first thing is listen to your inner voice and go check a second source before you do anything…” - Stuart Wells

Most banks, most financial institutions, have the ability for you to use two-factor authentication. You don't have to, but I think where two-factor authentication is available to you, you should do that. And I think just being vigilant, just checking your bank account.

Most financial institutions, if someone tries to take something from your account or from your credit card, have hotlines, and they also have the ability—as in your case, as you mentioned earlier—to quickly shut down accounts and shut down difficult situations for your safety.

My wife and I were talking about this a while back. I had done an episode on virtual kidnapping and how easy voice cloning was. We even talked about having a kind of two-factor authentication between my wife and me for when we're on a phone call and one of us is making an unusual request or saying something odd.

Again, if something seems untoward, I would hang up and call them back. You might not be talking to your wife. You might be talking to some machine learning system or a fraudster on the other end of it. There's basic security hygiene, let me just put it that way.

With respect to the deep fakes, do you know how much audio or video is needed in order to create something that is maybe not machine learning-passable, but to the untrained eye is passable?

In what sense? There are tools out there. You can do a search for “top 10 deep fake software tools.” It will bring up these tools and tell you which ones are free. You can go to the website and, as I said, be up and running within 10–15 minutes.

You don't have to be a programmer. Anybody with just good common sense can use these tools, because the creators of the tools have made them so easy to use. Of course, they think it's a game, but the fact of the matter is people are using them for nefarious activities. It truly is very easy.

There are things you can do to spot it. For example, there's the shape of your head, and there's the shape of my head. The way these tools work, they will morph a different face on, but it turns out the shape of the outside of your head will be maintained.

There are techniques that we can use. Even though, from a deep fake or face morph perspective, it's very difficult to tell it's a deep fake or face morph, there are other techniques we can use, such as the background or the shape of your head: if we see repeat fraud, we can correlate the different images.

That's interesting. I imagine even if you're trying to do real-time deep fakes, that if the scammer has a substantially different size of head or proportions, that starts to make the software not work quite as well, I assume.

That's correct. Again, if you look on the web, there are lots of examples of body-head ratios where, because of the way the software generates the image, you can actually spot it. It comes down to very fine detail: around your eyes, for example; in your case, the beard; also the lapels. There are areas where, if you know what to look for, you can spot these aberrations.

There's a nature to the way these models work, as I mentioned earlier. If you can see the earlobes and the lower jaw, you can spot the fact that the model's not working properly—there's a synchronization issue. The same goes for, as I mentioned earlier, lip movement and facial muscles. You don't always need a machine to spot deep fakes or face morphs.

Thinking back to some of the early Obama deep fakes. Sure, yeah, it looks like him, but the shading of the skin color as he's moving doesn't look right. I think almost every deep fake I've seen, the person is looking straight at the camera, doesn't turn from side to side, and doesn't have those natural head movements when they talk the way that you and I might.

Right. The earlier versions of deep fake software had these problems. More recently, they're doing a better job of pose, for example. Again, fraudsters make mistakes. One of the ways we caught a fraudster was that they created a deep fake, and then they took the deep fake image and put it into the holder image as well. If you've got an identical pose in both, that's almost an impossibility. And there are security features on the document.

It's not just a case of taking a deep fake and putting it on the document. There's something called microprint where they put the microprint over the face. Some fraudsters forget to mimic the microprint. Fraudsters are very innovative. Some of them are incredibly creative, but they make mistakes, and that's what we have to rely on.

Hopefully they're making mistakes more frequently than they're not making mistakes.

Yup.

Is there anything about this cat-and-mouse game with the technology that keeps you awake at night?

Yeah, it keeps me awake both on the positive side and on the negative side. On the positive side, fraudsters, yes, they may work in gangs or in groups, but you've got whole nations creating security features or creating defenses that help us prevent bigger and bigger or high-velocity attacks.

On one side, the good news is whether it's the banking community or whether it's consortiums that specialize in fighting fraud, I think there's just a general community of people trying to stop fraud. Again, we're trying to protect whether it's an individual, a business, or something larger.

On the things that keep me awake, I see different fraud attacks, creative fraud attacks, pretty much on a daily basis. For the most part, we're able to figure out what's causing that fraud attack and build something to stop it. I think the thing that keeps me awake at night is where there's a fraud attack vector that is going to take us months and months to solve. That's what keeps me up at night.

Interesting. Is there a particular document that is used for fraud more often than other documents?

Obviously, there's a preference for hitting the UK and the United States, but we see fraud all over the world—whether it's Brazil, Mexico, Colombia, or Hong Kong. I definitely would say that we see a lot of US and UK documents.

The good news is the security features are a real challenge for fraudsters. We talk about fraudulent abuse: for example, fraudsters will change PII fields on a document, or they'll change the holder image on a document. These particular fields and images are protected by multiple security features.

For example, on a California driver's license, the holder image is protected by a hologram, microprint, and tactile text—three security features on that one document. Also on that document, your initials and your year of birth are repeated seven different times. Fraudsters have to be pretty thorough to manipulate the entire document and the holder image. That takes a lot of science and knowledge. There are over 116 different security features across the European documents. So there are also mechanisms we can use to spot fraudulent abuse of the documents.

It makes me start thinking of interesting ways to create security features. Something cryptographic, where you're taking some field or some data, applying some cryptography to it, and printing that on the document, because then the person has to know how to reverse engineer the cryptography.

Right, assuming it can be reversed. As you well know, there are one-way cryptographic techniques. Even if they could figure out that such a code was there, it would be almost impossible in some cases for them to break it. I find it fascinating. As you said opening the segment, this is a hundred years of security feature evolution, from banknotes to the ID documents that we see today.
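As a rough illustration of the idea Chris floats here—binding printed fields to a keyed code—this is a minimal sketch using an HMAC. The key name, field layout, tag length, and sample values are assumptions for illustration; it is not the scheme any real issuer or Jumio uses (ePassport chips, for instance, rely on digital signatures rather than a printed code).

```python
import hashlib
import hmac

# Hypothetical secret held only by the issuing authority.
ISSUER_KEY = b"example-issuer-secret"

def field_tag(name: str, dob: str, doc_number: str) -> str:
    """Derive a short code from the PII fields. Printed on the document,
    it can later be recomputed to detect tampering with those fields."""
    message = f"{name}|{dob}|{doc_number}".encode()
    return hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()[:12]

def fields_untampered(name: str, dob: str, doc_number: str, printed_tag: str) -> bool:
    """Recompute the tag from the fields as read off the document and compare."""
    return hmac.compare_digest(field_tag(name, dob, doc_number), printed_tag)

# Made-up example: a verifier reads the fields and the printed tag off the card.
tag = field_tag("Jane Sample", "1970-01-01", "D1234567")
print(fields_untampered("Jane Sample", "1970-01-01", "D1234567", tag))  # True
print(fields_untampered("Jane Sample", "1980-01-01", "D1234567", tag))  # False, DOB altered
```

The point of the one-way construction is exactly what Stuart describes: even if a fraudster knows the code is there, they cannot forge a new one for altered fields without the issuer's key.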

Do you see much growth in biometrics in terms of retinal, fingerprints, and things like that on documents? Do you see it moving in that direction?

Yeah. Obviously, if it's just a face or a fingerprint, the term for that is a single biometric—unimodal. The industry trend is towards multimodal biometrics and then something called multimodal liveness. A simple example of multimodal biometrics would be to use the voice and the face, to use the blood vessels in your eye, the sclera, or to use the retina, although that requires a special device today.

The combination of biometrics and then coupled with liveness: is this a real human being? Can you see movements in the eye, movements in the face? Can you see texture movements? Can you see the blood flow, believe it or not, in your face? They're all techniques that you can put together to make it easier to verify an individual, easier and more reliable to spot that it's a live individual.
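To sketch how multimodal scores and a liveness check might be combined, here is a minimal, assumed weighting scheme. The weights, floor, and threshold are illustrative values, not anything Jumio has published.

```python
def fuse_scores(face: float, voice: float, liveness: float,
                weights: tuple[float, float, float] = (0.5, 0.3, 0.2),
                liveness_floor: float = 0.6) -> float:
    """Weighted fusion of per-modality match scores (each in [0, 1]).
    A failed liveness check zeroes the result regardless of the matches."""
    if liveness < liveness_floor:
        return 0.0
    w_face, w_voice, w_live = weights
    return w_face * face + w_voice * voice + w_live * liveness

def accept(face: float, voice: float, liveness: float,
           threshold: float = 0.75) -> bool:
    """Accept the verification only if the fused score clears the threshold."""
    return fuse_scores(face, voice, liveness) >= threshold

print(accept(face=0.92, voice=0.85, liveness=0.9))   # True
print(accept(face=0.95, voice=0.95, liveness=0.3))   # False: liveness gate fails
```

The design choice worth noting is the liveness gate: a perfect face and voice match is worthless if the system cannot establish that a live human produced them.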

This is so fascinating. I can see why you like this field.

Yeah, I'm an engineer, a scientist. The fraudsters are always challenging us to build better systems. That's what my team and I enjoy.

Thinking of building better systems—is it resulting in new technology, whether for creating microprint or for new biometric approaches, in the sense of, “Hey, let's just create a whole new way of providing authentication”?

Most definitely. I think, at least within Jumio, the thing that we're particularly proud of is that we started with the notion that documents have an anatomy. Actually, if you look at the American Association of Motor Vehicle Administrators, they have a specification. It's well over a hundred pages long, and it describes the layout of a document.

“But it turns out that if you think about it, documents have an immune system. The immune system is the security features that are there to protect the document.” - Stuart Wells

The layout is the anatomy of a document. Where's the holder image? Where's the ghost image? But it turns out that if you think about it, documents have an immune system. The immune system is the security features that are there to protect the document.

Documents have an anatomy. Documents have protection mechanisms. We're able to both verify the documents, but we're also able to detect literally hundreds, if not thousands, of different attack vectors. That's one thing that came from the document side.

On the authentication side, if you think about what constitutes an identity, there's obviously the information: your first name, your last name, your Social Security number, your address. But actually, you should think about it in a much broader sense. There are probably more indicators of who Chris Parker is by the use of your cell phone than your biometrics.

Combining the physical biometrics, which we're talking about now, with behavioral biometrics—the way you lift your hand, the way you swipe, the way you use the phone—future-proofs your identity much more. And then there's real-time tracking of your bank account. Combining all that information provides a particularly unique way to identify Chris Parker, and that's how we can provide better protection.

I've heard that someone had written an authentication methodology based on the fact that everybody types differently. The system would prompt you to type a particular phrase, and it's going to be different every single time it asks you. It just knows who you are by the way that you type, which, to me, is interesting—the idea that everybody has a unique way of typing.

That's a class of behavioral biometrics. The same is true of the way you tap the digits on your phone, the way you swipe the phone, the way you use your right hand to bring the phone up. By collecting all that information and fusing it together, you get a much more accurate behavioral representation of Chris. If you combine the behavioral biometrics with the physical biometrics, your device information, and other signals, it gives you much higher confidence that you are who you say you are.
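Here is a minimal sketch of the typing-rhythm idea, assuming a fixed enrollment phrase so the timing vectors line up; real keystroke-dynamics systems handle variable text and far richer features, and the tolerance here is an arbitrary illustrative value.

```python
import numpy as np

def timing_profile(key_down_times: list[float]) -> np.ndarray:
    """Inter-key intervals (seconds) for one typing of the phrase."""
    return np.diff(np.asarray(key_down_times))

def matches_enrolled(sample: list[float], enrolled: list[list[float]],
                     tolerance: float = 0.35) -> bool:
    """Compare a new typing sample against the mean of enrolled samples,
    using mean absolute deviation of the intervals as a crude distance."""
    enrolled_profiles = np.stack([timing_profile(s) for s in enrolled])
    mean_profile = enrolled_profiles.mean(axis=0)
    distance = np.abs(timing_profile(sample) - mean_profile).mean()
    return distance <= tolerance

# Toy usage: timestamps (seconds) of key presses for the same short phrase.
enrolled = [[0.0, 0.21, 0.45, 0.60, 0.95],
            [0.0, 0.19, 0.48, 0.58, 0.99]]
attempt = [0.0, 0.22, 0.44, 0.61, 0.93]
print(matches_enrolled(attempt, enrolled))  # True for a similar rhythm
```

The signal is weak on its own, which is why, as Stuart says, it gets fused with physical biometrics and device information rather than used as the sole factor.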

It becomes four- or five-factor authentication versus two.

Exactly. You're an expert in security. This is not a new concept. Most security systems are layered. The same thing is going to be needed to define and protect your identity.

If you were to say, “Hey, we're going to launch three-factor authentication,” what would the third factor be? Not for you personally, but what do you see that could actually be implemented as a third factor of authentication?

Firstly, let's start with the user. Anything that you do that just adds friction to the user is going to be a non-starter, particularly for the younger generation. They want as fluid, smooth, and as fast an experience as possible.

We talked about multimodal biometrics, so you can use the voice, you can use the face, you can use the behavioral biometrics, and then you can use information associated with the phone. Assuming the user wants to allow you to use this information, you can use things like the location of the phone.

Again, there's something called continuous authentication, where it's not just an individual session—you're actually collecting that information as a continuous process. I wouldn't think of it as three-factor authentication. If you want to think of three factors, I would use voice, I would use face, I would use something behavioral, and then the device itself. Again, I think you're going to get a much more secure authentication experience.
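A minimal sketch of the continuous-authentication idea: confidence in the session decays over time and is topped up whenever a fresh signal (a face match, a behavioral check, a device signal) arrives. The half-life, weights, and threshold are assumptions for illustration only.

```python
import time

class ContinuousAuth:
    """Session confidence decays over time and is boosted by new signals."""

    def __init__(self, half_life_s: float = 600.0):
        self.half_life_s = half_life_s
        self.confidence = 0.0
        self.last_update = time.monotonic()

    def _decayed(self) -> float:
        """Confidence after exponential decay since the last signal."""
        elapsed = time.monotonic() - self.last_update
        return self.confidence * 0.5 ** (elapsed / self.half_life_s)

    def observe(self, signal_score: float, weight: float = 0.5) -> None:
        """Blend a new signal score (0-1) into the decayed confidence."""
        current = self._decayed()
        self.confidence = (1 - weight) * current + weight * signal_score
        self.last_update = time.monotonic()

    def is_authenticated(self, threshold: float = 0.7) -> bool:
        return self._decayed() >= threshold

session = ContinuousAuth()
session.observe(0.95)            # e.g. a strong face match at login
print(session.is_authenticated())  # True shortly after a strong signal
```

The appeal, per the conversation, is that trust is never a one-time event: a phone left unattended gradually loses its authenticated state unless fresh behavioral or biometric signals keep confirming the same user.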

The challenge is always going to be the friction, though.

Right, and everything I described can be done. People are now very used to doing a selfie. I personally prefer voice and face, because it's so much more natural for me just to say “open sesame” or something similar, and also to use the behavioral biometrics. There is a lot of accuracy associated with using these different biometrics coupled with a device.

Do you see a time horizon when this is going to shift from our current methodology of two-factor authentication?

It's happening today. I was watching a video on behavioral biometrics, and there are many companies selling behavioral biometric solutions today. The speed of innovation in mobile devices—the accelerometer, the gyroscope, the magnetometer—and the sensitivity and amount of information you get from a phone are causing change as we speak.

I don't think it's just one instance in time; I think it's continually changing. We have the benefit of being able to use all the information coming from the phone to protect ourselves.

Everyone's going to be required to have a smartphone then?

No, you don't have to have a smartphone, but most people do, as we all know.

I don't know that I see a person walking down the street that doesn't have a smartphone with them.

Yes. If you're in Japan, you walk down, it seems like everybody is walking down with the mobile phone to their ear.

Phones are very ubiquitous these days. As we wrap up here, is there any parting advice that you have for businesses and for consumers?

I think the first thing is working with people who understand the threat vectors and their sophistication—you've got your own business to run. Relying on experienced professionals, whether it's Jumio or another vendor, to come in and help you build your security infrastructure and your customer onboarding, that would be one thing.

I think the second thing I would suggest is to help develop a culture that understands the security challenges and security threats. That's an ongoing educational endeavor. Those would be the two things.

Awesome. Stuart, if people want to find you online, where can they find you?

My email is stuart.wells@gmail.com. If anybody wants to send me a question or a request, please do so.

Do you publish any white papers about the research that you're doing?

Yeah. I speak at various conferences, and the research is generally discussed there. We file patents, and the technologies that we've invented are available online through a simple Google patent search, too. Our marketing and PR teams also make materials available.

Awesome. We'll make sure to link to those. Stuart, thank you so much for coming on the podcast today.

I really enjoyed talking to you, Chris. Thank you for having me.

 
