With all of the advances in AI and the metaverse, people will have to decide how to embrace the technology, because it will become unavoidable. Today's guest is Dr. Mark van Rijmenam. Mark is The Digital Speaker, a leading strategic futurist who thinks about how technology changes organizations and society. He is an international keynote speaker, five-time author, and entrepreneur. He is the founder of Datafloq and the author of the book Step Into the Metaverse. His mission is to help organizations and governments benefit from innovative emerging technologies while ensuring they are used ethically and responsibly. Recently he founded the Digital Futures Institute, which focuses on ensuring a fair digital future for everyone.
Show Notes:
- [1:03] – Mark shares his background and current endeavors in the field.
- [2:40] – Using ChatGPT, Mark wrote a book, and you can't really tell it was AI-written until the end.
- [5:02] – After a while, the patterns are noticeable.
- [6:49] – Start understanding the metaverse by understanding what it is not.
- [7:40] – “What the metaverse is is when the physical and the digital world converge. It’s nothing more than the next iteration of the internet. We move from a 2D internet to a 3D internet.”
- [8:49] – When the digital comes into the physical, we get augmented reality and it has very interesting uses.
- [10:37] – The problem with the next iteration of the internet is how to behave in this new world.
- [12:32] – Right now, the newer generation has less distinction between the digital and physical world.
- [15:03] – Some people’s behavior is unacceptable online and some don’t realize how public their actions are.
- [16:14] – We already see a problem with people not recognizing the distinction between digital and physical. Some are already addicted to social media.
- [19:11] – With the heavy use of digital content with children, it will become harder for the next generation to separate the two.
- [21:33] – The transition is so fast that we don’t know what the implications are.
- [23:18] – Education is crucial but no one is teaching us how to use this emerging technology.
- [25:18] – Children and adolescents are particularly impacted by this transition.
- [27:23] – When earning his doctorate, Mark had to have permission to do his research for ethical reasons.
- [28:57] – We don’t have to use social media and emerging technology as it becomes available.
- [31:20] – How can education keep up with the speed of change?
- [32:30] – ChatGPT shouldn’t be banned. You have to embrace and adopt new technology so we can teach students how to use it.
- [34:48] – We are moving into a voice era. What does that mean, and how is it different from how we interact with the internet now?
- [37:20] – Mark describes what is necessary for this to be successful. It’s possible but it requires a lot of work. We should act now.
- [38:49] – There are things that need to change, specifically in education, verification, and regulation.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
- Podcast Web Page
- Facebook Page
- whatismyipaddress.com
- Easy Prey on Instagram
- Easy Prey on Twitter
- Easy Prey on LinkedIn
- Easy Prey on YouTube
- Easy Prey on Pinterest
- Datafloq
- Step Into the Metaverse by Mark van Rijmenam
Transcript:
Mark, thank you so much for coming on the Easy Prey Podcast today.
Thanks for having me, Chris. It's great to be here.
Can you give me and the audience a little bit of background about who you are and what you do?
Sure. I am Mark van Rijmenam. I'm a strategic futurist, which means that I think about emerging technologies and how they will change organizations and society. That ranges from big data to blockchain, to AI, to the metaverse, to generative AI and synthetic media, everything that's emerging and going to change our world. I'm also a keynote speaker, so I help Fortune 500 companies understand these difficult technologies and what they mean for them.
I've written five books, of which my fourth, Step Into the Metaverse, is all about the metaverse. My fifth book is called Future Visions, which was written in five days with ChatGPT. I think I was the first one in the world to publish a book written with ChatGPT. By now there are dozens of them available, but I hold the claim to fame of having written the first.
I run a media platform called Datafloq, which is all about emerging technologies. I've also just started a research institute focused on elevating the world's digital awareness to ensure a thriving digital future, because I'm actually quite worried about the digital future we're heading toward. I think education, based on in-depth research, is required to do so. I've been doing this myself for over a decade. I practice what I preach: I try to use these technologies myself so I can help others understand them as well.
Awesome. I have to ask you, the book written by ChatGPT in five days, to anyone reading it, does it look like it was written by ChatGPT?
It doesn't, until you reach the end, when you start to see the patterns in it. I think it's a good book, though definitely not as good as the books I wrote myself; it's a lot flatter. All in all, I think it's a pretty OK book.
The process was actually quite interesting. ChatGPT came to market, I think, on December 1st. I started writing on December 5th, and I published on December 12th. The whole thing, including editing and everything, took a week.
What I did is ask ChatGPT, "OK, I'm going to write a book about technologies and how they will define our future. Which technologies are going to define our future?" It came back with several technologies, and I added some of my own.
For each technology, I asked, "Which questions do I need to ask you in order to answer how this technology is going to define us?" It gave me a list of questions, and I used those questions as prompts to have a conversation, sometimes asking follow-up questions. I took the answers ChatGPT returned and put them into the manuscript.
I didn't change the wording or anything like that. I might have moved some sentences up or down. Of course, I removed the factual errors, of which there were quite a few, but that's about it.
I edited it with Grammarly, so I didn't do manual or human editing myself; I used AI for that as well. When I was finished, I asked ChatGPT to give me a title, and it came up with Future Visions. I said, "Give me a subtitle, give me a description for the cover." It gave me a description, which I entered into Stable Diffusion, and that gave me the design for the cover of the book. Then, to finish off, I asked, "Now write me a review." It wrote me a raving review of how good the book was. It's pretty good.
I would hope ChatGPT would give itself a good review.
Yeah, you would hope so. It did, I can tell you.
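For readers who want to try a similar workflow, here is a minimal sketch of the question-driven drafting loop Mark describes above. He used the ChatGPT web interface; this sketch assumes the OpenAI Python SDK instead, and the model name, prompts, and technology list are illustrative placeholders rather than his actual setup.

```python
# Minimal sketch of the drafting loop described above, assuming the
# OpenAI Python SDK (Mark used the ChatGPT web interface). The model
# name, prompts, and technology list are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    """Send the running conversation and return the assistant's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=messages,
    )
    return response.choices[0].message.content


technologies = ["the metaverse", "generative AI", "blockchain"]  # plus your own
manuscript = []

for tech in technologies:
    # Step 1: ask the model which questions to ask about this technology.
    conversation = [{
        "role": "user",
        "content": f"I'm writing a book on how {tech} will define our future. "
                   "Which questions should I ask you to cover it well?",
    }]
    questions = ask(conversation)
    conversation.append({"role": "assistant", "content": questions})

    # Step 2: feed each question back as a prompt and collect the answers.
    for question in filter(None, map(str.strip, questions.splitlines())):
        conversation.append({"role": "user", "content": question})
        answer = ask(conversation)
        conversation.append({"role": "assistant", "content": answer})
        manuscript.append(answer)  # fact-check before keeping, as Mark did

print("\n\n".join(manuscript))
```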
We'll get to the main topic here in a bit. Did you end up giving it any instructions on voice, tone, and personality?
No, none of that. We basically had a conversation. I didn't give any instructions on tone of voice or ask it to mimic my tone of voice. That was quite interesting. The more I did it, the more I started to see patterns: it used the same pattern to answer the questions, just replacing one technology with another and swapping some future outcomes for different ones.
As mentioned, it sometimes came up with a lot of bollocks. At some point, I asked, "How did AI and robotics converge?" It stated, "Well, in the 1980s, robotics and AI converged to create self-driving cars that changed transportation." If only that had happened, the world would have been so much better now. But unfortunately, that's not the case. We don't live in a parallel universe, and we still have to deal with traffic jams.
Apparently, the parallel universe idea actually works really well when discussing the metaverse, doesn't it?
It does, absolutely.
Let's talk about the metaverse and the risks it presents to us. There are probably some risks today, but they're probably different from future risks. First, let's talk about what the metaverse is.
Sure. "What is the metaverse" is always a bit of a loaded question, because everyone has a different perspective on what the metaverse is. For my book, Step Into the Metaverse, I did about 250 in-depth interviews and surveys, and I got 250 different definitions of what the metaverse is, which shows you that it is a difficult and very abstract concept to grasp.
First, I'll say what the metaverse is not, because there are a lot of misconceptions about the metaverse: that the metaverse equals gaming, that it equals virtual reality, or that it equals Web3. I think this is not true. The metaverse can be all of the above, but it doesn't have to be.
What is the metaverse to me? The metaverse, to me, is where the physical and the digital worlds converge. The physical moving into the digital, the digital moving into the physical, and creating this physical-digital experience, where we have an immersive internet, basically. The metaverse is nothing more than just the next iteration of the internet.
We've moved from the very first web to the social web, to the mobile web. And now we move into the metaverse, where we move from a 2D internet to a 3D internet: from having to make a conscious decision to go on the internet, using your smartwatch, your phone, or whatever, to an internet that's as pervasive as the air we breathe, where we are part of the internet. We are in the internet. That's where we're moving.
There's a lot of information in this little description. To give you additional context, if you move from the physical into the digital, you could argue that that's what we're doing at the moment. You are physically on the West Coast of the United States. I am physically on the East Coast of Australia, and we are converging. We're meeting in the digital world to have this conversation.
You could argue this is part of an early version of the metaverse. It also means, for example, that we have digital twins: a digital replica of a physical asset, a system, or a system of systems that we can interact with in the digital world, either just to monitor what's going on or to make changes in the digital world that are reflected in the physical world.
We can access the digital twins, for example, using a traditional tablet, a smartphone, a computer, or we can use virtual reality or augmented reality devices. They are just channels to access the metaverse. They're not the metaverse.
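To make the digital-twin idea a bit more concrete, here is a toy sketch of a replica that mirrors a physical asset's state and pushes changes back to it, matching the two directions described above. All of the names and the device callback are hypothetical illustrations, not any real platform's API.

```python
# Toy sketch of a digital twin: a digital replica that mirrors a physical
# asset's state (physical -> digital) and pushes changes back to the
# asset (digital -> physical). All names and the device callback are
# hypothetical illustrations, not a real platform's API.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class DigitalTwin:
    asset_id: str
    state: Dict[str, float] = field(default_factory=dict)
    # Hypothetical hook that would forward commands to the real device.
    send_to_device: Callable[[str, float], None] = lambda key, value: None

    def ingest_telemetry(self, reading: Dict[str, float]) -> None:
        """Physical -> digital: update the replica from sensor readings."""
        self.state.update(reading)

    def set_target(self, key: str, value: float) -> None:
        """Digital -> physical: a change made on the twin is pushed
        back onto the physical asset."""
        self.state[key] = value
        self.send_to_device(key, value)


# Usage: monitor a pump's telemetry, then slow it down from the twin.
pump = DigitalTwin("pump-42",
                   send_to_device=lambda k, v: print(f"-> device: {k}={v}"))
pump.ingest_telemetry({"temperature_c": 78.5, "rpm": 1400.0})
pump.set_target("rpm", 1200.0)  # would adjust the real pump's speed
```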
The other part is where the digital moves into the physical, which is of course very much related to augmented reality, where we add a layer on top of the physical reality. I actually think that that's a lot more interesting because we have a lot of opportunities to add nearly infinite layers.
We can use it for entertainment, where we have a flying dragon flying above the Sydney Opera House that you can only view with your phone or your smart glasses. We can use it for any enterprise metaverse to have an overlay of a machine that we need to fix while the mechanic is not there. We can fix it ourselves because the AR glasses will tell us exactly what to do. That's, in a nutshell, what the metaverse is—the convergence of the physical and the digital creating this 3D immersive internet.
I have always envisioned the day that I can put on a pair of glasses that look like regular reading glasses, and as I walk up to people in a group, they tell me who each person is, whether I've had a conversation with them before, what that conversation was about, their spouse, their kids: feeding me information from interactions I've already had. I'm not really interested in walking up to random people and knowing everything about them. But for the people I've had interactions with, it would help stimulate my own memory recall.
I'm pretty sure it will happen. Whether that's good or bad remains to be seen.
The reality is, with VR and the experiences people are having online in Facebook's metaverse, there are already some horror stories of how people are behaving on these platforms. Where do you see the risks starting to fall into place?
The problem with the metaverse, or the next iteration of the internet, is that we end up in a world where we're creating a hundred or a thousand times more data than we do today, a world that we do not necessarily understand how to behave in. We don't understand how to behave in today's digital world, let alone tomorrow's. We can expect that all the problems and all the risks that we see in today's internet will be extrapolated in tomorrow's internet, plus a few more.
I think that's the danger I see happening. We are used to thinking that we can shout anything we want on social media, and we bring that not-so-good behavior to the metaverse, where we think we can harass someone in virtual reality because we think it doesn't have any effect on him or her. But it does, because if virtual reality is done correctly, with low latency, high frame rates, and high-quality imagery, our brain can no longer make a distinction between physical and digital, between real and fake. If a person is being harassed in virtual reality, to that person it feels as if he or she is being harassed in the physical world.
I think that's the problem we see happening in the digital world. We humans think they are two different things, that we are invisible or not accountable for our actions if they happen in the digital world, which to me is a whole lot of bollocks.
It's actually quite interesting, because the next generation already sees no distinction between the physical and the digital. To them, the physical world is as important as, or less important than, the digital world; there is no distinction between digital and physical reality. Maybe we have a bit of an opportunity there to change how we behave in the digital world, but at the moment, unfortunately, we don't. We still behave pretty poorly in the digital world.
I was wondering. You and I are of a certain age, and we have a perspective on this. Our view of the internet is that we grew up without the internet, at least in our “formative years.” It was not commonplace, yet those that are in their 20s or 30s now, the internet has always been there. Online has always been there. For some people, Facebook has always been there. Maybe they're not on Facebook anymore.
That must be so terrible.
I can see where you and I can have a very distinct line: this is online, this is offline; this is the real world, this is the internet. But for people growing up younger than you or I, those things are a lot more blended.
Yes, I think that's definitely true. I think our generation, people who have seen the transition from analog to digital, also doesn't know how to behave in a digital world. I recall, a couple of years ago, someone published a pretty nasty review of my book on Amazon that had nothing to do with the book. I managed to find the person online, because everything is public, as you know.
I sent the gentleman a very nice email asking why he did that. He replied, "Oh, I'm so sorry. I didn't know this was public," and then he removed it. I was happy he removed it, because the review didn't make sense. It's like, hello, would you say the same thing if you were standing in front of me in the physical world? No, you wouldn't.
I think people just have this mask in front of them and think they are invisible and unaccountable if it goes through a computer. For me, it just doesn't make sense, but that's the reality we live in.
To me, the best way to behave is to always assume that whatever you write is going to be public and people are going to see it; whatever you say, whatever you do, is going to be seen. But if younger people are not seeing this difference between the physical world and the internet, is that more of a problem for that generation or less of a problem?
For you or I, it's easy to turn off the phone for a day or two, or stay off social media for a couple of days if something annoys us. But for them, so much more of their life and their identity is who they are online.
It can go both ways. Yes, if in digital reality your digital identity is as important as or more important than your physical identity, you would hope people would pay more attention to the reputation of that digital identity and not shout anything and everything at anyone in the digital realm. At the same time, it's also very difficult to disconnect from digital reality, because to them it is their physical reality, and we can't disconnect from our physical reality.
That also has a downside. There is plenty of research on all the downsides of people being fully addicted to TikTok, Instagram, et cetera, and I think there are a lot of problems there. It could mean that people become more aware of what they do and say online, although that isn't necessarily the case, because people publish all kinds of stupid things online, which is their right, obviously. I think it goes both ways.
You mentioned addiction. We definitely see a lot of good, quality studies coming out showing that the amount of time teenagers spend on social media has a fairly direct correlation with depression, suicidal ideation, and a lot of other negative real-world outcomes. Do you see all these problems becoming exponential in the metaverse?
I think so. TikTok already deserves the hysteria around it, and an immersive TikTok, the equivalent of TikTok but in the metaverse, I think will be really, really problematic. TikTok is just absolute crap, pardon my French. There's so much going on there that is really affecting our children, and they're not equipped to deal with it because their brains are still growing.
In the digital world, we don't have the warning signals that something is going on. Children are just being dragged into this rabbit hole of videos, and you quickly go down a path you really don't want to go down. I think that's really problematic. We should protect our children, because they simply can't protect themselves; their brains are not ready.
I think our brains don't become fully formed until the age of 25 or so. How can we expect a 13-year-old to deal with the flood of TikTok videos that he or she is dragged into by the algorithm, going down a path you really don't want your 13-year-old kid to go down?
That's a challenge I see with a lot of this. It's not even just social media; media in general has become very much an echo chamber, presenting a specific view of "this is what life looks like," when for 99% of us, that's not what life is.
Yeah, that's one thing. The other thing I would argue is that parents are also preparing their children in a very negative way. The number of times I see a pram with a phone stuck in front of the eyes of a one-year-old watching a video, where the one-year-old can't look anywhere else and can't do anything else: how do we expect that child to grow up and not be addicted to the phone?
It always breaks my heart when I see parents do that. Why did you have children in the first place if you just give them the phone so that they're quiet? I really don't understand it. Yes, children are hard work, but if you're not up for that, you shouldn't have children.
We do that because we are not fully aware of the implications of what we're doing, because nobody has told us what the implications can be if we go down that route. To a certain extent, you can't blame anyone, because nobody has been taught. We have been sleepwalking into this digital age, helped by big tech, who created really easy-to-use, seamless entertainment and tools, because we humans tend to be lazy. If we have an easy-to-use tool, which happens to be free, then yes, we're going to use it.
Big tech knows very well that their technology is bad for humanity, because they prohibit their own children from using it. I do think there are tremendous risks here, and I think we should have more education on how to deal with these risks.
Do you think some of this is because our transition from horses to cars took a fair amount of time, while, in my perception, the rate at which technology is introduced and implemented in our lives just keeps getting faster and faster? Is part of the reason we haven't built that awareness that we can't go to our parents and say, "Hey, how did you help your kid not be addicted to an iPhone," when iPhones didn't even exist when we were kids?
I think that's a very valid point. We only have to look at ChatGPT: a hundred million users in two months' time. That's unheard of. All of a sudden, everyone is using this technology, which is a technology that's still being developed. It's not ready. We know it's not battle tested. It's being battle tested by a hundred million people, of which a large chunk are children.
I don't think that's necessarily a good idea. I read this morning that OpenAI now really wants to go down the route of building artificial general intelligence, but will do so in a more secure way. OpenAI was meant to be open source and not-for-profit, but then Microsoft came by and offered them a big chunk of money. That's how it always goes.
We, most of the time, start with good intentions. We forget about the unintended consequences, then somebody comes by with a big fat paycheck, and then we happily accept that. We just ignore what potential monster we have built.
How do we go about building that digital awareness? First and foremost for ourselves, and then for the younger generation.
That's what I'm trying to do with the research institute I founded: focus on education, to help people understand what is happening. Nobody really understands how the metaverse will have an impact or what ChatGPT really does. We are all guinea pigs here, trying to understand what this all means. Just as we have to learn to drive a car, I think it would be useful that we learn how to be online, but nobody's teaching our children.
I didn't have any classes about how to behave in a digital world, not even about ethical behavior generally. I don't have school-going children at the moment, but I'm not sure children are being taught how to behave in a digital world even now. You can't blame anyone, because the parents don't know and the teachers don't know, but we really need to do that. I think it's problematic that we don't.
I think the challenge is, you made the analogy of learning to drive a car, and I think it's one level more complex than that: it's learning to drive a car while the engineers are still figuring out how to build the car and how it actually works. If you think it's annoying that Tesla changes the UI and your car behaves a little bit differently, this is an order of magnitude more complex than that.
You're driving, all of a sudden, your steering wheel moves to the right.
You turn left and the car goes right, all of a sudden. It's not like ChatGPT is a finished product; this is just research, and we're just playing with it. It feels the same way with VR, AR, and the metaverse, because they're emerging products or concepts where no one's thought about the rules yet; they're still learning how to build the darn thing.
Yeah, and I think that's pretty problematic. It's weird that we allow this to happen. We shouldn't stifle innovation; we should embrace it. I'm all for that approach, but we should still think about what we're doing here.
A seasoned researcher who understands how to use new technology might be able to explore it in a way that helps us understand it, but a 13-year-old can't. He or she doesn't know how to deal with new technology. Would we give a chainsaw to a 13-year-old without teaching him or her how to use it? I don't think so.
Do you see a future where technology companies need to have, and I hate the idea of government mandates in a sense, some requirement that an ethicist be involved in product design, to think about the consequences for society and the social aspects, not just the technical aspects?
I think so. I'm not a big fan of too much government influence either, but it's not the market that's going to do this; the market has zero incentive. Big tech has zero incentive to incorporate technology like blockchain so we own our own data, and zero incentive to make their products less sticky.
If that's the case, then maybe we should not force organizations to do ABC, but at least force them to have an ethicist, to have a review board, or to have their AI checked by an independent auditor for bias. That, I think, is a good idea. We'd still have a free market, which I think is important, because let's not forget that the government has no clue what's happening either. Having the government create mandates about a topic they don't have a clue about doesn't seem like a good idea to me.
We should require organizations to do more work on this. Let me give an example. I did a PhD at the University of Technology here in Sydney. When I went to do my research, I had to get approval from the ethics board, which is a pretty rigorous process. Only when I had that approval could I do my research. Why can't corporations have such a committee, but then a committee that has actual power, not one that can simply be ignored, as we've seen at Facebook and others?
That's something, I think, that the government can mandate, where we say, "You have to have an ethics committee, and the ethics committee has to have real power. If you don't do that, you get fined 5% or 10% of your annual revenue, or whatever. We still have a free market, and we still allow you to organize it yourself, but you have to have that." There has to be a review process that's actually followed up on. I think that would be very, very important.
That's one angle. The other angle is education: making you and me, the general public, aware that we can also vote with our own data, that we are in control, though it's very, very difficult to exercise that control. We don't have to use Google. We don't have to use TikTok. Yes, they've made it super addictive, and yes, nobody told us not to use it, but we don't have to use it. It requires a lot more education to show that there are other things we can do.
I can see some parallels with ethical product manufacturing, in terms of testing on animals. Companies really didn't change that policy until they faced pressure from the public: "We're not going to buy your products if you test on cute, little, furry rabbits." We're starting to see that on the environmental level as well: "Hey, if you don't treat the environment well, if you're producing 10 billion pounds of paper trash every month when you can do everything electronically, we're just not going to buy your products anymore."
Do you see something like that being the strongest pressure people can apply: "If you don't have an ethicist on staff, listen to them, and put them on your board of directors, we're just not going to buy your products"? There is always going to be someone who buys anyway, but will the squeaky wheel get companies talking about having ethicists on board?
I sincerely hope so. One of the things I want to focus on in my institute is creating some kind of B-corp-style certification, but then D-corp-style, for a digital corporation, so that it becomes attractive for an organization to be D-corp certified, which means that your AI is unbiased, you protect your data, and you don't build technology that's addictive in negative ways. Once people know that corporation A is certified as digitally responsible and corporation B is not, they'll go for corporation A.
It is a very long-term game, obviously. I think that's a direction that would help, but we will still need to educate people on why it's better to go with corporation A and not with corporation B. It needs both. That's why I'm trying to bring it all together to make that happen.
With this drive for digital awareness, how do you keep pace with the technology? Education is one of those fields where it's not like everyone has a textbook that was printed last week; particular sets of books and courses usually have 10- or 15-year lifespans. How do you change that so that we're teaching digital awareness that's current, as opposed to, "OK, kids, there's this thing called Twitter. It's brand new, and it allows you to tweet 140 characters, not 4,000, not 280"? How do we keep education current with what's actually happening in the real world?
By embracing the technology that we're educating about. We have all these fantastic technologies, and we can also use them to educate people about those technologies. Education can use these technologies; we don't have to be afraid of them.
One of the most remarkable things I find is that there are so many schools around the world saying they're going to ban ChatGPT. First, you shouldn't ban it; you should embrace it. You should ask different questions, and you should change your educational system, because the children are going to use it. You cannot ban the future. You have to adapt and adopt new technologies and adapt your curriculum so we teach our children how to use this technology.
There are some schools in the world that do this really well and try to embrace it: you have to use ChatGPT, you have to say that you did, and you have to be transparent about how you used it. It still has to be factually correct, so you still have to do your research. They created assignments around that: if you don't use ChatGPT, you will fail, but if you use ChatGPT and don't disclose it, you fail as well.
You get a different approach, and I think that is very relevant and very, very important. That's how you can change the education system. How do we educate about future technologies? By using future technologies.
Embrace them and figure out where they're good and where they're bad. I've run across the same thing playing around with ChatGPT. Some stuff looks good, until you start fact checking. You realize, “Oh, no. That's an outdated statement. Maybe that was true 10 years ago, but it's definitely not true now.”
It comes up with complete nonsense. At some point, I asked about myself. I've written several academic papers and several books. I asked, "How many books and academic papers has Mark written?" It came back, "Well, Mark has written two books," which is not correct; I've written five.
It also said, "Mark has written three academic papers," which is correct; I have written academic papers that are published in well-known journals. But then it just came up with three random paper titles: Blockchain and XYZ, AI and XYZ, Big Data and XYZ. They're related to the technologies I've researched, but those papers don't exist. It's complete nonsense. It was frustrating.
You and I might be able to see that, but most people don't. There's one more thing I think is important to mention: we are moving into a voice era, a world where we use our voice to ask questions of the internet, and it answers back.
In today's world, if we have a question, we go to Google, Bing, or whatever search engine. We type in the question and get a list of 10 results, plus another 50 pages that we ignore, but at least we have 10 results. We can move through them, and they have a very strong influence on the decisions we make.
Imagine a world where something like ChatGPT replaces the search engine. We don't get those 10 results anymore; you just get one answer. As I said, we are lazy, so 99% of people will trust that one answer even if it's wrong. That's problematic for consumers. It's also problematic for brands, because all of a sudden, instead of fighting for the top 10, you now have to fight for the top one.
You have zero understanding of how that number one answer is selected; it's even more opaque than it is today with search engines. We're moving to a world where big tech becomes even more powerful. As a brand, I really want to be brought forward, so I'm probably willing to pay a ton of money so that when a consumer asks question XYZ, I am recommended and not my competitor. This is all going to be very opaque, very vague, and nobody will know what's happening behind the scenes. I think that's problematic.
We see that in the product review space. You now have companies that make products buying the companies that review those products. Some of them rightly disclose it: "Hey, we're now owned by this company, and we're recommending their product." But a lot of them don't disclose those things, so how do you trust them?
But even if they disclose it, it's often written in print so tiny you need a telescope to read it.
It's in the About Us: "Hey, five years ago, or last year, we were acquired by this company." Of course, no one knows the company by its holding company's name until you research it. If you're getting answers and solely trusting those types of sources, it becomes problematic.
Which a lot of people do because we're lazy as human beings. I think that's challenging.
Are we just in for a dark, horrific future? What's the bright spot here?
I'm a very optimistic person, but I'm also really scared about the digital future. I will not deny that, because I can think exponentially. I can see how technologies are converging, and I can see what is required to ensure a thriving digital future. Looking back at history, I can also see that what is required is very, very difficult to achieve, because it demands collective action, and we don't have a great track record of moving in that direction.
It requires a lot of work from you and me, from the general public. That's all possible, but it requires a lot of work. We need to start today to make that happen. It's not that the only future is a dystopian future. We still have a chance, but we really need to act now.
When you say people need to act now, what are three things someone should or shouldn't do in their own life, whether with respect to the metaverse or to the speed at which technology is approaching them, and how they handle it?
I'd like to shift the question slightly, if I may, because I think there are three things that we need to do as a society to make this happen. The first one is obviously education. We need to educate ourselves on these new technologies. That means we need to experiment, explore these new technologies, think about what we're doing and what we're experimenting with, and try to understand what's happening. As I said, if I have to use a chainsaw, I'd rather first read the manual to understand how it works so I don't chop off my fingers.
A good example of this would be parents installing TikTok on their own phone. Not that I'm saying you should keep it, but install it, play with it, see what it is, see how it works, so you have an understanding of what your kid is going to face.
Yeah, and that's the bare minimum, I would argue. At least then you understand what your kids are doing. That requires work from the parents. Yes, you don't always have the time, or you don't always feel like doing it, but I think it will benefit you as well as your children to actually do so.
The second one is verification. We need to start to verify what we're doing. That means verification in terms of, are we dealing with an AI? Are we dealing with fake news? Are we dealing with AI-generated content? Is this AI biased or not?
Am I dealing with a person who is who he or she says they are? That will become a lot more problematic in the metaverse than it already is today. Do we need NFTs for that? Do we need biometrics? How do we ensure accountability? This is partially a consumer-side requirement to verify, but also partially an enterprise and government requirement to verify that the AI is unbiased and that we can trust what we're dealing with. That's the second part.
The third part is regulation. Education, verification, and regulation. As we said before, it's not regulation that stifles innovation, but regulation that requires organizations to have a board of ethics, to have an oversight board, or to have their AI checked by an independent auditor, just like we have our finances checked by an independent auditor and hold that auditor accountable for the audits they perform. That is something people have been calling for, for over a decade now.
We need that. I don't need to know how an AI works; I just need to know that I can trust it. If I'm going to invest in a public company, I don't need to know exactly how their finances work; I just need to know that I can trust the financial statements being delivered. We need the same thing for anything digital.
I don't think that's too much to ask. We've been able to require accountancy firms to sign off on the audits they perform for public companies. Why can't we do that with tech companies?
I think those are all very reasonable first steps. I think there's a lot more we're going to have to do in the long run, but at least that's a great place to get started.
Yeah, that's what I think as well.
As we wrap up here, if people want to find you online, where can they find you?
I'm pretty visible online. I've written about a thousand articles, I think. You can all read those. You can find me on thedigitalspeaker.com, where you can find all my content. I'm on Twitter as well. I'm on LinkedIn. Feel free to reach out.
We'll make sure to link to all of those in the show notes. Obviously, the book, Step Into the Metaverse, is available at any fine place where you can buy books, either digitally or electronically.
Absolutely, or physically.
Mark, thank you so much for coming on the Easy Prey Podcast today.
Thanks for having me, Chris. It's been an absolute joy.