Is This How the World Will End or How Smarter Business Begins? A Practical Discussion About AI (Part 1)

Full Transcript

Scott Kinka:

Welcome to our first live podcast recording of The Bridge. Can I hear it? Lemme give it up. Thank you. And for those of you who are listening at home, we’d like to welcome you into our space. We are here in sunny, and slightly warmer weather than we might’ve hoped, Palm Springs, California, at Bridgepoint Tech Summit 2023 with 450 of our closest friends. One more time: 450 of our closest friends. Alright, so they are incredibly well-trained. For the listeners at home, we practiced the clapping. Now the noise. Okay, our topic today is an interesting one. We’ve chatted about it time and time and time and time again on the pod, and we felt we had an opportunity to ask a very simple question: is this how the world will end, or how smarter business begins? There’s a lot of hype, there’s a lot of crazy. We’re going to try to get through that today with some of the smartest people that we know in this space who are focusing on this. So I will not introduce them individually; I’ll let them do it for themselves. So aside from being an NFL coach, I’m going to let Dan O’Connell start here.

Dan O’Connell:

Yeah, so I’m Dan, pleasure to be here. I’m Dialpad’s Chief AI and Strategy Officer, overseeing our product and engineering teams on the AI side. I joined Dialpad six years ago when they acquired my startup, which was a real-time speech recognition and NLP startup, which is a fancy way of saying we can capture conversations, transcribe them, and tell you what’s happening in them.

Scott Kinka:

Fantastic. Thanks Dan. Dan was a late game change on the panel, so I had to make sure that I had it right there. My apologies on the kerfuffle. Karen, you’re up.

Karen Bowman:

Hi, I am Karen Bowman with Level AI. I run Global Channel, and I just want to say thank you so much for having us here, Scott and the Bridgepoint Ohana family. Wow, AI sure has become an exciting topic. I’ve been at Level AI just about a year. It was pretty exciting when I started looking at the technology. I was like, wow, this could really be something different and really change the game, and then ChatGPT hit the world and we all kind of went nuts after that. So for those of you that don’t know Level AI, we’re a contact center intelligence platform. We use AI to enable analytics and to automate tools, and that amount of automation lets us really drill down to the true voice of the customer and solve root-cause problems. I tend to focus more on the point solutions and the things that AI can do, but I think this is going to be a really fun panel.

Scott Kinka:

Fantastic. Karen, thank you. Tushar Shah.

Tushar Shah:

I’m Tushar Shah, the Chief Product Officer at Uniphore, having been there a little less than six months. Prior to that, I ran a large technology organization at PayPal around machine learning, both in risk and compliance and ultimately also all of our CSS, so about 12,000 agents worldwide, BPO and badged employees. So I’m actually unique in the fact that I was also a buyer of this technology. I’m excited to be here and obviously bring my experience to this panel, but ultimately more broadly to the products and services that we offer.

Scott Kinka:

Tushar, that’s super interesting. You mentioned that you’re an IT buyer, so expect to get a phone call from about 40 people when this panel’s over. Raghu Ravinutala.

Raghu Ravinutala:

Hello everyone. I’m Raghu, CEO of Yellow AI. We are the world’s largest conversational AI company in terms of automated interactions, and we’ve been so fortunate to partner with Bridgepoint. The company automates 5 billion-plus interactions for our customers every single year. 1,500 global enterprises work with us, and we help them put their customer service on autopilot. We’re recognized on the Gartner Magic Quadrant and backed by some of the best venture capital firms in the world. So I’m excited to be here and share our learnings from bringing AI to this market since 2016.

Scott Kinka:

Fantastic, thank you Raghu. Anytime you get into this conversation, you sort of have to start with a baseline before you can start poking at it. So we thought we would start with some simple questions, pass them around the panel, and try to get a shared understanding of what’s going on now, and then we can get into the meat and potatoes and start throwing it around. So we’re going to start with just a simple question. There’s a lot on this slide; I’m not asking you to read it. AI is a branch of computer science that involves building machines capable of performing tasks that typically require human intelligence, and then there’s a little bit about the importance. The trick to this slide is that we asked AI to create it, and there’s more to that. We’re going to unpack that throughout the panel, but let me start there. Dan, let’s just chat about how AI defines itself.

Dan O’Connell:

Honestly, I always think of how to describe this to my mom because my mom still doesn’t understand what exactly I do. And so the easiest way to think about it is how do you go and simulate human intelligence in a computer? And so how do you teach a computer to do things that we take advantage of and do today pretty seamlessly? So that might be how we transcribe a conversation. How do we teach a computer to compose music today? I think we’re all enamored by large language models, their abilities to go and generate the first draft of a blog post or perhaps compose a piece of music. So that’s really what the opportunity is for AI as we see it today in the technologies. But obviously we’re probably going to get into where did the advancements come from and where does this all get to in terms of singularity and so forth. And I think it’ll be pretty interesting.

Scott Kinka:

We’re certainly going to get there. But interesting, actually, the trick to the front of these slides is we didn’t design any of them, although the marketing department thought that I had somehow fallen over myself and no longer had PowerPoint skills. We asked AI to both write the slides and design them. So we will unpack that a little bit as we go forward. I’m going to go to Raghu on this one. We asked ChatGPT to describe the technologies that underlie what it does, and this is what it gave us back. We don’t need to read them all, but unpack any of them you’d like first, and we can chat through those individually. And then: is it missing anything?

Raghu Ravinutala:

I don’t think it is missing anything, but I would like to broadly categorize this entire technology around AI as learning at the core. Just like humans: from our childhood, we are fed with a lot of information and content across different modalities, and we keep learning and react to things based on that learning. At the core of AI, the computer is fed with information, knowledge, and patterns, and the technology creates new content, makes decisions, or predicts answers based on all this learning. Machine learning has been the part of AI primarily focused on pattern recognition and predicting based on those recognized patterns. Natural language processing is AI applied to language. AI can be applied to vision, AI can be applied to various sets of data, but natural language processing is specifically about understanding human language: models trained on textual data and voice data, enabling prediction and generation using natural languages. Deep learning is a sort of neural network; I won’t get too deep into the technology, but you can think about very deep, multimodal information and data, like images and videos. Deep learning creates patterns across a disjoint set of information and enables, again, prediction and generation using a completely disjoint set of data, and learns from it in a pretty automatic way. And robotics is using all these technologies to enable physical, moving robots that can make decisions and do physical things like humans. That’s broadly an unpacking of what these technologies are.

 

AI and Bias: Unpacking Concerns and Challenges in Generative Models

Scott Kinka:

Good definition. Both of you mentioned large language models in there, and generally speaking they build on machine learning and some of the other things here. But does somebody want to explain a large language model to this room?

Dan O’Connell:

Think of a large language model. Again, I think of how I’d explain this to my mom when she asks: just think of it as a text generator, a text predictor. You typically interact with it through a text prompt as the user interface, so you can go ask it, as I said, to write a first draft of something for you. Think of it like a library: it’s got this really immense dataset, and it can use that knowledge to generate a first draft of whatever you’re asking for. And the beauty of it actually goes beyond search, where you would typically get back a bunch of different websites; it has effectively already read those websites, and it gives you an answer directly. That’s why it’s been such a huge phenomenon for a lot of people: it has become democratized, and everybody is now using it as an alternative to search, because it ultimately gives you the answer. You don’t have to go through various hyperlinks to get the information you’re looking for.
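[Editor’s note: the “text predictor” framing above can be made concrete with a toy sketch. This is an illustration only; real large language models use transformer networks over tokens rather than word-pair counts, but the core idea of learning from data which word tends to come next is the same.]

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words tend to follow it in the corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word` seen in training."""
    followers = model[word.lower()]
    if not followers:
        return None  # never saw this word, so no prediction
    return followers.most_common(1)[0][0]

corpus = [
    "the sky is blue",
    "the sky is above us",
    "the ocean is blue",
]
model = train_bigram_model(corpus)
print(predict_next(model, "is"))  # "blue" follows "is" twice, "above" only once
```

Scaled up from three sentences to trillions of words, and from single-word context to very long context, this same “predict what text comes next” objective is what gives an LLM its apparent fluency.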

Scott Kinka:

And there’s a whole landscape of next-step questions that creates, and we will get to some of those in a couple of slides. I’m going to go to Tushar. This one I asked the AI for, and I love the picture. I guess that’s what ChatGPT thinks it looks like. Just setting some baselines: I’ll describe it, and then I’m going to want you to sort of read back on a couple of these. Let’s be very clear about the end of the world here. The “A” stands for artificial, pretty clear. Second, and this gets back to the large language models, it needs data. Every AI is not connected to every bit of data that was ever created in the world. If it were, then maybe we’d be talking about Skynet, but it’s a slightly different thing. It needs a set of data. The third thing is you need to ask it to do something. It’s not just sitting around. It doesn’t wake up in the morning, have a cup of coffee, and think about what to do for the day. It waits until you have something for it to do. I think these are the most important ones. And Tushar, I’m going to ask you to sort of play with this, but the work that’s being done to produce these outcomes is referential in nature. It’s got a dataset with a piece of data over here and a piece of data over there; it tries to relate them and come up with some kind of answer. But what it is not designed to be, and we’ll have some fun with this question later, is self-referential, and that’s a lot of what’s going on right now in some of the chaos that we’re hearing. Tell us a little bit about that, if you would like to share.

Tushar Shah:

Yeah, I think the biggest conversation we have when we talk to companies, especially at the enterprise level, is: how do I leverage AI in my ecosystem? And to your point, it all starts with the data. Many enterprises have unstructured and structured data; they’ve got data that sits in spreadsheets, they’ve got data that sits in applications. So clearly the value proposition is companies that can help them bring that data to life and atomize it so that it actually can be consumed. And I think that’s what really differentiates companies out there that are pure AI players but can ultimately also solve the data problem, which is the focus that we have at Uniphore. When we look at referenceability, it is important to put it in the context of the problem you’re trying to solve. I equate it to: if you need a hammer, don’t bring a sledgehammer. That’s typical when you don’t know what to do or how to get it done; your first response is, well, it must be the biggest tool in the shed that I need. Understanding the business problem you’re trying to solve and the context of that problem, and then bringing reference to that, allows you to create the better experience and the outcomes you’re looking for, whether that’s a financial outcome, a customer engagement outcome, or a sales outcome, and so on and so forth. So that referenceability is extremely important, and that’s why the ability to understand and digest the data, break it down into its pieces, and ultimately think about it from an industry perspective matters, whether it be airlines, banking, financial services, and so on. Because today, large language models, as we’ve been talking about, essentially consume the web, so they can pretty much answer any question.
They could tell you how to cook chicken tikka or how to cook the best bowl of eggs. But if I’m in the airline industry, that’s not really what I’m trying to answer. I’m trying to answer the question of, hey, my customer lost their bag; how do I provide them the self-service support they need? So I think the referenceability is extremely important, and that starts with the data.

Scott Kinka:

I mean, it’s a great call-out. As you know, many of the people in this room consult with clients on a day-to-day basis. And of course, to me, AI is the new cloud: the board says, go do AI. I don’t really know what that means, but okay, I’ll go figure it out. And we regularly get asked questions. Hey, the first place I want to go is to put bots in my contact center. That’s great. Are the policies for the live agents written down, so the bot has something to learn from? No. Well, then there’s not a whole lot the bots are going to do; they’re not going to write your process for you. It’s got to start from somewhere, or at least have, to your point, something to reference to start connecting dots. Right? We will play a little bit with the self-referential thing later with a fun question, but I’m going to get to Karen right now. So that’s really funny. I hadn’t even noticed that we had the cute robot on the baselines and then the angry one on the considerations. Okay, so that’s perfect. A couple of considerations: I explicitly asked it, what are the risks, what are people concerned about around AI? And this is what it gave us back. Karen, in your mind, how did ChatGPT describe itself on considerations?

Karen Bowman:

Yeah, well, honestly, when I look at that, I think it probably did pretty well, because it called out some negative stuff. It wasn’t just positive like the other slide. And I think these are real concerns. I’d probably start with one and maybe four on this list, and then have a really good time with the other ones on the science fiction side of the bar. But I think there are applications there. So when you think of job or industry replacement, that could happen. And to your point about the cloud, a lot of people were afraid they were going to lose their jobs to the cloud. Instead, I’m sure there’s a lot of you making quite a good living on securing the cloud, optimizing it, making all the networks work for it. There’s a plethora of stuff in the cloud, and I think it’s going to be the same in the CX space. You’re going to see mundane tasks like auto summaries and auto QA and things like that, things we just do as standard today that simply weren’t possible a short time ago, come to bear. So yes, jobs will shift and flux, but I think the opportunity for all of you as strategists is going to be huge, because when customers are confused, that’s when we all win, right? Because we get to go be trusted advisors and help them. When I look at AI-generated bias, that one I think would be so much fun to dig into deeply. Societal bias is just there, and it’s there in ways we wouldn’t even think. You think bias, you may think of obviously bad things, but it can be simple things too that create bias, and it gets back to what Tushar was saying: it’s the data. Where does the data come from? If you’re pulling data from the web to answer a question, you’ve got to understand: did that data come from a reputable place? Is it even correct?
Were the associations that the AI made correct or relevant, or did they cause what they call a hallucination, where it very confidently gives you an answer that is wrong? How can you tell whether it’s right or wrong when you don’t know what bias has been put into it? So from our perspective, we create for each customer their own large language model, and then we ingest all of their interaction information, whether it comes from a CRM, their CCaaS, their UCaaS, a learning management system, Snowflake, a database, whatever. That allows us to do deep analytics across all of it. Is there still some bias in that? Probably, but it’s much better contained, because it’s what they’re feeding the system; it’s the company’s data. Again, you made a great point: the airline industry doesn’t necessarily care how Starbucks serves coffee, but maybe they should.

Scott Kinka:

Karen, you brought up an interesting point, and I’ll open this up to everybody, going off script, but it was a good point around learning. I don’t think people understand what that means, learning or training models, and sort of how that goes. Anybody want to just explain to the room the concept of training a model?

Karen Bowman:

I know I just spoke, but I’d love to do that, because I am not at the same technical level as the folks here. I run a channel; I’ve been in technology my whole career, so I’ve been exposed to a lot of stuff. But here’s what resonated for me in trying to really understand, when I got to Level AI, how is this different from anything else? It gets down to a couple of principles. One is, before you had natural language understanding, which is understanding intent the way we have now, it was just looking for words, right? Keywords. That’s what most searches were giving back to us: keyword matches. When you think of it as understanding intent, you can boil it down to a simple analogy of a two-year-old. If you have a two-year-old, or you’ve had one, you see that they learn not just by you telling them. You don’t necessarily tell them: take that spoon, put it in the bowl, put it up to your mouth, and eat. They get a vast array of input. They see people eating, they see you using a spoon, and then they deduce, oh, that’s how this works. When you move into generative AI and ChatGPT types of things, it’s the same idea. They’re correlating things. So take a simple sentence: “the sky is…”. If you asked an unintelligent platform, unless you had pre-programmed it with preset words to answer that, it wouldn’t know what to do. But a generative model is going to say “the sky is blue,” because it’s going to pull in all this input saying it’s blue. It may also say it’s above us, or it may say it’s large, but you can then look at those and correlate. And if you put this into a contact center, an agent can look at that and say, these are the top three ways this has been solved before, and number two looks exactly like what I’m talking about. They can thumbs-up that, and then it becomes a feeder into your knowledge base. The applications are huge, but boiling it down to the simple concepts, I think, will help all of us.
You need to understand it, because your customers need you to help them navigate these pretty complex waters.

Scott Kinka:

Yeah, I think the key that you mentioned, and I’m just going to pull it out so we make sure everybody hears it, is that another data point that becomes part of the dataset the AI is using to make references is the use of the system itself. The concept is that it improves over time because of the questions you ask it, how you respond, and what your follow-on question is. The questions that you ask become part of the dataset. So it gets to the point where it starts to learn: hey, this is generally what happens after that, even if it wasn’t explicitly written into the initial model; it’ll figure that out over time. Anybody else want to tuck in on that?
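[Editor’s note: the feedback loop Scott describes, where usage itself becomes data, can be sketched very simply. This is a hypothetical illustration; the class and method names are invented, and real systems use semantic similarity and retraining rather than naive word overlap.]

```python
class FeedbackStore:
    """Log question/answer pairs plus the user's thumbs-up/down,
    so well-rated answers can be surfaced for similar questions later."""

    def __init__(self):
        self.records = []

    def log(self, question, answer, thumbs_up):
        # Every interaction, good or bad, becomes part of the dataset.
        self.records.append((set(question.lower().split()), answer, thumbs_up))

    def best_prior_answer(self, question):
        """Naive retrieval: the thumbs-up record sharing the most words
        with the new question wins."""
        q_words = set(question.lower().split())
        scored = [
            (len(q_words & words), answer)
            for words, answer, thumbs_up in self.records
            if thumbs_up
        ]
        scored = [s for s in scored if s[0] > 0]
        return max(scored)[1] if scored else None

store = FeedbackStore()
store.log("how do I reset my password", "Use the 'Forgot password' link.", True)
store.log("how do I reset my password", "Call the front desk.", False)
print(store.best_prior_answer("password reset help"))  # the thumbs-up answer wins
```

The thumbs-up from Karen’s contact-center example plays exactly this role: it turns an agent’s judgment into a signal the system reuses the next time a similar question arrives.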

Dan O’Connell:

I was going to say, that made a really nice point. If we’re talking in reference to large language models, so ChatGPT, Anthropic, Inflection, Vertex, there’s a bunch of them, you can think of them like the Library of Congress. They’re the biggest tools; they’re trained on essentially the entirety of the internet up until 2021. They hold all of this knowledge, but you don’t need all of that knowledge. If you need to go summarize a conversation, you don’t need all of the extraneous knowledge that’s in that dataset. And if you think about building these models, you need a bunch of storage and a bunch of GPU power. The bigger these large foundational models are, the more they need the biggest, fastest GPUs from NVIDIA, called the A100 today, and NVIDIA can’t make them fast enough. There’s not enough silicon; there’s too much demand. So what you’ll see businesses do, Level AI and I’m sure the others, is try to build smaller models. If you build these smaller models, one, they’re better and more accurate, and you can leverage the customer’s data. And I’ll say this: you strip it of all personally identifiable information, you make it anonymous, but the use case becomes much more specific. It also becomes faster, better, and cheaper for the business, and a better user experience. So that’s just another reference point for you as you engage with your customers and talk about this: understand that there are these really large, really big tools that are part of a system, these large foundational models, and then you’ll also see, I think, many businesses realize that those potentially have challenges at scale around cost and accuracy, and you’ll see some businesses focus on building their own models.
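[Editor’s note: Dan’s point about stripping personally identifiable information before customer data feeds a smaller model can be sketched as a simple redaction pass. This is an illustration only; the patterns below are naive examples, and a production system would use a vetted PII-detection library rather than hand-rolled regexes.]

```python
import re

# Hypothetical example patterns; real PII detection covers far more cases.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace anything that looks like PII with a typed placeholder
    before the transcript is added to a training dataset."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
# → "Reach me at <EMAIL> or <PHONE>."
```

Typed placeholders like `<EMAIL>` preserve the shape of the sentence for training while removing the sensitive value itself, which is the anonymization step Dan describes.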

 

Democratization of AI: Bridging the Digital Gap in the Workforce

Scott Kinka:

If one of those challenges is that you’re an AI company and you need data center space supportive of those GPUs, we know a guy who can help. He’s in this room; talk to Mel Malara. Okay, pushing this forward. While AI has the potential to create new jobs, there are concerns that the rapid pace of automation could lead to widespread job displacement, particularly for low-skill workers. So we’re going to tuck into each one of these concerns for a moment. I’ll give you one additional data point. Boston Consulting Group shows a significant gap in understanding and experimentation with AI tools: 36% of employees believe that their job is likely, at some point in the future, to be eliminated by AI. So the question to the panel is, let’s take it from the HR perspective for a moment. We’re moving really fast. Are we inadvertently widening the digital gap between the executives and technical information workers, who are doing the testing and playing, and the frontline employee base? Is this going to complicate this process? Tushar, go ahead.

Tushar Shah:

I was going to say that the likelihood would’ve been yes, but now everybody is leveraging some type of ChatGPT-type or large language model based solution. I mean, my sister the other day was like, can you download an app for me that I can actually leverage? I’ve heard all about this and I don’t want to be left behind. If I look at where AI and machine learning were even just four or five years ago, they were in the realm of fighting fraud and compliance. We were using them in some of the CSS spaces, so no one really understood them. Now, obviously, when you go onto Netflix, you’re like, well, how did they know these are the things I like, right? So people started to get a sense of it. Now that we’ve democratized the use of it in a true consumer app that people can use, I think people are beginning to embrace it and understand it, because, like I said, it’s just like Google, or people paying on Venmo, or whatever. As you start to weave it into the consumer ecosystem, you gain that level of adoption and understanding, and people begin to say, okay, I’m no longer not understanding it; I’m actually using it. I understand exactly what’s happening. Do I become part of it, versus it being done to me? I’m actually now part of the process.

Scott Kinka:

Democratization creates exposure, which helps limit that gap. Raghu, you were going to add?

Raghu Ravinutala:

Yeah, I have a different take on this, looking at a more meta level at how we all need to think about the jobs threat, or opportunity, with AI. I always get back to my own dad, who was working 40 years back, probably without a lot of leverage from technology. With all the technology changes that have happened since, it’s not like I’m working any less hard; I’m probably working harder. But what changes is the outcome per unit of work. The amount of wealth, the amount of production that people can achieve for a unit of work, amplifies significantly. And we can all see and believe that AI is going to amplify it by, I don’t know, tens or hundreds of multiples for everyone. What’s most likely going to happen is probably not loss of jobs; it is essentially a redefinition of what the jobs are going to be, because there is an unprecedented wealth creation opportunity and a different set of tasks that all of us can do. And fortunately, in this world, there is no limit on wealth creation; everybody wants to make as much as possible. So I believe this is a phenomenal opportunity, right from technology creators to suppliers to sellers, to get onto this bandwagon and see this as an unprecedented wealth creation opportunity. Another bit of information: with the advent of a new technology, the wealth creation that happens over a decade or two completely supersedes all the wealth creation that was historically done up to that point. The internet has probably created more wealth in the last 30 years than the entire industrial economy in the history of the world up to that point. And AI, I believe, is clearly at that point where it’s going to supersede the wealth creation that came before it, and everyone is going to have the opportunity to contribute to making that happen.

Scott Kinka:

Dan, you wanted to add on?

Dan O’Connell:

I was just going to say, yeah, I agree with all of the points. I think there’s a real risk in the automation of mundane tasks for low-skill workers, for sure. I think there’s going to be an upleveling. I personally think you’re much more likely to be replaced by somebody who knows how to leverage AI than by somebody who doesn’t. I always tease our marketing team when they ask me to proofread something; I’m like, just put it through ChatGPT. These are the tools. They’re amazing at being able to help you speed up work and first drafts. I think there’s a real opportunity for all of our businesses to help people understand how to leverage these technologies, as opposed to fighting against them. As a technologist, this is just my personal belief. I find it somewhat amusing when I hear teachers not letting their students use ChatGPT, because as those students progress through their lives, these tools are going to be more prevalent and better and smarter than they are today. So I really focus on how you help people understand how to leverage them rather than fight against them, because fighting against technology tends not to win. And the last piece, as I said: I think it’s on us to really take it to the next level and really dive into understanding what the capabilities are and how we can leverage them.

 

The Singularity Debate: Discussing the Possibility of AI Superintelligence

Scott Kinka:

Yeah, we’re going to move on to the next topic, but there’s one thing to add on that, and we could probably talk about it for the next 15 minutes, which we won’t do. Just to put the thought in everybody’s mind: I think the one challenge, back to corporate America, and I’m thinking about Liz, who runs HR for us, is that our measurements of “busy” need to change in the model you were talking about, Raghu. That’s the one thing to think about. If you have a worker in development, let’s say, who can produce 14 hours of code in two hours and then goes to the beach, are they busy? And how do we get past this butts-in-seats model for productivity? Right? I see you, I’m busy. Remember that commercial with the guy? I’m busy. It’s something we’re going to have to think about as we go through this. Alright, let’s move on to the next topic. So, AI systems can amplify the biases of developers and data sources, leading to discriminatory outcomes. This can have serious consequences for marginalized groups. I’m going to throw this one out to the overall group, but before I do, I’ll give you an example that popped off the page as we were pulling these slides together. You all recall this slide; we talked about it earlier. We asked ChatGPT to tell us what technologies it uses, and here’s what it came back with. I then asked it to share the actual sources of those statements. When you look at it, the words were pretty much copied exactly, but it omitted one thing around machine learning: it left the phrase “without being explicitly programmed” out of the pretty definition. Now maybe it thought that wasn’t important, but maybe it was just the AI saying, I don’t need to take a shot at myself.
Here’s another one, on the definition slide with the crazy image. I don’t even know what that is; it was probably meant to be a neural network, or whatever it thought one looked like. But it gave a definition and an importance statement. Now, you all went to school, took Grammar 101, wrote your first paper, and were told: if you’re going to define something, tell us what’s good and tell us what’s bad. These models learned from the same writing we learned from, so they should write the way we write, except it didn’t say anything negative; it just gave us a good story. Now these are skinny, small examples, and this is about what ChatGPT is choosing. There’s the whole other part about how you inform it with biased information, of course, but it’s a little bit of an example. So let’s just chat about AI bias for a minute. Karen, you started on that earlier, but Raghu, go ahead.

Raghu Ravinutala:

Yeah, so again, I have a completely different take on it. If I look at bias, we need to look at whether we are moving the needle in the right direction or the wrong direction. Technology by default is biased, because all the software that’s written is explicitly written code by an individual, and it is prone to the bias of that particular individual. And if you just look at worldwide web content, it’s overwhelmingly in English, so it is already biased towards the English-speaking audience. What I believe AI is doing is actually moving this bias in a positive direction, because it is a core set of algorithms. It’s not dependent on somebody’s explicitly written code; it’s taking a broad base of data and reacting dynamically to user inputs. I’m not saying the bias is zero, but I clearly see that it is much less biased than the explicitly written code of today.

Scott Kinka:

Got it. So it’s not the Google algorithm; it’s taking the broad approach and saying, I might say something crazy, but this is a pretty good cross-section of what the world thinks, because I’m reading Truth Social and I’m reading Twitter and all of this. Right?

Tushar Shah:

But I would say that what we’ve discovered in our conversations with different industries is that they do have to start being able to explain the actual end results they’re getting from their models. Obviously, we see it more regularly in regulated environments such as financial services, and I think the AI Act in Europe will be a game changer, just like GDPR was a game changer when it was deployed. The fundamental fact is, as you go out to customers, it is important to position the conversation as: listen, we all know you’re leveraging OpenAI and/or open models and all of that, but as you think about true customer engagement and interaction, where customer information might be shared, account information, privileged and confidential information that is unique between you as the enterprise and your end customer, it’s extremely important that you think about how you’re leveraging open capability. So as we start to talk to companies, you want to make sure that the companies you engage with to provide the capability have size and scale, 1,500 customers, for example, and a global footprint, to your point, and can understand multimodal situations, because now vision and tone all get incorporated into the overall expression. For us at Uniphore, privacy, trust, and safety are the most critical things we are leading with, and we actually say we need to embrace the regulatory aspects of what’s out there, because it’s going to keep people safe and hopefully not create a bigger divide or some type of bias in the end.
But I think it’s extremely important that people recognize that companies that can bring a wealth of data to the actual enterprise solution, similar to what you called out, are the types of companies that should be engaged in those sales opportunities, because they understand both the potential impact of bias and the downstream regulatory or compliance challenges that a company might face and want to manage.

Scott Kinka:

A hundred percent. We’re going to get into regulation in one minute, so hold that thought. I was thinking to myself, do I jump ahead? But let’s have a fun one. We’re going to jump into the Matrix for a minute. Alright. We said in the title, this is how the world ends. So let’s just talk about that for a minute. Some experts warn that AI systems could become so advanced that they surpass human intelligence and become impossible to control. This scenario is known as superintelligence and could pose an existential risk to humanity. Alright, so depending on how you feel about examples and bias in conversation, we’re crossing over; we’re into sci-fi. We could spend an hour on this one, but let’s just do rapid fire: is the singularity near?