Listen: Next in Tech | Episode 125: Attitudes About AI

We’ve been tracking attitudes about AI since well before the current generative AI explosion and the shift in expectations is dramatic. Returning guests Sheryl Kingstone and Alex Johnston join host Eric Hanselman to explore differences by demographic and discuss practical aspects of AI that will have significant impacts. Model coordination is key and organizations have to consider where their training data is coming from and what they need to accomplish while addressing societal concerns.


Eric Hanselman

Welcome to Next in Tech, an S&P Global Market Intelligence podcast where the world of emerging tech lives. I'm your host, Eric Hanselman, Chief Analyst for Technology, Media & Telecom at S&P Global Market Intelligence. Today, we're going to be discussing attitudes about artificial intelligence, especially in light of all of the craziness that's going on with generative AI. We've actually been looking at attitudes around AI for some time in our regular research, and I want to bring on 2 returning guests: Sheryl Kingstone, Head of our Customer Experience Practice, and Alex Johnston of the Data & Analytics team. Welcome back to you both. Great to have you here.

Sheryl Kingstone

Great to be here. It's a great topic.

Alex Johnston

Thanks so much, Eric.


 

Question and Answer

Eric Hanselman

And you 2 have been looking at some of the historical data we've had, contrasting it with more recent data. What is your analysis showing? What are people thinking about AI? Is it the next big thing? Is it going to take over the world? Are we headed towards Skynet? Where are we headed?

Sheryl Kingstone

Okay. Well, before we jump into whether it's going to be the next big thing or not, let's give some context on the data that we've been tracking for a very long time, right? Prior to 2019, it was one of those things where, when nothing happens, it's boring, and then when something hits, it shows change. When we look at the data, we actually ask it in a couple of different ways: over the next 2 years, how much of an impact, if any, do you think it will have? And we look at it from the standpoint of their career, their personal life and society. We ask whether it's a significant impact or no impact, and then from there, whether it's going to be a positive impact or a negative impact. We do this semiannually, in Q2 and Q4. And it was relatively stable, as expected, with a lot of people still somewhat neutral on it. Then all of a sudden, the hype around ChatGPT hit. What was really interesting is that the data caught the period right before the launch and then right after all the hype in the press.

Eric Hanselman

Right, because it's Q4, right before the ChatGPT launch.

Sheryl Kingstone

Exactly.

Eric Hanselman

We've caught the curve.

Sheryl Kingstone

And I immediately said, "Okay, Alex and Nick, let's take a look at this and let's do some analysis." What did you think, Alex, when you looked at it?

Alex Johnston

I think what was interesting is that, as you mentioned, there are 2 aspects we are tracking: the scale of impact and how positive or negative that impact is perceived to be. The scale of impact had been trending upwards since 2019, but only slightly, less than a percentage point a year in the share predicting a significant impact on their jobs, their personal lives and society. But then it accelerated quite significantly. There was a massive leap, particularly in the share of respondents expecting a significant impact on society.

Sheryl Kingstone

Yes.

Alex Johnston

But the positive and negative attitudes, I mean, that was a complete pace change. There's been a huge shift, with quite a lot of the groups we're tracking moving towards more negative positions on AI, which is quite stark.

Sheryl Kingstone

Yes. And when we're talking about a jump, we're publishing a report where you can see the graphic, and it was something like a 9-point jump, which is tremendous. What's really interesting is that if you look at it across different generations, which we'll get into, different genders, different regions, employment status and education levels, you can see some differences in attitudes. And if you put it in the context of a lot of the concerns in the industry about AI being the extinction of the human race, I mean, it's understandable that there is concern.

Eric Hanselman

A certain amount of doomsaying, right, yes. Well, once people are out there basically calling for this to be the end of the planet, and you'll pick your particular quote, it does change people's thinking about this. Although it is interesting...

Sheryl Kingstone

But Eric, we've been saying that for years, anytime something comes out, it's always the end of the world, robotics, right? Even movies from the 1960s. Think about it.

Eric Hanselman

So what is it that changed? Is this something where the ideas of what artificial intelligence could do hadn't hit the public broadly enough for people to have an understanding of it? Was AI maybe this sort of idea that was still hovering far enough in the future that it wasn't going to impact us, but then, come fall of 2022, there were actually real things that people could use? Maybe the issue was that this wasn't something anybody had actually touched before then. Is that it? Or what's your feel?

Alex Johnston

Well, I think there are some hints of that in the data, Eric. As you mentioned, AI is doing things AI didn't do before. I guess there's the shift from AI being a tool for pattern recognition to this perceived view that it will have more of an impact on things like knowledge management and creative content generation, all those sorts of things. We see some segments moving more negatively than others, some at quite a pace. Particularly, higher-educated respondents were increasingly concerned about the impact of AI on their careers. That might be a reflection of how people are seeing the kinds of roles AI might start to automate, knowledge roles, for example, which was quite a notable shift.

Sheryl Kingstone

And historically, it really hasn't. If we look back at where roles and responsibilities have changed in the past, robotics really took away from a lot of the less-skilled workers, right? It was automating the manufacturing floors. This is the first time we're really automating some of the knowledge and the information. However, my biggest concern isn't necessarily what we're doing around knowledge workers and content generation and image generation right now. Because I've played around with some of it, and even on the creative side, it's not perfect; you still need a human-in-the-loop. Even on the sales content side, it's not perfect; you still need a human-in-the-loop. What I'm most concerned about is this: if we're just using the public data and the public tools that are out there, I'm more concerned about the bias and the toxicity, because we've already been pummeled with misinformation through the changes in our social networking feeds and where that's trying to go. Now we add in a lot of these tools, and it's great to do a human-in-the-loop, and I'll tell you where this younger generation is doing it.

I blindsided my son during my husband's 60th birthday party and said, "Oh, by the way, Robin recommended that you do the speech," and he looked at me and said, "You're telling me this now?" He disappears for 10 minutes, comes back into the room, and I'm like, "What are you doing?" He's like, "I'm using ChatGPT to write Dad's speech." I'm like, "You are not." And he showed me. What he did was use the prompts to help flower it up, right? He knew what he wanted to say, but it helped him create a speech, and it was delivered wonderfully. Now, are we going to wind up doing that in our content today? No, we need to make sure we're not infringing copyright; but for a 20-something's speech to his dad, it worked perfectly fine. So they're using this in their daily lives, but we have to understand how it's being used and what's being used. How are we going to make sure it doesn't have toxic elements or biased elements in it? How are we going to do it at a corporate level, at the knowledge worker level? And that's really where the vendors need to step up and address trust and privacy concerns, which they're starting to do.

Eric Hanselman

Well, I mean, these are things that we've been talking about around AI and machine learning models for a long time: ensuring that you understand what that training data looks like so that you have an understanding of what the potential outcomes are, and being able to ensure that we're countering bias in data sets used for training. We know where this is headed. On a consumer level, are there broad concerns about this? Maybe not. But we start getting into one of the things we keep talking about with generative AI, especially in a commercial and business context: a lot of these concerns around intellectual property in the training data, all these next steps that are really the cautionary aspects you've got to be able to deal with. But in terms of attitudes and expectations, Alex mentioned some of the differences for those with higher education levels. What do you see? Are there gender and age differences? Are there other aspects of this in terms of what people are expecting and how those attitudes have changed over time?

Alex Johnston

Age is certainly something that really stood out. Younger generations tend to see more impact, but also a more favorable impact of AI on careers, and that difference is quite stark. We mentioned that almost all segments we were assessing were trending negatively in their perceptions; that's actually more pronounced the older the generational demographic. So baby boomers, already slightly more negative, are now even more negative, particularly around the societal impact of AI. There's significantly more alignment across generations around areas like personal life, though. What I find quite interesting, and I think this is common across quite a lot of different areas, is that people tend to see more of an impact on society, slightly less on their personal life and even less on their careers over the next 2 years. There's almost a degree of separation, which I think is quite interesting.

Eric Hanselman

Oh, wow. So business is the least of the 3?

Sheryl Kingstone

Well, if you think about it over the next 2 years, it isn't. That's absolutely true, and everyone is concerned about society today. That's why we're hearing in the news that we need more regulations and policies around bias, because that's the broader impact on society. But the business impact today could potentially be very positive if you look at using it correctly. So they may not be seeing it directly over the next 2 years, but there is concern. As we said earlier, it's more the knowledge workers asking, is this going to replace my job? I was in meetings with other companies where they're saying, well, can I replace 80% of my customer service reps with intelligent chatbots? So there is that concern among some of the knowledge workers, as we stated earlier, but the biggest impact is society.

Alex Johnston

I was on a panel recently, taking questions internally within a company that we were doing work with. Almost all the questions came back focusing on these major existential societal impacts: are we going to lose the ability to think critically? It was all these big social, philosophical questions, which I found fascinating. In prep, I had sort of teed up some questions about privacy and things like that, but actually, the questions we got were far more philosophical and far more wide-ranging, which I thought made for an interesting conversation.

Sheryl Kingstone

Meaning, are we going to just stop thinking and let something else think for us, or stop doing some of these tasks and let the AI bot do them for us? And is that going to lead to a generation of complacency to the point where we do lose control? That's where a lot of the concern is.

Eric Hanselman

Yes, it's that issue of, just because the machine said so, is it true? And a little of that softening of critical thinking capabilities.

Alex Johnston

One of the other concerns really came out not just of that session, but of a few other follow-ups we've had with other organizations: the idea of the cyclical problem of generative AI. It learns from a training set, but then it pollutes that training set by producing content that gets fed back into the model, and it becomes very cyclical. That's a concern that I don't think is being as widely spoken about. I suppose the first model ever was trained on an unpolluted data set, but it's now sort of self-cannibalizing. It's moving further and further away from the truth by producing content based on the prior data it's already been given, which I think is quite interesting.
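The feedback loop Alex is describing can be sketched in a few lines. This toy simulation is my own illustration, not anything from the episode: each "model generation" simply resamples the previous generation's output, and because a generation can only reproduce values that survived the one before it, diversity is lost and can never recover.

```python
import random

# Toy illustration of generative "self-cannibalization" (a sketch, not a
# real training run): each model generation trains on, i.e. resamples,
# the previous generation's output. Since generation t+1 can only emit
# values present in generation t, the set of distinct values only shrinks.
random.seed(1)

corpus = [random.random() for _ in range(500)]  # generation 0: "real" data
initial_diversity = len(set(corpus))

for _ in range(30):  # 30 generations trained on synthetic output
    corpus = [random.choice(corpus) for _ in range(len(corpus))]

final_diversity = len(set(corpus))
print(f"distinct values: {initial_diversity} -> {final_diversity}")
```

Real model collapse is more subtle, but the direction is the same: feeding a model its own output narrows what it can produce.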

Eric Hanselman

Well, this is one of those things that we've been looking at with models for ages, right? It's looking at model drift based on self-reinforcement: you do more of the thing you keep producing answers for, and that tends to bias the data based on what you've already done. In terms of what you have to do to manage the AI, it's the same thing we get into with the tech all the time, right?

Sheryl Kingstone

Absolutely.

Eric Hanselman

You've got to get good at managing the abstractions that now get created with this new level of capability.

Sheryl Kingstone

And that's why human-in-the-loop right now in the next 2 years is absolutely critical. And that is why it's so important. A lot of the research I have been doing up until this date has to do with improving and reinforcing your own first-party data. So if businesses are really looking to take advantage of some of these new efficiencies in these models. And if you really look at where the unemployment rate is and skill set is of businesses look at customer service, it's right for reinvention here from a positive standpoint using the first-party data so that you are using your data to train it accurately, right? You don't need to worry about the massive bias in a public data set when you're really just trying to use and improve your self-service capabilities and contextual relevancy of the data for a customer service and support app. That's where a bias actually adds value if you're able to train it properly using your own first-party data.

Eric Hanselman

It seems like these are things that we're already starting to see in the market, which is models going astray because the training data sets weren't well controlled. And you wind up having chat bots talking about things that they shouldn't be referencing because they wound up being trained on a much broader corpus than the focus area of what they should be talking about. But I think that's exactly what you're heading toward, Sheryl, which is that in terms of businesses understanding how they leverage this capability, I mean, ChatGPT is great, but it's trained on the whole of the Internet-ish sort of, and that you need to understand that if you're going to put this to work, targeted, focused model development is really the key to making this more usable and to ensure that it doesn't go far astray.

Sheryl Kingstone

Correct. And that's why we're seeing partnerships between the likes of OpenAI and some of the existing vendors in the customer experience and commerce space, because the 2 combined could be very effective.

Eric Hanselman

So how should businesses be thinking about this? We've talked about consumer perspectives on all this, but how do you really ensure that, from a business perspective, you're looking toward taking advantage of these capabilities? You've talked about the customer service pieces. Where does it start to go? And how should businesses be thinking about actually leveraging these capabilities?

Alex Johnston

Well, I think what was exciting about these large language models initially was the fact that they could be applied to so many tasks; they're very broad. But I think now we're going to go through, as Sheryl has already touched upon, the tuning around specific challenges. Privacy was an area that had held a lot of businesses back, but new enterprise-grade solutions have since emerged around it, so that's now being addressed. I think what we're probably seeing is more of a focus on solutions, on integrating this technology into existing software, and less on the broad applicability of these large language models. So I think we'll start to see smaller language models form and more focus on fine-tuning, and that's really where I think a lot of businesses are looking at the moment.

Sheryl Kingstone

Absolutely. So what we're seeing is a combination of vendor support, like I said, through those partnerships, so that they can use some of the advancements in large language models and in small language models for some of these improved use cases. Customer service and support are top of mind; everyone is looking at what we can do there. But we're also looking at what we can do around sales and sales engagement and consultative guidance. How can we help sales scale their outreach with content? We've been doing this for a long time; this isn't something new, right? But now we're potentially just moving it forward and making more advancements. So where we've seen most of the interest is, yes, customer service and support, yes, sales, some marketing content. Think about it also from the operational improvements of product tagging at scale, and really, the low-hanging fruit is taking on some of the tasks that are really draining for humans today, right?

It's the low-hanging tasks, like product tagging, or maybe generating a first draft of copy so that you can iterate on it and get it out there. Or something I said back in 2016 at a Dreamforce event, which is turning the story around: instead of telling one story to millions of people, you're telling millions of stories, one to each person. And that's really where we're concerned about the bias. But if we can do it accurately, and we can start using it in the formulation of where we were going with ad tech and martech and sales and commerce and service, it does add some value. And then lastly, the other use case: just look at what Microsoft is doing. Copilot is really human-in-the-loop. This is where general workforce productivity comes in, around prescriptive insight generation based on guidance, to make the individual more effective. So there's a lot more we can do there around these consultative guidance-type models.

Eric Hanselman

Well, to Alex's point, I think we've come to a new environment, a new set of capabilities with large language models. For those of our listeners who didn't catch the large language model introductory podcast a few episodes back with Chris Tanner and Peter Licursi, I'll point them back to that. But we have gotten ourselves into a new and different place. And especially when you look at Copilot, I mean, this is not Clippy from Microsoft or...

Sheryl Kingstone

Oh, God, Clippy, Clippy, right.

Eric Hanselman

We're in a very different set of capabilities today and a set of things that can take us to some new places.

Sheryl Kingstone

Long live Clippy.

Alex Johnston

I think what's quite exciting about these technologies as well is thinking about them from a workflow automation perspective, beyond isolated tasks. There have been some steps already; there's an open source project that tries to apply multiple models to workflows, for example. If you think of the power of a process automation tool that has generative AI capabilities, it's quite significant. Perhaps you could be parsing LinkedIn to find people who fit a certain role profile, then have another model summarize that role profile, then have perhaps even the same model spitting out an e-mail or template to reach out to those people from a sales perspective. I think when you start to think about multistep processes, these tools become far more viable in terms of how enterprises use them, which I think is quite a fascinating direction.
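The multistep flow Alex sketches, find profiles, summarize them, draft outreach, chains naturally as a pipeline. The model calls below are stand-in stubs invented for illustration; in practice each step would call a different hosted model.

```python
# Sketch of a multi-model workflow (stubs only; every function name here
# is hypothetical). Each function stands in for a call to a model.

def find_profiles(role: str) -> list[str]:
    """Step 1 (stub): a search model surfaces people matching a role."""
    return [f"{role} candidate A", f"{role} candidate B"]

def summarize_profile(profile: str) -> str:
    """Step 2 (stub): a summarization model condenses the profile."""
    return f"summary of {profile}"

def draft_outreach(summary: str) -> str:
    """Step 3 (stub): a generative model drafts a personalized e-mail."""
    return f"Hi! Your background ({summary}) caught our eye."

def outreach_pipeline(role: str) -> list[str]:
    """Chain the three 'models' into one automated workflow."""
    return [draft_outreach(summarize_profile(p)) for p in find_profiles(role)]

for email in outreach_pipeline("data engineer"):
    print(email)
```

The point is the structure, not the stubs: once each step is just a function call, orchestration tools can swap in whichever model best services each step.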

Eric Hanselman

Yes, so you get to put process pieces in place: if this person is inquiring about this, and they've already talked about that, put those pieces together, figure out what information would probably be useful for them, and get them the information on their order or on what they could order. It's taking the recommender model and moving it up a level: building enough history to have the context to be much more engaged, and to make the predictive things you could do potentially much more accurate, right?

Alex Johnston

Something that has really stood out: initially, when a lot of these large language models and large foundation models became available, a huge swathe of exciting new start-ups joined in. But actually, the value proposition they had built on top of these foundation models was quite narrow. And now, particularly when you look at companies like Microsoft and Google and all the things they're building around these tools, it has become far less viable for those start-ups to make a case for having distinct capabilities or suitable add-on features. So a lot of them are looking at these effectively agent architectures, which use multiple models to service different use cases and potentially, as mentioned, this kind of workflow automation piece, feeding into lots of different use cases. But I think a really interesting step, then, is that maybe that's how we start to use these models: rather than just opening ChatGPT and talking into space like that, you're using a software provider that effectively moves you between different language models to service different use cases, which I think is something we'll see in the next couple of years.

Sheryl Kingstone

Yes. We're already starting to see that now. For instance, at AI CloudWorld, and I'm not trying to be advertorial for one vendor or another, an example of that was Salesforce's bring-your-own-model approach, along with their use of third-party large language model integrations and their own development, within their Einstein GPT Trust Layer, so that the data remains within the customer's own trust boundaries. But you can also use AWS and Anthropic and Cohere, along with SageMaker and Google Vertex AI. So you don't have to worry about using only one; you can combine whatever is appropriate, and then build your own, use your own or bring your own.

Eric Hanselman

And not necessarily have to cross data boundaries in terms of what you were training those models with.

Sheryl Kingstone

No, it all stays within your existing Salesforce architecture.

Eric Hanselman

So now you've got the generative AI to manage the use of the various other generative AI models.

Sheryl Kingstone

I know.

Eric Hanselman

Well, and hey, it's that point of getting to that level of recursion to help understand where this fits. But it does get us back to that environment where there is no one model to rule them all. There is no overarching Skynet-ish thing for office productivity. It is, in fact, a coordinating capability among lots of specialized models that come in and do the work underneath the hood.

Sheryl Kingstone

Yes, absolutely.

Eric Hanselman

Interesting. Well, the 2 of you both have research coming out on this; I'll point our listeners to that. There is so much more that we can talk about, so many things and places we can go with this, but we're at time for today. I hope to get you back when we've got more coming out on this. It has been a pleasure having you both back. Thanks for being on the podcast.

Sheryl Kingstone

Thank you for having us.

Alex Johnston

Thanks, Eric.

Eric Hanselman

And that's it for this episode of Next in Tech. Thanks for staying with us, and thanks to our production team, including Caroline Wright and Ethan Zimman, the marketing and events teams and our agency partner, the 199. I hope you'll join us for our next episode, where we're going to be digging into some of the things that are actually driving a lot of the generative AI large language models: data center technologies and markets in Latin America, a burgeoning market with an awful lot going on. I hope you'll join us then, because there is always something Next in Tech.

No content (including ratings, credit-related analyses and data, valuations, model, software or other application or output therefrom) or any part thereof (Content) may be modified, reverse engineered, reproduced or distributed in any form by any means, or stored in a database or retrieval system, without the prior written permission of Standard & Poor's Financial Services LLC or its affiliates (collectively, S&P).