ChatGPT’s arrival was met with a mix of awe at its technological advancement and anger at the blasé nature of its launch.
Some felt there had been little consideration for how under-prepared society was for this innovation. Students would cheat on essays. Disinformation would spread like wildfire. It would produce toxic, racist, and sexist content, and we would all suffer. Dooom!
In response, exploring the implications of AI has become a logical priority for product managers looking to innovate, and one that mirrors where the wider world’s attention has turned.
According to TheWrap.com, Google Trends data showed a notable rise in global web searches for the term “ChatGPT” in mid-December 2022, followed by a surge from early 2023, months after ChatGPT’s November 2022 launch. Search interest peaked around March 2023, coinciding with Google’s launch of Bard, its response to the OpenAI tool.
Almost immediately there were cries for it to be put back in its box. Instead, the EU, acting as the world’s moral compass, sprang into action and drafted the beginnings of what is today the EU Artificial Intelligence Act: the world’s first comprehensive regulatory framework for AI.
Recently, I met with a group of fellow tech leaders to discuss the impact AI is having on our respective organisations. By AI, we were specifically referring to “Generative AI”. At the Harbour Hotel in central Bristol, we gathered around a long rectangular table with a crisp white tablecloth.
As the courses came and went, we discussed coping with the high velocity of change driven by AI, balancing disparities between leadership and individual contributors, and the environmental, social, and economic impacts of AI.
In the room, we had a good range of organisations, from large and well-established companies to smaller scaleups, with varying levels of AI experience. When answering the question:
“On a scale of 1–10, how dependent is your business on AI?”
There was a clear divide in the room: about two-thirds were at the lower end of the scale (1–3), and the other third at the higher end (6–8).
The haves and the have-nots. So what’s the story?
Organisational size does not seem to be a factor in predicting the propensity to innovate using AI. Instead, it is driven by the organisation’s urgency to solve a problem that AI could address. The greater the problem, the more likely it is that they will have attempted to use AI to solve it.
For instance, despite being part of a large and highly regulated organisation, a Fire Safety Engineering team successfully used AI to accurately search for and retrieve specific sections of documents to help them complete their assessments.
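For the curious, this kind of document search is typically built on text embeddings rather than keyword matching: each section is turned into a vector, and a query is matched to the most similar vectors. Here is a minimal sketch in Python, assuming the open-source sentence-transformers library; the section texts and the query are illustrative placeholders, not the team’s actual system.

```python
# Minimal sketch of embedding-based section retrieval, assuming the
# sentence-transformers library. The sections and query below are
# illustrative placeholders, not the Fire Safety team's real data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sections = [
    "Section 4.2: Fire door specifications for protected escape routes.",
    "Section 7.1: Compartmentation requirements for high-rise buildings.",
    "Section 9.3: Maintenance schedule for emergency lighting systems.",
]

# Embed every section once up front, then embed each query on demand.
section_embeddings = model.encode(sections, convert_to_tensor=True)
query_embedding = model.encode(
    "What are the requirements for fire doors on escape routes?",
    convert_to_tensor=True,
)

# Cosine similarity ranks sections by semantic relevance to the query.
scores = util.cos_sim(query_embedding, section_embeddings)[0]
best = int(scores.argmax())
print(f"Most relevant: {sections[best]} (score {scores[best].item():.2f})")
```

Production systems layer chunking, a vector database, and a generation step on top, but the retrieval core is often this simple.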
When it comes to generating content, the consensus is that AI is not quite there yet. Despite this, many disciplines are using it to speed up their content generation workflows. AI is great for sparking ideas, generating outlines, and providing feedback on initial drafts. As an assistive technology, overseen and refined by humans, AI has been welcomed.
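As a concrete example of that assistive workflow, here is a minimal sketch of asking a model for a first-draft outline, assuming the official openai Python SDK and an OPENAI_API_KEY set in the environment; the model name and prompts are illustrative, not a prescription.

```python
# Minimal sketch of an AI-assisted drafting step, assuming the official
# openai Python SDK (v1+) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are an editor helping draft blog outlines."},
        {
            "role": "user",
            "content": "Draft a five-point outline for a post on AI in product management.",
        },
    ],
)

# The model produces a starting point; a human reviews and refines it.
print(response.choices[0].message.content)
```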
Some still think it’s a fad. I was very surprised to hear one member describe how they had to demonstrate the technology to the board.
According to the member, they opened ChatGPT, entered a prompt, and then had to explain that the response that came back was not pre-written. Other board members had been under the impression that ChatGPT was just a new form of search.
At the other end of the spectrum are board members who see it as a silver bullet. With all the hype, you cannot curb their enthusiasm! Faced with this, some in the room felt they had developed dual personalities: to the board they underplay AI’s potential and highlight its risks, yet outside the boardroom they are among its biggest advocates.
Most around the table were cautiously optimistic, which I think is exactly the right place to be.
For some in the room, AI is central to their product. In fact, one product was designed specifically to mitigate bias introduced by AI — AI inception.
Many were introducing AI by running controlled prototype experiments to demonstrate potential applications and justify funding. Others used AI tools but had not directly implemented AI into their products or services — I fall into this group.
If you’re a Head of Product, the pressure to add AI to your product is immense. Everywhere you look, products both old and new are jumping on the bandwagon. One person described how a competitor now advertises their tool as “powered by AI,” when in fact it just has a couple of IF statements here and there.
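To make the joke concrete, here is a tongue-in-cheek, entirely hypothetical sketch of what “powered by AI” can amount to under the hood; it bears no relation to the competitor’s actual code.

```python
# Entirely hypothetical: "powered by AI", as described over lunch.
def ai_powered_suggestion(user_message: str) -> str:
    message = user_message.lower()
    if "refund" in message:
        return "Our AI recommends visiting the refunds page."
    if "price" in message:
        return "Our AI has found our pricing page for you."
    # The "intelligence" bottoms out in a default branch.
    return "Our AI suggests contacting support."
```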
Today, “Powered by AI” seems to instantly make a product more desirable, but the group pondered ideas that challenge this notion. The idea that made the most sense was tiered pricing, with the cheapest tier relying on AI, which is where the highest risk lies. If you want the work to be fully created by a human, you have to pay more.
There’s a lot of hidden risk that is not being talked about. When AI is introduced into sensitive fields it can perpetuate biases, compromise privacy, and erode essential human skills in decision-making.
In HR, AI can lead to discriminatory practices if not properly managed. In therapy, the lack of human empathy and the potential for data breaches pose significant risks. In the justice system, AI’s use in predictive policing and risk assessment could lead to the kind of future so vividly depicted in the 2002 film Minority Report.
It was also mentioned that the NHS is VERY excited about AI. This should come as no surprise — the service is stretched to its limits and looking to AI for help. It will be imperative that they get the balance between risk and reward right.
In a world where AI is only just beginning to be introduced, OpenAI is like dawn’s first light, illuminating the path ahead with the promise of new beginnings. Everything outside of that path is uncharted territory. Inevitably, OpenAI has to step into those shadows to strengthen and broaden the beam’s reach.
This has left some feeling in a constant state of anxiety as they navigate the unknown, so much so that it prompted an open letter from OpenAI employees warning of a culture of risk.
Improper use of AI has already resulted in several court cases, but the real change will happen when a Titanic-level disaster drives a step change in regulation. For a PM, it’s like walking on thin ice; yet the FOMO is still very real.
As desserts were cleared away and we approached the two-hour lunch mark, we ended the session with some final thoughts from each person. The two that stuck in my mind:
“We’re not as backwards as we thought we were.”
“It’s been good to see where we are all at.”
If you’re a product manager who has been feeling under pressure to add AI to your product, fear not. There are lots of us out there biding our time, watching AI closely as it evolves, looking for the right use cases, and feeling quietly confident that when the time does come, we will be implementing AI to solve a real problem and not just for the sake of it.
Here are some AI tools and resources I use and can highly recommend for your product management toolkit:
The group has also recommended several valuable AI resources:
I’d like to thank Tremis Skeete, Executive Editor of Product Coalition, for his valuable contributions to the editing of this article.
I also thank Product Coalition founder Jay Stansell, who has provided a collaborative product management education environment.