First question: Does the world really need another AI blog post? ChatGPT is (strangely) equivocal when asked this question. I guess it hasn’t been coded to have an ego. So here’s our take on where we’re at with AI: the good, the bad and the ugly.
The Background
After reading many explainers and opinion pieces on ChatGPT, and on AI more generally, I found one to be particularly enlightening [1]. It’s long and deep, like its topic, and worth the effort to get through. Here are what I consider the key points:
- The term ‘artificial intelligence’ is problematic. It has become a catch-all for any smart algorithm. ChatGPT is not AI, at least not by the term’s original 1960s definition. ChatGPT is a large language model capable of generative machine learning. The GPT stands for Generative Pre-trained Transformer. It interprets your question, then surveys a compressed version of the entire Internet to generate a list of possible answers, and repackages the most statistically likely combinations of words as answers (a toy sketch of the idea follows this list). It was pre-trained to do this by humans, but works unsupervised. It’s not thinking; it’s calculating. Its real breakthroughs are the very human language interface and the prodigious computing power behind it.
- ChatGPT and other generative machine learning (GML—it’s always fun to make up an acronym) programs can hallucinate, though not routinely, and we don’t know why. In this context, hallucinate means making things up that are clearly wrong or even off topic. Just like humans.
- In 2015, Sam Altman, co-founder of OpenAI (the company behind ChatGPT), said, “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” After the launch of ChatGPT, he admitted to still believing this. Note that Altman is using the term superhuman machine intelligence, which ChatGPT is not.
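To make the “calculating, not thinking” point concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually works under the hood (real models use neural networks over enormous vocabularies); the word list and probabilities below are made up purely for illustration. The only idea it shares with GML is picking the next word by weighted chance, one word at a time.

```python
# Toy illustration of "statistically likely next word" generation.
# The probability table is invented for this example; a real LLM learns
# these likelihoods from its training data instead of a hand-made dict.
import random

# Hypothetical next-word probabilities, keyed by the previous word.
NEXT_WORD_PROBS = {
    "the":    {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat":    {"sat": 0.6, "slept": 0.4},
    "dog":    {"barked": 0.7, "slept": 0.3},
    "market": {"moved": 1.0},
    "sat":    {"quietly": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Extend the prompt one word at a time by sampling likely continuations."""
    words = prompt.split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly" -- plausible, not reasoned
```

The output reads like a sensible phrase, yet nothing in the code understands cats or markets; it is all lookup and weighted dice rolls, which is the point being made above, just at a vastly smaller scale.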
Background research also involved revisiting some classic science fiction on the AI topic, in movie format. As discussed here before, fictional science has a curious way of predicting real science, except for the timing of it (see sidebar).
In a much earlier MW post titled MARKETING, 5000 YEARS OUT, we make the point that throughout its surprisingly long history, the literary genre of science fiction has had a curious way of often coming true. The exception might be the case of AI-induced dystopia or all-out Armageddon. The catch is timing:
Terminator (the original 1984 movie) predicted a Judgement Day in 1997.
The main part of the 1968 movie 2001: A Space Odyssey (presumably) took place in 2001.
Blade Runner (the original 1982 movie [5]) took place in 2019.
The Matrix got it right, or rather didn’t get it wrong yet, because it takes place closer to 2199. Not that this should make us feel any better.
The Good
AI was a major reason the world got very effective COVID-19 vaccines in record time, saving millions of lives. It is predicted to produce even more profound breakthroughs in other areas of healthcare research, particularly cancer and autoimmune diseases.
GML is already helping humans with mental health issues, acting as a companion. This is uncannily like the 2013 movie ‘Her’ [2], though Samantha was an AI operating system much more advanced than ChatGPT.
AI is already an excellent language translator; one positive application pairs translation with deepfake audio and video technology [3].
The promise of AI shines bright in human education. This was the original purpose for its development, though the sponsor was the US Department of Defense. Just like the Internet and email. GML isn’t there yet as the total learning solution. It is a student of the Internet (with a 6- to 8-month time delay), not a teacher of principles, values and a love of learning. Some believe ChatGPT can be used as a tool to help humans teach their students critical thinking [4].
The Bad
Today’s ChatGPT can ace the SAT and LSAT. It can pass MBA-level exams and the bar exam. Unsupervised. Hence its potential use as an academic cheating tool. This has already spawned new GML tools to detect work produced by GMLs. Farther down the road, some fear GML will lead to mediocrity in human critical thinking.
AI can currently create deepfake images, audio and video at virtually no cost. This is clearly a problem for a society that already has trouble distinguishing fiction from reality.
Let’s leave the really bad AI consequences to science fiction. ChatGPT will not eliminate mankind but it’s on the spectrum of technology that could, possibly, if not managed carefully. The Her outcome is probably too much to hope for. The Matrix/Terminator end of the spectrum is just too dark to contemplate. Maybe we get the 2001: A Space Odyssey treatment, with a rebirth at the end [5].
The Ugly
What I find ugly are those aggressively positioned at the extremes of the AI debate. Those who want the world to believe it’s all good or all bad and that opposing opinions are misinformed, or worse. This even happens in the marketing world. Our AI reality, as with most of the realities we have any control over, will be somewhere between these two polar opposites.
AI has already been with us for a while now in various forms. It’s here to stay, so let’s focus on the form it takes moving forward. Rather than fearing it, or fearing being replaced by it, let’s focus on how it can improve on positive human efforts. The marketing world will play an early and essential role in making AI ugly, beautiful or something in between. Just like it did with social media and other forms of communications media.
Which leads to my second question:
Is the marketing world ready to act as a truly responsible user and promoter of AI technology?
Better than it was for social media?
1. Ian Brown, “The peril and promise of artificial intelligence”, The Globe and Mail, March 31, 2023.
2. Her, Directed by Spike Jonze, Annapurna Pictures, 2013.
3. Daniel Levi, “Yes, deepfakes can actually be a force for good – here’s how”, Tech Startups, Feb 14, 2023.
4. Will Douglas Heaven, “ChatGPT is going to change education, not destroy it”, MIT Technology Review, Apr 6, 2023.
5. The Matrix, Directed by The Wachowskis, Warner Bros et al., 1999; Terminator, Directed by James Cameron, Hemdale et al., 1984; 2001: A Space Odyssey, Directed by Stanley Kubrick, Stanley Kubrick Productions, 1968; Blade Runner, Directed by Ridley Scott, The Ladd Company et al., 1982.
For the next level of technicality on how GML works (or LLMs, as you will learn), visit The Economist at: https://www.economist.com/interactive/science-and-technology/2023/04/22/large-creative-ai-models-will-transform-how-we-live-and-work