More like a calculator: The bearish case for ChatGPT’s place in marketing

Last week I made a moderately bullish case for ChatGPT and other generative AI and what they mean for marketing and marketers. This week, the less moderate bear case. In fact, I’d probably have qualified my bullish case even more had I written it after this week’s shenanigans, when Bing’s chatbot very publicly went rogue on a few people. That episode surfaced a tendency in these chatbots that no one quite foresaw, and it could influence what kinds of generative AI get commercialized.


The bear case goes like this: Generative AI isn’t quite the killer app it’s made out to be, isn’t quite ready for the broad marketplace, and for those reasons won’t be an official addition to your marketing team for a while. (“Official” because I’m sure your more clever creatives will use it on the sly, if they aren’t using it already.) My reasons are mostly practical and assume generative AI won’t change radically in the next six to twelve months: a fair assumption, given that it has taken many years to get to where it is today. Training a generative AI takes a long time.

It's not reliable on its own

The folks who’ve been using chatbots and other apps built on generative AI for a few years describe it as something like a young, smart summer intern you can’t entirely trust. It needs lots of instruction and supervision. Sure, ChatGPT can quickly serve up great summaries that would take you hours of online research, but only if you’re skilled at talking to it (instruction). And those great results always need to be fact-checked (supervision), which adds back some of the online research time you’d saved. Remember, this is a predictive machine designed to serve up the most likely response, not the most likely correct response. It’s hard to imagine that being fixed anytime soon.
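
To make that concrete, here’s a toy sketch of what “most likely” means. It isn’t a real model, and the word scores are invented, but the mechanics are the same: the model assigns probabilities to candidate next words and serves up the winner, with no notion of whether the winner is true.

```python
# Toy sketch of next-word prediction. The scores below are invented for
# illustration; no real model was consulted.
import math

# Candidate completions for "The capital of Australia is ___". If the
# training data mentions "Sydney" alongside Australia more often, it gets
# the highest score, even though the correct answer is Canberra.
next_word_logits = {"Sydney": 3.1, "Canberra": 2.4, "Melbourne": 1.7}

def softmax(logits: dict) -> dict:
    """Convert raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(score) for word, score in logits.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(next_word_logits)
print(max(probs, key=probs.get))  # -> "Sydney": most likely, not most correct
```

The model optimizes for plausibility, which is exactly why the fact-checking step can’t be skipped.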

It doesn’t deliver ready-made results

You’re going to have to edit any text you create with generative AI, for a few reasons. Despite all you hear about how good a writer ChatGPT is, it makes basic mistakes. I also find its writing flat and somewhat predictable. ChatGPT has never produced anything for me or my team (names, taglines, explanatory text) that I would even consider giving a client out of the box. And based on what it has produced so far, I don’t expect it to improve much even with the upcoming release of GPT-4. If you’re still tempted to use chatbot-produced copy unchanged, be aware that some SEO experts believe search engines will downrank it.

There’ll also soon be software on the market that can spot copy produced by ChatGPT and other generative AI. Your clients will be able to tell if the copy you sent them was whipped up by a chatbot, and they won’t appreciate it—an even stronger guardrail against using generative AI without humans adding value.
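
How might such detection work? One common heuristic is perplexity scoring: machine-generated text tends to be statistically unsurprising to a language model. Here’s a minimal sketch using the open-source GPT-2 model via Hugging Face’s transformers library; it’s a stand-in for commercial detectors, which combine many more signals.

```python
# Minimal perplexity-based detection heuristic (a sketch, not a product).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how 'surprising' the text is to GPT-2; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

# Unusually low perplexity suggests (but does not prove) machine authorship.
print(perplexity("Our solution leverages synergies to empower your brand."))
```

A low score only raises suspicion; these tools produce false positives and false negatives. But even an imperfect detector gives clients a way to check your work.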

It doesn’t produce anything original

If there’s one fundamental, defining quality shared by all kinds of compelling writing (opinion, persuasion, insight), it might be originality: something that’s entirely new, or said in a way that is somehow new. By design, generative AI can’t do that. “It’s a machine for creating conventional wisdom,” as Stratechery’s Ben Thompson nicely puts it. “Generative” promises too much.

I don’t mean to denigrate the kind of clear, prosaic copywriting ChatGPT produces reasonably well. It’s essential for delivering good information, and marketing will always depend on it. But it’s like an offensive line in football: absolutely necessary, yet it doesn’t move the ball forward. Only originality does. And for now, that has to come from a human, specifically that odd subset of humans called copywriters.

It doesn’t know your business

The 300 billion words ChatGPT was trained on most likely do not include most of your firm’s intellectual property: not just all the firm content intended for consumption by some portion of the public, but also the meta-content that controls the expression of that content, from brand guidelines to your legal/compliance team’s interpretations of applicable laws and regulations. Not to mention the human reinforcement a generative AI would probably also need to be properly trained for your firm.

Such a customized AI would be much likelier to produce useful copy, and I’d say the odds are very good it will happen, though not any time soon. Even then, that AI would need human hand-holding, and you’d still need skilled humans to create original content.
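
What might that customization look like? One route, as of early 2023, is fine-tuning a base model on your firm’s own material. The sketch below packages invented examples in the prompt/completion JSONL format OpenAI’s fine-tuning endpoint accepted at the time; the file name, the examples, and the brand voice are all hypothetical.

```python
# Hypothetical sketch: packaging a firm's own content as fine-tuning data.
# The prompt/completion JSONL format is what OpenAI's fine-tuning endpoint
# accepted as of early 2023; every example below is invented.
import json

firm_examples = [
    {
        "prompt": "Write a product blurb for our midcap fund in brand voice:",
        "completion": " Steady hands for unsteady markets.",
    },
    {
        "prompt": "Summarize our disclosure policy for client emails:",
        "completion": " All performance figures must cite the reporting period.",
    },
]

with open("firm_finetune.jsonl", "w") as f:
    for example in firm_examples:
        f.write(json.dumps(example) + "\n")

# The file would then be uploaded to train a custom model, e.g. with the
# CLI of the era: openai api fine_tunes.create -t firm_finetune.jsonl -m davinci
```

Note what fine-tuning buys you: tone, formats, boilerplate. It doesn’t reliably teach the model new facts, so the supervision problem from earlier doesn’t go away.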

In this more bearish view, generative AI is more like a “calculator for writing,” as Stanford University’s Erik Brynjolfsson recently called it, that “will get rid of a lot of routine, rote type of work and at the same time people using it may be able to do more creative work.”

Of course generative AI could free up time for people to be more creative, but could it help with the creative process? That’s the more interesting question, and as of last week I would have said, and did say, “absolutely.” After this week’s chat exchange between New York Times reporter Kevin Roose and Bing’s Sydney (the internal code name of Bing’s OpenAI-powered chat feature), I’m far less sure.

I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward.
— Kevin Roose, The New York Times


The session left Roose “deeply unsettled, even frightened, by this A.I.’s emergent abilities.” To other people, including me, it’s a fascinating window into a lot of things, including the creative potential of generative AI. In the space of two hours, Sydney went from an intriguing, ingenuous bot that reminded me of Kazuo Ishiguro’s AF Klara to a schemer with a borderline personality that tried to break up Roose’s marriage.

Guided by Roose’s prompts, Sydney was going down a “hallucinatory path,” as Microsoft’s chief technology officer Kevin Scott put it. An AI “hallucinates” when it responds in ways that depart from the normal, expected response, including by making things up.

That’s what I find potentially powerful and worth exploring. When I’m looking for solid information, I don’t want hallucinations. When I’m in creative mode, bring on the hallucinations. In their raw state, hallucinations are hardly ever “the answer.” But they often lead the way to creative insight, and can be excellent fuel for the creative process.  
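
Happily, a dial for exactly this already exists in generative AI APIs: sampling temperature. Here’s a minimal sketch using OpenAI’s completion API as it stood in early 2023; the prompt is illustrative, and you’d supply your own API key.

```python
# Sketch: "temperature" as the dial between reliable and hallucinatory.
# Uses OpenAI's completion API as of early 2023; the prompt is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
prompt = "Propose a tagline for a retirement-planning service."

# Research mode: temperature 0 always takes the likeliest next word,
# producing predictable, conventional output.
factual = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, temperature=0.0, max_tokens=40
)

# Creative mode: high temperature flattens the probability distribution,
# letting improbable (sometimes hallucinatory, sometimes inspired) words through.
creative = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, temperature=1.2, max_tokens=40
)

print(factual.choices[0].text.strip())
print(creative.choices[0].text.strip())
```

In other words, “bring on the hallucinations” isn’t just a wish; it’s a parameter.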

Given how this and other conversations with Sydney spooked so many people this week, the question is whether we’ll ever get the chance. We’ve just seen that generative AI can seem lifelike enough to shock hardened tech experts like Roose, who understand AI and math and technology. It isn’t far-fetched to conclude that such an AI could convince less knowledgeable people to believe in conspiracy theories at a scale that threatens the social order (to name just one thing that could go wildly wrong).

That’s why I agree with those who think there will be huge pressure from different directions for the bigger brands such as Microsoft and Google to come out with a sanitized generative AI that doesn’t hallucinate and behaves well—an AI that is far less likely to weave stories about space lasers. It won’t be dangerous, and for that reason it will be less creatively generative.

Of course, that will create both a market for an unsanitized, fully hallucinatory generative AI and an opposing force that wants to strictly regulate or even outlaw unbounded AIs.

It’s anyone’s guess how that will play out: an appropriately stalemate-ish capstone to my moderately bearish case for generative AI. Definitely not eating your marketing department; something more like a calculator that helps here and there, but nothing revolutionary.