It’s time to get real about what AI can and can’t do

It feels delightful when technology helps you do something useful or cool.

It’s fantastic to point your phone camera at an unfamiliar tree and see that it’s an eastern redbud. Maybe you remember the first time you pulled up Google Maps directions or summoned an Uber, and it seemed like wizardry.

The hottest technology now, a type of artificial intelligence that generates humanlike text or images, can sometimes give you that immediate “aha” jolt. But often it doesn’t.

I’ve had the experience, and maybe you have too, of feeling underwhelmed when I’ve asked ChatGPT for help with vacation planning. I recently couldn’t remember the name of a pastry and tried describing it to three different AI chatbots. It didn’t work. (The pastry was a frangipane tart.)

Disappointments like those don’t mean AI is useless. It can be handy. But there is often a mismatch between the reality of AI and how companies encourage you to think of their AI as magical brains that know and do everything.

The truth is that AI is fundamentally bad at many tasks. It requires you to learn just the right words to coax the best out of it. Like all computers, AI will make different mistakes than people do, but it will make mistakes. And the AI that’s foisted on you is sometimes just broken.

The lesson is to get comfortable with what AI can and can’t do, so you’re not disappointed.

And it helps to see the pattern of companies backtracking when AI doesn’t work nearly as well as they had promised. They know AI is not magic and you should, too. Here are some examples:

- Nine months ago, Amazon said it overhauled its Alexa voice assistant with new AI. But the AI Alexa repeatedly botched questions in demonstrations and still isn’t available on Alexa home devices. (Amazon founder Jeff Bezos owns The Washington Post.)

Some people at Amazon are worried the new Alexa won’t be ready for a planned September launch because the voice assistant is still giving unpredictable responses, my colleague Caroline O’Donovan recently reported.

Amazon said it has been testing the new Alexa with small groups of customers and is working hard to expand it to more people.

(Apple said this month that its AI-overhauled Siri would start to appear on some newer iPhone models this year. We’ll see how it goes.)

- Microsoft made a big deal last month about a new line of AI personal computers, including a time-machine feature for everything you’ve ever done on your PC. The company then turned off the feature by default after some researchers said it was a security nightmare.

Microsoft also scaled back a much-touted AI keyboard button and stopped weaving its chatbot into Windows PCs for chores like changing your screen brightness. (Microsoft said it responded to feedback that people didn’t like commingling a chatbot with other tasks like settings.)

- Google in the past month scaled back AI-generated answers at the top of web search results after some of the AI responses were nonsensical or dangerous. That was at least the third time Google acknowledged its AI didn’t work so well, including a now-suspended AI feature that generated images that defy history, such as a female pope.

The pattern of AI retreats and frangipane-like disappointments calls for a redo in how companies pitch their technology to you, and in your expectations.

First, companies should be upfront that AI can be useful for some things but not for everything, particularly factual information. AI, for example, shouldn’t be used to look up the height of the Eiffel Tower or who won the 2020 U.S. presidential election.

Second, when companies publicly demonstrate their AI, they should tell you the error rates or back up assertions that sound amazing.

Independent researchers have recently questioned OpenAI’s claim that its ChatGPT can pass the legal bar exam with a higher score than nearly all test takers and challenged the idea that the chatbot can write software code like expert humans. (OpenAI didn’t respond to my request for comment.)

Lastly, it’s important to recognize that your imagination may outpace the capabilities of AI.

In a recent interview with NPR affiliate KCRW, the host Madeleine Brand asked me about a chore she’d love AI to do: Could AI know from your calendar that your mom’s birthday is coming up and order flowers to be delivered to her home?

Sorry, I’m not aware of any widely available AI that is capable of this task today.

You know from your life experience that technology can be amazing, but it’s rarely a silver bullet. And AI isn’t, either.
