Consumer AI Agents: Ready for take off?

AI Agents show promise but face adoption challenges, with success likely first in simple, routine tasks where downsides are minimal compared to the hassle of doing it yourself. Joe's blog explores the spectrum from AI Assistants to true Agents, examining when consumers will trust AI to act independently rather than just assist.

Joe Seager-Dupuy / Insight / 25 Feb 2025

Introduction

When OpenAI launched Operator in January, it didn’t make as many headlines as it might have, being overshadowed by DeepSeek’s LLM launch and the early signs of what Trump’s comeback means for the world. But beneath the radar, it was a significant step forward for consumer AI Agents, an area where many startups and investors are betting big.

It seems clear that AI will fundamentally reshape the way we all live our lives, but ‘AI’ is such a broad and generic term that it’s almost meaningless when it comes to deeply thinking through what those changes might look like and what opportunities it might create for startups. The problem, though, is that the more specific you dare to be, the more likely you are to be completely off the mark. As the saying goes: “It is better to be vaguely right than exactly wrong.”

We disagree. When things are changing so quickly and we’re continuously developing our thesis on how things might play out, we think it’s better for our work-in-progress opinions to be “exactly wrong rather than vaguely right.” Teasing out the nuance helps us ask better questions of ourselves and founders without getting deafened by generic narrative and buzzwords. We believe coupling a comfort with being “exactly wrong” and a willingness to evolve our views based on strong counterarguments helps us iterate more quickly towards a better perspective.

With that said, we wanted to lay out some initial thinking on AI Agents in consumer journeys to prompt discussion.

What even is an Agent?

To start, we should define our terms.

Our oversimplified (work-in-progress) categorisation is this:

  • When AI is finding, curating and synthesising options as part of a human-led journey, we consider that to be an AI Assistant.

  • When AI is independently taking an action on a consumer’s behalf, we call that an AI Agent.

Already, that’s clearly flawed. The reality is that there is a spectrum of Assistant to Agent within different products and consumer journeys.

On the Assistant end of the spectrum, we think about something like ChatGPT. To be overly reductionist, AI Assistants are really about better Search. This is already a massive deal: evolving from “a list of potentially useful links for people like you” to “the most likely best answer for you” is a step change - 400m weekly users in just over two years from launch isn’t bad! But it remains a fundamentally user-directed journey.

At the opposite end of the spectrum sits some of the ‘futurecasting’ speculation you hear from the most excited advocates, which broadly boils down to: “AI will know everything about you and solve everything for you without lifting a finger.” Do your groceries. Book your holidays. Fill your wardrobe with things that make you look more like Chris Hemsworth. And so on to utopia.

In the messy middle sits Operator’s current apparent level of sophistication. It can take action on your behalf (e.g. pre-filling your basket on Instacart, or finding the highest-rated tour in Rome) but it doesn’t (yet) complete the end-to-end transaction. It’s an Assistant with some agency, so definitely more agentic than ChatGPT, but not an Agent in its strictest sense. At least for now.

Will Agents take off?

A key question we’re debating is: when will consumers be more likely to trust an AI Agent to take action on their behalf, rather than just assist? As exciting as the prospect of outsourcing large portions of our lives to AI Agents might be, there are clear reasons why adoption may be slower than some expect. Here are some key factors that could hold them back:

1. If It’s Hard to Describe

Not all decisions can be boiled down to clear, structured instructions. For visual categories like furniture, apparel and art, the experience of looking through the options isn’t just about finding a match against well-defined criteria; it’s about creating the criteria to begin with. If we don’t know what we want until we see it, can an AI Agent truly deliver the best outcome?

2. If It Takes the Fun Away

Some types of shopping are part function, part enjoyment. Browsing for travel destinations, discovering new fashion brands, or exploring niche wines isn’t just about finding the “best” option - it’s about the process of exploration and discovery. AI Assistants can help narrow down choices, but will users want to miss out on big parts of the fun?

3. If There’s a High FOMO Factor

Too much curation can backfire. Consumers want guidance, but not at the cost of feeling like they’re missing out on options they would have explored themselves, particularly within categories they know well or have strong opinions on. If I trust my own research and instincts, will I really delegate an important choice to an AI Agent?

4. If There’s a High FOFU (Fear of Frustrated User)

The more complex the request, the more opportunities for an AI Agent to get it wrong. Reorder washing powder? Easy. Do the weekly grocery shop? Much harder. Over time, AI Agents will improve at understanding user preferences, but will consumers tolerate the inevitable mistakes along the way or just opt to do it themselves?

5. If “It Depends”

Decision-making isn’t always consistent. A business traveller might prioritise comfort and convenience over cost, while the same person might be price-sensitive when booking a family holiday. Will AI Agents have enough context to accurately adjust for these shifting priorities without needing constant micromanagement?

6. If It’s Subjective

Some choices rely on emotional or subconscious factors - things that are hard for AI to quantify. The difference between two restaurants with similar ratings might come down to ambiance, personal memories, or a brand’s identity. Can AI really make taste-based judgment calls that feel right as well as being the logically ‘optimal’ choice?

7. If I Don’t Trust Its Motives

Trust is a major hurdle. Today, we know search engines are ad-driven, but we also know we can scroll past the paid results. AI Agents shift the paradigm from “Here are some options” to “Here’s what you should do.” Without transparency on who is paying and how, will users believe the AI Agent is truly working in their best interest rather than selling them to the highest bidder?

8. If The Stakes Are High

There’s also a risk of over-delegating the stuff that really matters. Taking the plunge on an AI Agent completing tasks where the monetary, time and/or emotional stakes are low makes sense. But when it comes to more important things like legal matters or personal finances, having the human in the loop as the ultimate decision-maker is a feature, not a bug.

Where will they succeed first?

Many of the above factors boil down to a simple equation inspired by Scott Galloway’s “algebra of disincentives”. In short, true agency is more likely if:

Probability of Bad Outcome x Severity of Bad Outcome < Hassle Factor of DIY 

In other words, the adoption of Agents will be limited to contexts where the downside risk is less than the hassle of a user doing it themselves. The ‘Probability of Bad Outcome’ will drop over time as technology and context improve, but we’re sceptical it will reach zero across the board.
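The inequality above is essentially an expected-value check: delegate only when the expected cost of a bad outcome is smaller than the cost of doing it yourself. Here is a toy sketch of that heuristic; the function name and all numbers are illustrative assumptions, not anything from the post.

```python
# Toy sketch of the "algebra of disincentives" heuristic.
# All numbers below are illustrative assumptions, not data.

def delegate_to_agent(p_bad: float, severity: float, hassle: float) -> bool:
    """Delegate when the expected downside is below the hassle of DIY.

    p_bad:    probability the agent gets it wrong (0..1)
    severity: cost of a bad outcome (arbitrary units)
    hassle:   cost of doing the task yourself (same units)
    """
    return p_bad * severity < hassle

# Reordering washing powder: mistakes are rare and cheap, DIY is a chore.
print(delegate_to_agent(p_bad=0.05, severity=2, hassle=5))    # True

# Booking the family holiday: a mistake costs far more than DIY effort.
print(delegate_to_agent(p_bad=0.2, severity=100, hassle=10))  # False
```

The interesting dynamic is that only `p_bad` reliably falls as models improve; `severity` and `hassle` are properties of the task, which is why low-stakes, high-hassle chores look like the natural beachhead.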

To illustrate, here’s a rough framework we’ve been playing with for where AI is more (or less) likely to be truly agentic for certain tasks:

[Table: where AI is more (or less) likely to be truly agentic for certain tasks]

It’s not perfect, by any stretch: there is likely plenty of ‘exactly wrong’ in here. But as a general framework to kickstart our thinking, we’re finding it pretty useful.

The Big Picture

AI Agents are coming, but their success will depend on more than just capability. Consumer behaviour is driven by psychology, trust, and habit. The best early use cases will be simple, transactional, and low-risk. Over time, as trust builds and models improve, we may see more consumers hand off complex, higher-stakes decisions. But we’re not there yet.

Where do you think AI Agents will break through first? Tell us where we’re exactly wrong.

And if you’re a founder tackling these problems, we’d love to hear from you!
