The AI Glow-Up We Actually Need: Less Hype, More Real Work

14 December 2025 | 3 min read | Written by Jenny Starmer

Open LinkedIn on any given morning and you’ll see it: a parade of shiny features, dramatic “AI breakthroughs,” and teams claiming they revolutionized their entire workflow over the weekend. But behind all the noise, most leaders are wrestling with a more sobering reality — AI still isn’t making work easier. It’s often just adding complexity, uncertainty, and yet another tool to evaluate.

The gap between hype and helpfulness is exactly what Andrew Reid, Founder and CEO of Rival Technologies, unpacked in his recent article for Entrepreneur.

Cut the AI theater and solve real problems

The thing about AI right now is that it’s moving so quickly you barely finish a thought before it’s outdated. Or as Andrew puts it, “anything you say feels outdated almost immediately.” And yet, despite that pace, so many teams are still focused on looking innovative instead of asking a much simpler question: Is this actually making anything easier?

Too many features exist only because they look exciting. As Andrew points out, they “don’t actually make anyone’s life easier.” And this isn’t just a vibes-based problem. McKinsey’s research on the state of AI shows that while most companies now use AI somewhere in the business, very few have scaled it in ways that deliver real, enterprise-level impact — often because they haven’t tied their initiatives to clear, high-value use cases.

The fix is simple: start where AI removes drudgery, not where it creates fanfare.

“Humans own the loop” — Trust is the new KPI

Andrew raises a truth that the insights industry needs to hear more often: AI is only useful when people trust its outputs.

Our clients want to know: Can I trust this? Will it help me move faster? Will it make better decisions easier, not harder? 

That trust depends on keeping humans firmly in the loop. Review chains, explainability, and clear workflows matter just as much as sophistication.

If you want a practical framework for what trustworthy AI looks like, Andrew and Dale Evernden, EVP and Founding Partner at Rival, broke it down during Quirk’s Virtual - AI & Innovation summit.

When AI and agents can show their reasoning, people stay confident — and in control.

If the AI gets it wrong, even once, it can send you down the wrong path.

From tools to agents to smarter systems

Most companies begin their AI journey with isolated helpers: a summarizer here, a sentiment scorer there. Useful, but disconnected. Andrew argues that the real shift happens when these capabilities link together into agents that understand your goal and take on the messy work.

[Agents] take on the repetitive, noisy work so people can focus on thinking.

That’s the same philosophy behind our Unstructured Data Agent, which brings structure, context, and linkage to open ends, transcripts, chat logs, and video feedback.

Unstructured data has always contained the emotional and contextual heart of research — now we finally have tools built to surface it.

And for those wanting a digestible walkthrough of what “agentic AI” actually means (minus the jargon), our Demystifying Agentic AI webinar is a great primer.

Experimentation is now a leadership skill

Here’s a line that stuck with me:

You don’t accidentally build a good AI workflow between back-to-back Zoom calls.

Exactly. Good AI doesn’t emerge from pressure or panic — it comes from intentional space to experiment. Andrew’s point is a reminder that organizations need room to test ideas, break things safely, and learn what actually works for their teams.

That’s why practices like Rival’s dedicated “AI Days” resonate so much. They give people permission to explore without the immediate expectation of delivery or perfection. And frankly, more companies should build this kind of structured play into their operating rhythm.

Curiosity isn’t a soft skill anymore; it’s an operational advantage. And if you’re curious how experimentation can translate into more flexible, less burdensome research workflows, we shared a few recent examples here.

What leaders should do next

  • Start small, where AI adds genuine value. Speed matters more than spectacle.
  • Keep humans in control. Trust beats automation.
  • Connect your tools. Agentic workflows scale; isolated features don’t.
  • Make time to experiment. Innovation doesn’t happen in the margins.
  • Lean into your expertise. AI amplifies what you already know.

Andrew ends with a reminder:

If you keep the focus on solving real problems, not just selling the AI story, you’ll be in a better position when the next big change drops.

And in this landscape, the next big change is probably already loading (or is scheduled to post on someone’s LinkedIn feed tomorrow at 8:30 a.m.).

Written by Jenny Starmer, Senior Events & Marketing Specialist
