Agentic AI: Why Researchers Should Lean In, Not Look Away

21 November 2025 | 5 min read | Written by Jenny Starmer

There’s been a lot of talk about agents lately. Not the kind that call you about your car’s extended warranty, nor the secret kind I moonlight as when doing a deep dive on a friend’s first date prospect — the other kind. The AI kind.

If you’ve scrolled LinkedIn in the past few months, you’ve probably seen Agentic AI pop up alongside words like “autonomous,” “reasoning,” and “revolutionary.” And if you’re anything like me, you’ve wondered: Okay, but what does that actually mean? And more importantly, what does it mean for researchers?

That’s exactly where Dale Evernden (EVP of Design and Innovation here at Rival Technologies) started in our recent webinar, Demystifying Agentic AI for Research. Mike Stevens, Founder of Insight Platforms, facilitated the session. A recording of the webinar is available, and I’ve shared some key takeaways below.

So… What makes an AI “agentic”?

As someone who, not all that long ago, literally typed into ChatGPT “explain agentic AI to me like you would a child” (and, possibly more embarrassingly, “is it different from making a custom GPT?”), I appreciated that Mike kicked things off with the same question many of us are quietly asking: is agentic AI really something new — or just a smarter chatbot?

Dale explained it this way: while large language models (LLMs) like ChatGPT sparked the generative AI wave, agents represent the next step. Instead of waiting for a command, they can reason through a sequence of actions, make decisions, and use connected tools to get the job done.

He broke it down into three “classes” of AI tools:

  • Chatbots – trained models that generate responses.
  • AI-powered workflows – where AI is built into a process, like automating data cleaning or coding open ends.
  • Agents – which plan, reason, and act autonomously using external tools or APIs.

Agents are essentially reasoning language models. Give them the right tools and context, and they can work through complex tasks — not just one-off outputs.
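To make that "reason, decide, use tools" loop concrete, here is a minimal, hypothetical sketch. Everything in it (the tool names, the keyword-based `plan` function standing in for an LLM's reasoning) is illustrative, not how any real agent framework or Rival product works:

```python
def word_count(text: str) -> int:
    """Toy 'tool' the agent can call."""
    return len(text.split())

def summarize(text: str) -> str:
    """Toy 'tool': naive one-line summary (first eight words)."""
    return " ".join(text.split()[:8]) + "..."

TOOLS = {"word_count": word_count, "summarize": summarize}

def plan(task: str) -> list[str]:
    """Stand-in for the LLM's reasoning: map a task to a tool sequence."""
    steps = []
    if "how long" in task:
        steps.append("word_count")
    if "summary" in task:
        steps.append("summarize")
    return steps

def run_agent(task: str, context: str) -> dict:
    """Execute the planned steps, collecting each tool's output."""
    results = {}
    for tool_name in plan(task):
        results[tool_name] = TOOLS[tool_name](context)
    return results

feedback = "The drink was refreshing and the service was quick and friendly"
out = run_agent("how long is this and give me a summary", feedback)
print(out)
```

The point of the sketch is the shape, not the logic: a chatbot returns one response, while an agent plans a sequence of tool calls and works through them.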

That distinction echoes what we presented at IIEX AI 2025: the real breakthrough in AI isn’t faster automation — it’s systems that can make contextual, autonomous decisions.

From sidekick to superpower

Dale shared a framework (and it’s one we’ve been using internally at Rival) that breaks AI’s role into three categories:

  • Assist – AI that helps with the heavy lifting.
  • Amplify – AI that makes the work better.
  • Unlock – AI that opens doors we couldn’t even knock on before.

“AI won’t replace researchers, but researchers who use AI will replace those who don’t.”

That line stuck. Not because it’s alarmist, but because it reframes adoption.

That collaborative mindset — augmentation over automation — is at the core of our human-in-the-loop approach at Rival. The human element isn’t going anywhere; it just needs to find a new rhythm alongside agentic tools.

Proof in practice: Smart Probe and Insight Reels

If you’ve seen any of Rival’s recent launches, you know we’ve been testing and building in the agentic AI space for a while. During the session, Dale showcased two tools that capture what agentic looks like in practice.

Smart Probe: Listening That Learns

Traditional survey logic follows a script: if X, then Y. Smart Probe doesn’t need that rigidity. It listens, interprets, and decides whether a response deserves a follow-up — like a thoughtful moderator might in a live conversation.

If a respondent says, “The drink was refreshing,” Smart Probe might ask, “Can you tell me more about what you meant by refreshing?” And if the answer shows enough depth, it moves on.

Behind the curtain, it’s powered by Thoughtfulness Scoring — a model that evaluates responses across ten dimensions, from clarity to specificity. The result? More natural dialogue and richer data.
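As a rough mental model (not Rival's actual scoring model), score-driven probing can be sketched like this: rate a response on a few dimensions, then follow up only when the average falls below a threshold. The two dimensions and their crude heuristics here are invented stand-ins; the real Thoughtfulness Scoring uses ten dimensions:

```python
def score_response(text: str) -> dict:
    """Toy stand-ins for two dimensions (specificity, clarity)."""
    words = text.split()
    specificity = min(len(words) / 20, 1.0)   # longer answers: more detail
    clarity = 1.0 if len(words) > 2 else 0.3  # crude proxy
    return {"specificity": specificity, "clarity": clarity}

def should_probe(text: str, threshold: float = 0.6) -> bool:
    """Probe when the average dimension score is below the threshold."""
    scores = score_response(text)
    return sum(scores.values()) / len(scores) < threshold

# A terse answer triggers a follow-up; a detailed one lets the survey move on.
print(should_probe("It was refreshing."))
print(should_probe("It was refreshing because the citrus cut the sweetness "
                   "and it wasn't too carbonated, which I usually dislike."))
```

The useful idea is the decision point: the system evaluates depth per response, rather than firing the same scripted follow-up at everyone.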

This kind of dynamic, context-aware probing has huge potential across industries — from retail to fintech to QSR — as seen in Reach3 Insights’ Gen Z + AI in QSR research and Rival’s AI-enabled insight communities. It’s not just about speed; it’s about relevance — asking better questions because the system actually understands the answers.

Insight Reels: Turning insights into powerful stories

From my perspective, Insight Reels might be the clearest example of agentic AI amplification in action.

The tool takes text and video feedback, runs thematic analysis, identifies key themes, and automatically stitches together short, shareable videos to bring insights to life. It’s AI-powered storytelling with researchers still in the loop — a blend of automation and artistry that actually feels useful.

If you missed Dale’s walkthrough during our AI in market research webinar, it perfectly captured why storytelling and design still matter — even when machines are doing the stitching.

It’s about augmentation, not automation.

From assistants to colleagues

Mike made a great point: the term “agent” is being used everywhere — and often inconsistently. Some people are slapping the label on any old semi-automated workflow.

Dale clarified that true agentic systems reason across tools: they don't just execute one action at a time, they plan multi-step processes (what he called multi-agent pipelines) where each agent handles a different part of the research journey, from analysis to storytelling.

Rather than one AI doing everything, we’re building systems of agents, each designed for a specific job to be done.
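A multi-agent pipeline can be sketched as a chain of single-purpose steps, each handing its output to the next. The agent names and keyword-matching logic below are purely illustrative, not Rival's implementation:

```python
def analysis_agent(responses: list) -> dict:
    """Group raw feedback into crude themes by keyword (stand-in for an LLM)."""
    themes = {}
    for r in responses:
        key = "taste" if "refreshing" in r.lower() else "service"
        themes.setdefault(key, []).append(r)
    return themes

def storytelling_agent(themes: dict) -> str:
    """Turn themes into a one-line narrative (stand-in for reel generation)."""
    parts = [f"{name}: {len(quotes)} mention(s)" for name, quotes in themes.items()]
    return "; ".join(parts)

def run_pipeline(responses, steps=None):
    """Chain the agents; in practice a researcher reviews each step's output."""
    steps = steps or [analysis_agent, storytelling_agent]
    data = responses
    for step in steps:
        data = step(data)
    return data

story = run_pipeline(["So refreshing!", "Staff were friendly",
                      "Refreshing citrus notes"])
print(story)
```

The design choice this illustrates: each agent has one job, so a human can inspect (or swap out) any single stage without touching the rest.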

We’ve started bringing that framework to life through Rival's Unstructured Data Agent. It’s built to help insight teams make sense of qualitative feedback at scale. By combining a few familiar capabilities — Thoughtfulness Scoring, AI Probing, AI Summarization, and Insight Reels — the Unstructured Data Agent connects the dots between what people say and what they mean.

The goal isn’t to replace qualitative judgment; it’s to handle the heavy lifting so researchers can focus on interpretation and impact.

It’s a natural extension of how researchers already think — but now we can orchestrate instead of just execute.

Keeping Humans in the Loop

Of course, none of this comes without caution. (Anyone else still unable to get Skynet off their mind when we talk about AI? Just me? 😅)

Dale was honest about the realities: hallucinations, bias, and privacy are still real concerns. The solution isn’t to remove humans — it’s to keep them strategically in the loop.

A few of Dale's Agentic AI best practices that are worth remembering:

  • Use explicit reasoning chains – choose agents that show the how and why behind their actions so researchers can review and trust their logic.
  • Build multi-agent pipelines – let different agents handle different steps — drafting, testing, analyzing, reporting — while researchers stay in charge of the overall story.
  • Integrate gradually – start with augmentation, not automation. Let agents draft, summarize, or suggest — and keep researchers reviewing before anything goes live.
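The first of those practices can be sketched in a few lines: log each action an agent takes together with the reason behind it, so a researcher can audit the run afterward. The function and step names here are hypothetical:

```python
actions = []  # running audit log of (step, why, result)

def act(step, reason, fn, *args):
    """Run one agent step and record what was done and why."""
    result = fn(*args)
    actions.append({"step": step, "why": reason, "result": result})
    return result

def draft_summary(text: str) -> str:
    """Toy 'draft' step: keep only the first sentence."""
    return text.split(".")[0] + "."

summary = act("draft",
              "respondent gave a long answer; condense before review",
              draft_summary,
              "The drink was refreshing. I would buy it again.")

for a in actions:
    print(f"{a['step']}: {a['why']} -> {a['result']}")
```

With a log like this, "trusting the agent" becomes "reviewing its stated reasoning", which is the point of the best practice.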

And, of course, the fundamentals of responsible adoption still stand:

  • Keep researchers involved at every stage.
  • Be transparent when AI contributes to your work.
  • Protect respondent data at all costs.
  • Test tools gradually; don’t outsource your craft to novelty.

Agentic AI in 2026: Vertical agents and research innovation

Toward the end of the session, Dale offered a glimpse of what’s next — and it’s easy to see why 2026 could be the year agentic AI really goes mainstream.

The next wave will be vertical-specific agents — purpose-built systems for industries like healthcare, finance, or CPG, trained on their unique data and decision logic. That’s where general-purpose AI starts to feel less like a tool and more like a true collaborator.

It’s a natural evolution for researchers too — from experimenting with AI to embedding it directly into how insights are discovered, tested, and shared.

Leaning in

If there’s one key takeaway from Demystifying Agentic AI for Research for me to leave you with, it’s this: the researchers who thrive in this new landscape won’t be the ones who resist change — they’ll be the ones who get curious about it.

Agentic AI isn’t here to replace human thinking; it’s here to stretch it, handling the mechanics so we can spend more time on meaning.

Written by Jenny Starmer

Senior Events & Marketing Specialist
