Yeah, but what’s the agent?

Chris Noessel
3 min read · Jan 15, 2025


One of the threads I’m participating in at my work is clarifying vocabulary around #agenticAI. Conversations have often felt like people are talking at cross-purposes, and just this week I’ve realized something important about the messiness of language and how it’s contributing to this mini tower of Babel.

Wait. These aren’t towers of anything. Read on. Royalty-free image courtesy of PickPik.

“Agentic” builds on the notion of an “agent,” and “agent” has three related but distinct meanings that pertain.

  1. In the broadest sense, an agent is anything that exhibits agency, or some degree of self-determination. In this sense people and animals and plants are all agents in the world. Philosophy uses this sense of agency most often.
  2. In a narrower sense, an agent is anything that exhibits some degree of self-determination in the pursuit of someone else’s goals. A talent agent, for example, is free to negotiate contracts on behalf of a sports figure or artist. The speculative trucker I wrote about in the post “Robot Keggers and Roomba Spies” is an agent working on behalf of the trucking company. I think this sense is the most common one in economics, where it frames the principal-agent alignment problem, and it is the sense I wrote about years ago in Designing Agentive Technology: AI That Works for People (Rosenfeld Media, 2017).
  3. A last narrow sense is the person assisting me in the use of some service. When people call customer support representatives “agents,” this is the sense they mean. I don’t think this is a common usage in tech circles, but it is one that users may bring with them.

The problem is: What you think makes agentic AI agentic depends on which of these senses you have foremost in your mind.

To illustrate this, think of agentic AI as [a supervisor AI] working with [things that execute specialist steps] in a prompted plan to produce a result. Those specialist things could be basic functions, classic AI, or generative AI also displaying a degree of self-determination.

  • Someone with the [agent == self-determination] frame thinks of the “agent” as the whole thing — the supervisor and its specialist things together — because it generates a plan about how to respond to a prompt: self-determination. In this frame, only the step-doers that are themselves generative AI also count as agents. The other things don’t.
  • Someone with the [agent == subordinate worker] frame thinks of the supervisor AI as an assistant, and the things it invokes are the agents, regardless of whether they are functions, classic AI, or generative AI, because they’re working on behalf of the supervisor.
  • Someone with the [agent == assistant] frame would think of the supervisor AI as the agent. None of the step-doers would count as “agents” because they’re out of sight, not directly providing the assistance. Also note that this usage dilutes important distinctions between assistant and agent, and I’m against it.
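To make the shape concrete, here is a minimal, hypothetical sketch of the supervisor-plus-specialists pattern described above. Every name in it (Supervisor, SPECIALISTS, the stubbed steps) is illustrative, not drawn from any real framework; a real system would ask an LLM to generate the plan rather than hard-coding it.

```python
def summarize(text: str) -> str:
    """A 'specialist step': could be a basic function, classic AI, or gen AI."""
    return text[:40] + "..."

def translate(text: str) -> str:
    """Another specialist step, stubbed here as a plain function."""
    return f"[translated] {text}"

# The step-doers the supervisor can invoke.
SPECIALISTS = {"summarize": summarize, "translate": translate}

class Supervisor:
    """The 'supervisor AI': turns a prompt into a plan, then dispatches steps."""

    def plan(self, prompt: str) -> list[str]:
        # A real supervisor would generate this plan with an LLM;
        # here it is hard-coded for illustration.
        return ["summarize", "translate"]

    def run(self, prompt: str) -> str:
        result = prompt
        for step in self.plan(prompt):
            result = SPECIALISTS[step](result)
        return result

print(Supervisor().run("A long report about supply chains and robot keggers"))
```

Depending on your frame, the “agent” here is the whole `Supervisor`-plus-`SPECIALISTS` assembly, only the entries in `SPECIALISTS`, or only the `Supervisor` itself — which is exactly the ambiguity at issue.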

Using one implicit frame while talking to other people operating with a different frame will cause everyone involved to scratch their heads.

“What are they talking about? That’s not an agent!”

My sense is that the industry as a whole is leaning towards [agent == thing with a degree of self-determination]. The combination of the supervisor+specialists is the agent and the thing that makes agentic AI agentic. This agent involves a supervisor AI that can call on other agents and can get deeply nested like a Matryoshka doll. (Aah, there it is.)

I suspect I’ve carried the [agent == subordinate worker] frame, since I first began thinking and writing about agents before large language models were a thing. Self-determination in pre-LLM days was a very limited affair. But now we can ask LLMs to do things like “make a plan for…,” and that is a capability that most people today, even if new to tech agents, will bring with them.

For the purpose of grounding conversations in your team, and being able to compare apples to apples, get clear on which sense you mean.

As I continue to write about agentive tech and related topics, I myself will be keeping in mind that the self-determination sense of agent is the one gaining popularity as agentic AI continues its rise.

Language weird.


Written by Chris Noessel

Chris is a 20+ year UX veteran, author, and public speaker. He delights in finding truffles in oubliettes. Tip me in coffee at ko-fi.com/chris_noessel.
