The fallacy of treating AI agents as fellow employees

Should we treat AI agents as fellow employees? No.

How’s that for clarity? I do wonder what those who claim we should actually think about how human employees ought to be treated.

It is easy to think of all the ways in which advice advocating equal treatment of humans and agents goes wrong: Don’t expect the AI agent embedded in your marketing workflow to buy Girl Scout cookies. It isn’t going to care if you forget its birthday. It isn’t going to join the March Madness pool (and if it wants to, don’t let it, as it may be very good). It doesn’t care what we think of it. We are not dealing with Commander Data from Star Trek here.

I understand that most of those who claim we should treat AI agents as employees do not mean it completely literally, or at least I hope they don’t. Even as an analogy it is weak, but let’s consider whether any aspects of it hold up.

The right way to think about AI agents

AI agents, like the currently best-known example, ChatGPT, are different from other tools in important ways, but they are similar in most. They can do lots of things, just as a computer can, and a computer is a much better analogy. Where AI agents differ from a computer is that it is much simpler to get them to do the one thing you want done. We encounter them all the time now in search engines and browsers such as Google, Microsoft Edge and Mozilla Firefox. Put in a search term, and you’re likely to get an AI-generated response, whether you want it or not.

The right way to think about AI agents in the workplace is that they turn us, the employees who work with them, into something like computer programmers, with the big plus that we don’t need any programming skills. Here’s an example.

Suppose you work in the marketing department, and one of your tasks is to find out what the ads run by your client’s competitors look like. A generation ago, you’d have to comb through the media yourself: TV, print, radio and so forth. When the internet arrived, and search engines with it, the task became far easier, but still not effortless. You would probably need to find the names of the client’s competitors and then search for their ads online.

With an AI agent, you can just ask it to find the advertisements for trucks sold by competitors to Ford, and you will get a response. The problem is that you might not get a good response, at least not on the first pass. That is true of virtually any tool. The difference is that with other tools, including software, if the tool isn’t giving us what we want, we raise a hand, call the IT department and have someone fix it. That is not how it works with agents. With most agents, you fix it yourself.

There are some tasks for which setting up an agent requires a lot of data, for example, when the answers you want from it are clearly identifiable as “right” or “wrong.” That setup requires a lot of work that someone else will probably do for you, although the latest tools also let you do it yourself if you have the data.

But if the task is like the advertising example above, the responses will fall on a continuum from good to bad rather than being clearly right or wrong. That is where the employee has to work with the agent: figure out what is wrong with the answer it has given and describe the problem in a clear way. The agent may need a clearer definition of what counts as an advertisement; for example, maybe it was missing internet ads. We hope the employee can change the prompt to get a better answer. Over time, the quality of the answers may change because the outside information changes (e.g., what counts as a “truck” now) or because what we need changes. The employee has to keep monitoring the output and improving it.
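To make that iteration concrete, here is a minimal sketch of what the refinement loop can look like when the agent is reached through a large language model API. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompts and competitor names are purely illustrative, and in practice the employee may do the same rewriting in a chat window rather than in code.

# A minimal sketch of the prompt-refinement loop described above.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

def ask_agent(prompt: str) -> str:
    """Send a single prompt to the model and return its text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# First pass: the vague request from the example above.
print(ask_agent("Find the advertisements for trucks sold by competitors to Ford."))

# The employee reviews the answer, notices internet ads are missing,
# and rewrites the prompt with a clearer definition of "advertisement."
print(ask_agent(
    "List current truck advertisements from Ford's competitors "
    "(for example Chevrolet, Ram and Toyota). Include TV, print, radio and "
    "internet ads, including social media and search ads. For each ad, "
    "give the brand, the channel and a one-line description."
))

The point is not the code itself; it is that someone on the team, not the IT department, ends up doing this kind of review and rewriting, over and over.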

Don’t invite your AI agent to happy hour

To the extent that there is anything to the idea of treating an agent like an employee, it holds only on this one dimension: The agent has to be supervised. But the agent is like an extremely dumb employee that will not know whether it is producing something good or bad, or how to make it better, unless the employee takes that task on (agentic machine-learning models can figure this out, but those are not the agents being placed in most jobs).

So, in what sense is an AI agent like an employee? Only in the sense that its output needs to be monitored and corrected. We aren’t setting “goals” for the agent, but we might be for the employee using it. We aren’t doing performance appraisals on the agent, although we might be for the employee using it. We shouldn’t expect it to get better on its own. We can’t switch it to another task without a ton of work. We shouldn’t get angry at it.

And we shouldn’t invite it to happy hour.

Source – https://hrexecutive.com/the-fallacy-of-treating-ai-agents-as-fellow-employees/
