

A digital native is a person raised on the internet. A digital nomad is a person who moves around doing a computer job. And a digital laborer is not a person at all.
With the rise of agentic AI, or generative artificial intelligence tools that can operate without explicit instructions, tech enthusiasts have started promoting the idea of a workforce made up of human laborers working alongside AI tools that serve as “digital laborers.” AI agents, in this still largely speculative vision of the future, will be promoted to full employees.
In its most basic definition, “digital labor is computers doing the work traditionally done by human beings,” Marc Benioff, CEO of the business software giant Salesforce, said in an interview. As for its origins, he said, “I think I made it up, but I’m not sure.” Benioff noted that already, Salesforce is using AI agents for customer service tasks, driving down the total cost of customer support by 17% over nine months. He claimed earlier this year that he would be “the last CEO of Salesforce who only managed humans.”
If this sounds like mere automation or an efficiency tool, proponents insist that it’s different. Like a human employee, these tools would work independently with a bit of management, said Jen Stave, a director of Harvard’s Digital Data Design Institute. (The meaning of “digital labor” has changed from a few years ago, Stave said, when it described humans whose work relied on algorithms and platforms, especially gig workers.)
How the fruits of digital labor will be treated in economic terms is still unsettled, Stave said. As the usage of agentic tools spreads, big questions will emerge around who captures their value. If Company A “hires” a digital laborer made by Company B, for example, and Company A helps the tool grow and mature with its data, who should get the credit (and profits) for making the AI agent work that much better?
Another big question is who will be accountable when the tools mess up, said Stephan Meier, a business professor at Columbia University. Will it be the team that created the bot? Or the human who “hired” it and assigned it a task?
For now, some human guardrails are in place: Salesforce customers unhappy with a digital agent can escalate to a human. That’s important because “agents are built on large language models which are inherently not accurate,” Benioff said. He said that these AI tools would fill in cracks in the labor force, adding that “I don’t see any potential risks.”
Meier sees a few: One is that some organizations might simply ax human workers and replace them with the tools, an approach he said would be “a mistake.” He thinks the AI tools can best operate as assistants to human workers. In the long term, Meier predicts, new jobs will open up for humans displaced by AI. But that transition, in the short term, will not be easy.
The good news for concerned humans is that agentic tools are not really in wide use yet. Some large tech and finance firms are incorporating autonomous AI products to complete suites of tasks previously done by humans. But, at least for now, anthropomorphic programs are not popping up on mainstream organizational charts.
This article originally appeared in The New York Times.