Buried in the story was a deceptively simple question: does your AI agent count as an employee?
At a recent conference, Microsoft executive Rajesh Jha floated a provocative idea. In a future where companies deploy fleets of AI agents, those agents may need their own identities — logins, inboxes, and even seats inside software systems. If so, AI wouldn’t shrink software revenue. It could expand it.


If the AI agent counts as an employee, then the company “employing” it is liable for what it does.
My guess is the argument will be that it’s a tool, not an employee, and therefore the company takes no responsibility. I’m sure that argument won’t fly for long, though. If your air hammer harms someone because the person operating it wasn’t using it correctly, you’re still liable.
What? Companies aren’t liable if the user doesn’t follow the instructions or warnings and hurts themselves.
DeWalt isn’t liable because I was using their mini chainsaw while holding a branch with my bare hand and the saw bounced and cut me. I’m liable for being stupid.
I don’t think you understand the context of the situation I was proposing. I am not supposing that DeWalt would be liable. But let’s say we work in a shop together and I’m using an air hammer to, I dunno, punch rivets. If I, as an employee of that shop, use the air hammer and something involving the air hammer happens to my coworker or a customer or whoever, it is extremely likely that the company I work for would be on the hook. Could they try to penalize me personally? Yes. Could the person who was injured sue me personally? Certainly. Would the company be off the hook if the air hammer malfunctioned, causing injury? Maybe, and at that point I would expect the manufacturer to be liable. But my comment never mentioned the manufacturer.
The context was companies using AI as a tool, not companies manufacturing AI.
Fraud schemes are already being run through workflow systems like n8n, where AI agents are chained together. It didn’t take long for people to build systems that generate deepfake voices mimicking real people, directing targets to buy a product or deposit money into an account. Many videos on this topic have surfaced in Türkiye, particularly on YouTube. If the users and system builders are to be penalized, then of course the activity logs of these agents can be used as evidence.
However, if this is really about keeping certain agents out of the system via per-user license fees, it will completely backfire.
I don’t see how this distinction affects the question of responsibility at all. If anything, “it’s an employee” gives the company more room for deniability.
Lol. Ask Uber how the actions of their employees and contractors aren’t their responsibility.
https://www.bbc.com/news/articles/cq5y5w148p5o
And those rulings cover contracted workers, the very category Uber specifically tries to use these loopholes for!
Facedeer is a well-known AI activist troll, his deflections can generally be ignored
Sheesh, you’re still obsessing over me? What a sad and pointless life you lead.
“More room for deniability” doesn’t mean “perfect universal deniability.”
I have questions about where I said that, but okay.
Emphasis added.