Buried in the story was a deceptively simple question: does your AI agent count as an employee?
At a recent conference, Microsoft executive Rajesh Jha floated a provocative idea. In a future where companies deploy fleets of AI agents, those agents may need their own identities — logins, inboxes, and even seats inside software systems. If so, AI wouldn’t shrink software revenue. It could expand it.


I look forward to your vibe-coded copy of Photoshop. I assume you’ll have it whipped up lickety-split?
Someone out there will
Cool, so until that point in time, my point still stands. You can’t just hand-wave, say “it’ll happen eventually,” and expect to be taken seriously.
You picked one arbitrary example and hold it up as proof that no one can build complex apps with AI? You know there’s more than one example of a complex app. Apple has reported an 84% increase in App Store submissions, and that’s pretty much all AI-driven.
Right, right. There’s an AI bubble, and there are AI scams. Of course people will ride the bubble, and scammers will always be with us. That doesn’t mean any of that work is quality, or that it will edge out the other work.
We are debating whether AI can write a complex app. I don’t know what scams have to do with anything. You’re simply asserting that all AI code is a scam. That’s odd, because major companies that don’t scam their customers are shipping AI-generated code into production every day now. For many companies, no humans are writing code anymore. It must not be terrible code, then. In my experience, it’s better than what most humans write. Humans are sloppy and take shortcuts.