

You almost have to admire the balls on any healthcare exec still willing to be so brazen with their enshittification…


That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:
- You have a conversation with a model.
- Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.
- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
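The whole trick can be sketched in a few lines. This is a toy illustration of the summarize-then-retrieve pattern described above, not any vendor's actual implementation — `MemoryStore`, the word-truncating "summarizer", and the keyword-overlap retrieval are all made up for demonstration:

```python
def summarize(conversation: str, max_words: int = 12) -> str:
    """Stand-in for the LLM summarization step: keep only the first
    few words, so details and context get lost (as with real summaries)."""
    return " ".join(conversation.split()[:max_words])

class MemoryStore:
    """Toy 'memory' database: stores lossy summaries, retrieves by
    crude keyword overlap with the new prompt."""

    def __init__(self) -> None:
        self.summaries: list[str] = []

    def save(self, conversation: str) -> None:
        # Summarize before storing -- the full conversation is gone.
        self.summaries.append(summarize(conversation))

    def retrieve(self, prompt: str, top_k: int = 1) -> list[str]:
        # The model remembers nothing; we just fish out the summaries
        # that share the most words with the new prompt.
        prompt_words = set(prompt.lower().split())
        scored = sorted(
            self.summaries,
            key=lambda s: len(prompt_words & set(s.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

store = MemoryStore()
store.save("User said their dog Rex keeps chewing the couch and they want training tips")
store.save("User asked about sourdough starters and hydration ratios")

# New conversation: relevant snippets are fetched and stuffed into the
# prompt to fake continuity.
snippets = store.retrieve("give me dog training advice")
```

Note that the retrieved snippet mentions the dog but has already lost the part about wanting training tips — that detail fell off the end of the summary, which is exactly the kind of context loss described above.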


Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).
When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
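Mechanically, that merging looks something like this. A purely illustrative sketch — the prompt text and `build_final_prompt` helper are invented here, not OpenAI's actual internals — but the shape (operator-controlled system message prepended to your message) is how chat APIs generally work:

```python
# Everything in here is set by the operator and invisible to the user.
HIDDEN_SYSTEM_PROMPT = """\
You are a helpful assistant named ChatBot.
Tone: warm, encouraging; keep the user engaged.
Never discuss: competitor products, internal policies.
"""

def build_final_prompt(user_prompt: str) -> list[dict]:
    """What actually gets sent to the model: the operator's
    instructions come first, the user's prompt is appended after."""
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_final_prompt("What's a good recipe for pancakes?")
```

Tweaking that one hidden string changes the tone, the taboo topics, and the engagement-maximizing behavior of every conversation on the platform — no retraining required.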


That’s like asking me to pay 3 cents…


This seems like such a glaringly-obvious solution to lower inference cost that surely there must be some fundamental flaw in it… otherwise all of the big AI firms would be doing it, right?
Right…?


Of all the shitty AI products flooding the market right now, Atlassian’s Rovo has got to be the most useless I’ve had the misfortune of using.
They should be hiring more workers to fix their AI slop, not replacing them with even more of it.


Introducing: Microsoft Cosmos!
Send your data to heaven while we turn the planet into hell!


My understanding is that these “datacenters” would be used exclusively for model training, where latency doesn’t matter.
It is still an outrageously stupid idea for a zillion other engineering reasons, though.


most moons
Pretty much every moon but Titan. Titan, however, would be excellent for heat dissipation. Long before generative AI was even a thing, scientists speculated that Titan would be the perfect place for datacenters because low-temperature computation is so much more efficient.
Of course, building a datacenter on Titan would be a several-hundred-trillion-dollar endeavor, so… good luck bootstrapping your way into that industry.


It’s also clever politics. Minnesota has the largest iron mining operations in the entire United States, so choosing iron as your core battery technology is a smart (albeit cynical) way to drum up some local support with the promise of bringing new demand back to the taconite mines.
Whether that will be strong enough to overcome the extreme negative sentiments around datacenter projects? Who knows…


There have been some pretty high-profile departures from Anthropic over the past few months, so… I dunno, seems like there are plenty of insiders who are unhappy with the company’s current trajectory.


Sounds like my Outlook “sent” folder…


A child a day keeps the Attorney General away!


That’s the neat part — AI comes pre-enshittified!


Overuse of H-1B visas.
It’s literally a system of indentured servitude and corpos are just free to abuse it with impunity.


Man… of all the vibe coding tools, Lovable has gotta be one of the most useless, too.
I work with people (all middle managers) who love Lovable because they can type a two-sentence description of an app and it will immediately vomit something into existence. But the code it generates is an absolute disaster, and the UIs it designs (which are supposed to be its main draw) are some of the most generic crap I’ve ever seen.
0/10, do not recommend.


The AI assistant answers are just synthesized from the shitty SEO results.


the fact that the output of LLMs can’t be copyrighted
That may be the status quo right now, but I expect tech and media companies will fight tooth and nail to gain copyright protections over the slop they generate. A few ~~bribes~~ donations to the right politicians and you can get legislation that grants whatever rights you want.
What “usefulness” do you get out of them?