• 0 Posts
  • 65 Comments
Joined 5 months ago
Cake day: November 1st, 2025

  • I think the key here is that this market hasn’t been affected yet, but it more than likely will be. Also, just because you can get the card doesn’t mean you can get all the rest of the components to build a PC. I definitely don’t have the newest, latest, or greatest: no DDR5 RAM, no fancy current-gen or even previous-gen video card. I own a PS5 and a gaming handheld with Linux on it, and both of those have gone up in price instead of down, and will more than likely continue to rise.

    Last-gen hardware will see a price increase, if it hasn’t already, the longer the AI bubble BS goes on. Because you’re right: people will opt for that if they can’t afford the latest and greatest. Are they going to pay $650 for that same 3080? Hard drives are twice as expensive and not something that lasts forever. Mobos can die, and the fabs that make them may in fact be switching to making other components to fuel production of AI chips. RAM isn’t a magical component that never goes bad, and availability of DDR4 and possibly DDR3 is going to dwindle when businesses realize they can’t get the new stuff for their business suite computers. They’re already salvaging RAM, and in some cases hard drives, from their used computers before they sell them off.

    AI is a symptom but it is making things worse and has the trajectory to continue that trend until it crashes and dies.



  • It matters because every time we anthropomorphize generative AI LLMs, we reinforce people’s belief in their ability to tell lies or truths.

    People’s belief is what leads to trust in LLMs and to things like AI psychosis.

    An interesting way to look at it is that AI also can’t tell the truth.

    What it does is generate the next most statistically likely word or words based on the patterns in its training data. So it doesn’t know anything. It doesn’t tell the truth. It doesn’t tell lies. It isn’t an entity. The people behind it are allowing it to present information as factual, and we have no reason to trust them.
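
    As a rough toy sketch (made-up vocabulary and probabilities, not any real model’s internals or API), the whole loop is just “pick a next token from a probability distribution and repeat”:

        import random

        # Toy next-token distribution. In a real LLM these probabilities come from
        # a neural network conditioned on everything generated so far.
        def next_token_probs(context):
            # Hypothetical, hard-coded numbers purely for illustration.
            return {"Paris": 0.7, "London": 0.2, "banana": 0.1}

        context = ["The", "capital", "of", "France", "is"]
        probs = next_token_probs(context)
        # Sample a token in proportion to its probability; "truth" never enters into it.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        context.append(token)
        print(" ".join(context))  # Usually "... is Paris", occasionally "... is banana"

    Nothing in that loop checks facts; it only picks what is statistically plausible.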


  • I understand what you’re both trying to say and I think you’re talking past each other.

    A new person entering the market doesn’t have the option to rely on a backlog of games, and if a component in your rig failed tomorrow, it would be exorbitantly expensive to replace, even in a budget rig.

    The more budget you go, the harder it is to replace existing components, because lots of things are integrated and soldered together as a cost-cutting measure.

    So say you’re a new gamer hoping to buy a rig. New or used, you’re screwed right now, even if all you want to play are AA or indie games and you never touch a AAA game. Buying used is going to be expensive because there’s now high demand for used hardware, since new hardware is exorbitantly expensive.

    So even if what you say is true and we can all just get by with budget hardware, that hardware is still going to be prohibitively expensive, and the reasons are generative AI and capitalism.


  • Have you read the whistleblower’s book? Or even just the excerpts from it that have been floating around for ages?

    I’m curious, because it’s clear to me that the C-suite at Meta and companies like it absolutely do employ some really shitty people, but at the same time, that doesn’t mean you can paint the janitor with the same brush as the Lean In woman who made her personal assistant buy lingerie and model it in her home for her. Or who tried to force another woman to cuddle with her while she was pregnant.

    So what I’m saying is, I don’t agree with the sentiment that everyone who works there is a power-mad executive intent on algorithmic domination of the internet, and for at least some of the programmers in question, a job is a job.

    I will say it’s different if they know what’s going on and actually have the ability to decide to fight against it.

    But I question where your line of complicity starts and ends here.

    I guess I’m also pointing out that part of what makes Meta properties particularly attractive to pedophiles is the same thing that makes them attractive to other online criminals: the encryption.


  • I think this might ignore something else video and image generation is good for, which is propaganda.

    Fake or highly edited video of strikes in Iran, the Netanyahu hand videos circulating online, and random videos of Israeli strikes on Palestine (which I assume are meant to discredit actual video of the atrocities happening there) have been going viral for a while now.

    Advertising is probably one of the few industries that could use AI image and video generation in a way that would actually cut costs, but the downside is that people are increasingly militant against ads, and against AI-generated content including ads, so this isn’t likely to become the reality any time soon.

    If the McDonald’s ad and others like it had been better vetted for AI uncanny-valley aspects and hallucinations that cause trucks to transform into short-bus versions of themselves mid ad spot, etc., the public might not have paid attention at all.

    And lots of those same advertising firms are using AI to their benefit behind the scenes to purchase ad space. But using AI in ads in a public-facing way is a dream out of reach for them for now, because they bungled it so badly.


  • Are you suggesting that we should be able to criminally prosecute people who build end-to-end encryption software and tools? Or algorithms that find people you may know? Because that seems to be key to the Meta lawsuit as far as they are involved. That, and the fact that Meta deliberately misled the public about the safety of the website for kids. Because social media as it exists today isn’t really safe for children, and at best the people who should be held accountable for that are the executives who made the decision to lie.

    But your average programmer isn’t designing tools for the purpose of making kids less safe. They aren’t designing tools for the purpose of being addictive. And they aren’t designing tools for predators. They happen to have designed tools used by predators because of flaws in the design, and because their executives found those flaws advantageous to their bottom line, so they played them up. Leaned in, if you will.

    It was literally part of the 2021 leak that they had discovered their algorithm had certain effects, and the C-suite went about making sure they could use that for monetary gain, to keep people on the site and scrolling. Not just young users, but users of all ages.

    The main thing is that it’s really easy to social engineer on a social media website where people are encouraged to give out all kinds of information that can be used against them in social engineering attacks. That, combined with the addiction fostered there and the encrypted chat methods owned by Meta and used by quite a bit of the world en masse, is what created this situation.


  • I just want to point out one thing.

    It’s pretty difficult to, on one hand, say “we should all adopt electric cars” and, on the other hand, also be “against the state or private entities tracking the citizenry.” If you don’t know that all the new cars, including the new electric vehicles, are spying on their occupants, you haven’t been paying attention.

    On top of that a lot of Americans are realizing they can’t afford a vehicle at all. The subsidies for buying a new electric vehicle have gone up in smoke. So people who already can’t afford a vehicle aren’t gonna be able to buy an EV without the tax credits.

    Combine the two problems and you’re just not going to get the results you want.

    You might be able to sell me on a dumb electric vehicle, but no manufacturer is selling that in the US, and even if one tried, the safety features required by law make it basically an impossibility.


  • You appear to have gone completely around the twist.

    You haven’t shown a logical progression of anything you claim. You don’t point to any current legal precedent, clearly aren’t paying attention to the actual wording being used to draft this bill/law proposal, and are spreading what amounts to FUD.

    About the only truthful logical statement you’ve made is that it’s not about whether you like or dislike these companies.

    Companies are considered lawful entities with rights. The Supreme Court literally just ruled that LLMs do not count as the same kind of legal entity, because if they did, they’d be able to copyright their “work.” So I really do question how you think we go from that to “nobody has free speech because the LLM can’t give legal advice.”

    Speech that causes harm has pretty much never been a protected form of speech in the US, even if I were to humor you and assume that an LLM could have the rights to it.

    And you mean the “bad these companies have wrought”.


  • Whose speech is being limited by limiting LLMs? Because an LLM’s speech cannot be infringed; the LLM doesn’t have basic rights in the way that a human does.

    So what you’re saying is that you don’t want these companies to be held to any legal standard for the information they output (which is different from Reddit, because under Section 230 companies can’t be held responsible in the US for what their users write).

    The chatbot is the output of the company’s data set, and somehow you’re saying the company can’t be held responsible for what that output is and whether it’s dangerous, because that would be curtailing free speech?

    That’s such an interesting take.


  • In your example, say you go to a lawyer and ask legal questions. If the lawyer is not providing legal advice (i.e., taking on the role of being your lawyer and representing you in that matter), they are required by law to state that at the beginning so that they won’t be held liable, because they are a legal professional.

    Wikipedia, Google, ChatGPT, etc., are not legal authorities or legal professionals.

    There is also no human entity to hold legally responsible if the LLM hallucinates or cites a source that is not factual (satire, for instance).

    We also know that the vast majority of people who use chatbots never check the sources the answers come from.

    So. When Wikipedia presents information, it is not giving legal advice. That is borne out in case law.

    The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust Reddit.

    No lawyers are going to Reddit to get help writing legal briefs. We have seen lawyers using LLMs for that, though.