• 0 Posts
  • 17 Comments
Joined 3 years ago
Cake day: July 13th, 2023

  • Currently my favorite piece of art is accidentally transgressive. It is a kinetic sculpture, and fairly unimpressive for what it is. Apparently it is a rip-off of someone else’s style. It is owned by a hotel that put it on the sidewalk corner for whatever reason, and on its own it is generally unimpressive.

    The thing that makes it interesting is that, unlike most sculptures I have seen, this one locks: the spinning parts are prevented from moving unless a special key is used to unlock them. And the hotel locks the sculpture during non-business hours. If you want to see the sculpture move, you have to visit it between 9 and 5 on work days, presumably when the hotel staff can monitor your presence on the public sidewalk just outside their hotel. Otherwise it is locked. Something about putting a derivative sculpture in a public space, then taking steps to prevent the public from enjoying it, is fascinating to me. Although I feel like a speech from a hotel lawyer about potential liability, or whatever reason they so vigilantly lock it, is an essential part of the art that I am missing.




  • At work I do not think their use is ever justifiable, because they rapidly increase the amount of satisficing behavior from my colleagues. I have had many experiences where obviously bad, clearly LLM-generated work was submitted, and it was clear that the person submitting it had just generated the output and handed it over. People turn their brains off.

    The other thing I have noticed about their use is, once you start caring about the quality of your work, their value plummets. If I were to use one for my work I would need to check its output by experimenting with code, doing research, thoroughly considering both sides of an argument, etc. But if I were not going to use one I would do my own work by experimenting with code, doing research, thoroughly considering both sides of an argument, etc. So what is the advantage to using one? Either way I am still putting forth the effort to ensure my work-product is high quality. Going clackity-clack on the keyboard is not the hard part of my job; all the other stuff is.


  • I got my current job a few years back. I made an account thinking it would help. It was basically useless for finding a job. The folks on there that were hiring would all demand you engage with their posts (I guess as a way of increasing “influence”) but would not actually hire.

    It is possible that prospective employers might look at your account during a job interview process, which is why I have kept mine. But it did not help me find a job.


  • “There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.”

    To me this seems like an obvious problem with the chatbots. These things are marketed as “PhD level experts” and so advanced that they are about to change the nature of work as we know it.

    I don’t think the companies or their supporters can make these claims, then turn around and say “well obviously you shouldn’t take its output seriously” when a delusional person is tricked by one into doing something bad.


  • (1) Network effects. People want to use social media that everyone else is using. Once a site achieves a critical mass of users it becomes the obvious choice to join. It also becomes difficult to leave because if you have built up a personal network on most sites, you can’t take it with you.

    (2) Convenience. Most sites don’t require a lot of effort to use. In the past few years this one has surprised me a bit: the level of effort most people are willing to put into trying a new site is basically zero. Using something like Lemmy requires you to read a few paragraphs and make a decision about a home instance. That is too much effort for a lot of people.


  • In Texas they are using personal data collected from ALPRs to accuse women of getting abortions. There were also concerns that personal data collected by period tracker apps would be used to accuse women of getting abortions. You could be doing something that suddenly becomes illegal, and then those data could be used to harm you.

    ICE is using facial recognition and a database of questionable veracity to accuse legal residents of being illegal immigrants. They are collecting facial data of protestors and, apparently, using it to compile a list of domestic “terrorists”. You could be doing absolutely nothing illegal and the state could use your personal data to harm you.

    Social media companies use data they collect about you to try to get you addicted to their products because you are easier to manipulate when you are addicted. They know a lot of their products have harmful impacts on people, but they don’t care because they make more money that way.


  • Last attempt, I swear.

    By digressing to abstraction, good people can and do justify building tech for immoral purposes. It is irrelevant that tech is not inherently good or bad in cases where it is built to do bad things. Talking about potential alternate uses in cases where tech is being used to do bad is just a way of avoiding the issues.

    I have no problem calling Flock’s or Facebook’s tech stack bad, because the intentions behind the tech are immoral. The application of the tech by those organizations is for immoral purposes (making people addicted, invading their privacy, etc.). The tech is an extension of bad people trying to do bad things. Commentary about tech’s abstract nature is irrelevant at that point. Yeah, it could be used to do good. But it’s not. Yeah, it is not in and of itself good or bad. Who cares? This instantiation of the tech is immoral, because its purposes are immoral.

    The engineers who help make immoral things possible should think about that, rather than the abstract nature of their technology. In these cases, saying technology is neutral is to invite the listener to consider a world that doesn’t exist instead of the one that does.




  • “I don’t see how that is the case.”

    It is literally the case. People who have literally made tools to do bad things justified it by claiming that tech is neutral in an abstract sense. Find an engineer who is building a tool to do something they think is bad, and they will tell you that bromide.

    OpenCV is not, in itself, immoral. But OpenCV is, once again, actual tech that exists in the actual world. In fact, that is how I know it is not bad: I use the context of reality, rather than hypotheticals or abstractions, to assess the morality of the tech. The tech stack that makes up Flock is bad; once again, I make that determination by using the actual world as a reference point. It does not matter that some of the tech could be used to do good. In the case of Flock it is not, so it’s bad.


  • As I said before: in a conversation about technology as it actually exists, talking about potentials is not interesting. Yes, all technology has the potential to be good or bad. The massive surveillance tech is actually bad, right now, in the real world.

    The issue with asserting that technology is neutral is that it lets the people who develop it ignore the impacts of their work. The engineers that make surveillance tech make it, ultimately, for immoral purposes. When they are confronted with the effects of their work on society, they avoid reckoning with the ethics of what they are doing by deploying bromides like “technology is neutral.”

    Example: Building an operant conditioning feedback system into a social media app or video game is not inherently bad; you could use it to reinforce good behaviors and deploy it ethically by obtaining the consent of the people you use it on. But the operant conditioning tech in social media apps and video games that actually exists is very clearly and unambiguously bad. It exists to get people addicted to a game or media app so that they can be more easily exploited. Engineers built that tech stack out for the purpose of exploiting people. The tech, as it exists in the real world, is bad.

    When these folks were confronted with what they had done, they responded by claiming that tech is not inherently good or bad. (This is a real thing social media engineers actually said.) They ignored the tech as it actually exists in favor of an abstract conversation about some potential alternative tech that does not exist. The effect is that the people doing harm built a terrible system without ever confronting what it was they were doing.


  • “Technology is neutral” is a bromide engineers use to avoid thinking about how their work impacts people. If you are an engineer working for flock or a similar company, you are harming people. You are doing harm through the technology you help to develop.

    The massive surveillance systems that currently exist were built by engineers who advanced technology for that purpose. The scale and totality of the resulting surveillance states are simply not possible without the tech. The closest alternatives are Stasi-like systems that are nowhere near as vast or continuous. In the actual world, the actual tech is immoral, because it was created for immoral purposes and because it is used for immoral purposes.


  • If you are in a discussion about the development and deployment of technology to facilitate a surveillance state, then saying “technology is neutral” is the least interesting thing you could possibly say on the subject.

    In a completely abstract, disconnected-from-society-and-current-events sense it is correct to say technology is amoral. But we live in a world where surveillance technology is developed to make it easier for corporations and the state to invade the privacy of individuals. We live in a world where legal rights are being eroded by the use of this technology. We live in a world where this technology is profitable because it helps organizations violate individual rights. If you live in the US, as I do, then you live in a world where federal law enforcement agencies have become completely contemptuous of the law and are literally abducting innocent people off the street. They use the technology under discussion here to help them do that.

    That a piece of tech might potentially be used for a not-immoral purpose is completely irrelevant to how it is actually being used in the real world.