• MagicShel@lemmy.zip
    4 months ago

    A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

    What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

    A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

    So they did. Why are we talking about ChatGPT then? You could just leave that part out. It’s useless. Obviously a fake photo has been manipulated. Why bother asking?

      • plantfanatic@sh.itjust.works
        4 months ago

        Wait, you’re surprised it did what you asked of it?

        There’s a massive difference between asking if something is fake, and telling it it is and asking why.

        A person would make the same type of guesses and explanations if given the same task.

        All this shows is that you and a lot of other people just don’t know enough about AI to have a conversation about it.

        It even says “suggests” — it’s making no claim that the image is real or fake. The lack of basic comprehension is the issue here.

        • Weslee@lemmy.world
          4 months ago

          I think if a person were asked to do the same, they would actually look at the image and make genuine remarks. Look at the points it has highlighted: the boxes are placed around random spots, and the notes attached to those boxes are unrelated to what’s in them (e.g. yellow talks about branches when there are no branches near the yellow box; red talks about a bent guardrail when the red box sits on an undamaged section of the guardrail).

          It has just made up points that “sound correct”. Anyone actually looking at this can tell there is no intelligence behind it.

          • plantfanatic@sh.itjust.works
            4 months ago

            Why would it have to? Both it and the person doing the task already know to do whatever task is put in front of them. For all either of them knows, it’s one of a hundred photos.

            You are extending context and instructions that don’t exist. The situation would be: both are doing whatever task is presented to them. A human who pushed back would fail and be removed; they failed order number one.

            You could also set up a situation where the AI and the human were both capable of asking questions. The AI won’t do what it’s not asked to do — that’s where the comprehension is lacking.

            • sem@piefed.blahaj.zone
              4 months ago

              When people use a conversational tool, they expect it to act human, which it INTENTIONALLY DOES, but without the sanity of a real human.

              • plantfanatic@sh.itjust.works
                4 months ago

                It’s not a conversational tool when you present it with a specific task….

                Do you not understand even the basic premise of how AI works?

                • sem@piefed.blahaj.zone
                  4 months ago

                  When we are talking about LLM chatbots, they have a conversational interface. I am not talking about other types of machine learning. I don’t have time to keep responding.