• cherrari@feddit.org
    3 months ago

    I don’t think so. All AI needs now is formal specs of some technical subject — not even human-readable docs, let alone translations to other languages. In some ways, this is really beautiful.

    • 123@programming.dev
      3 months ago

      Technical specs don’t capture the bugs, edge cases, and workarounds needed for technical subjects like software.

      • cherrari@feddit.org
        3 months ago

        I can only speak for myself, obviously, and my context here is some very recent and very extensive experience applying AI to new software developed internally in the org where I participate. So far, AI has eliminated any need for any kind of assistance with understanding it — and it was definitely not trained on this particular software. Hard to imagine why I’d ever go to SO to ask questions about this software, even if I could. And if it works so well on such a tiny edge case, I can’t imagine it will do a bad job on something used at scale.

        • 123@programming.dev
          3 months ago

          If we go by personal experience: we recently had several people’s time wasted troubleshooting an issue with a very well known commercial Java app server. The AI overview hallucinated a fake system property for addressing the issue we had.

          The person who proposed the change neglected to mention they got it from AI until someone noticed the setting did not appear anywhere in the official system properties documented by the vendor. Now their personal reputation is that they should not be trusted, and they seem lazy on top of it because they could not use their eyes to read a one-page document.

          • cherrari@feddit.org
            3 months ago

            That’s a very interesting insight. Maybe the amount of hallucination depends on whether the “knowledge” was loaded in the form of a prompt vs. training data? In the experience I’m talking about there’s no hallucination at all, but there are wrong conclusions and hypotheses sometimes, especially with really tricky bugs. But that’s normal — the really tricky edge cases are probably not something I’d expect to find on SO anyway…

    • SoftestSapphic@lemmy.world
      3 months ago

      Lol no, AI can’t do a single thing without humans who have already done it hundreds of thousands of times feeding it their data.

      • okmko@lemmy.world
        3 months ago

        I used to push back but now I just ignore it when people think that these models have cognition because companies have pushed so hard to call it AI.

    • skisnow@lemmy.ca
      3 months ago

      The whole point of StackExchange is that it contains everything that isn’t in the docs.

    • rumba@lemmy.zip
      3 months ago

      It can’t handle things it’s not trained on very well — or at least not anything substantially different from what it was trained on.

      It can usually apply rules it’s trained on to a small corpus of data from its training set: “Give me a list of female YA authors,” say. But when you ask it for something outside that pattern (like how many R’s there are in a certain word), it often fails.
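
      The letter-counting point above is easy to see: the task is deterministic and trivially checkable in a few lines of code (a hedged sketch — the function name and example word are just illustrative, not from the thread):

      ```python
      def count_letter(word: str, letter: str) -> int:
          """Count case-insensitive occurrences of a single letter in a word."""
          return word.lower().count(letter.lower())

      # "strawberry" is the word famously miscounted by several LLMs.
      print(count_letter("strawberry", "r"))  # prints 3
      ```

      The contrast is the commenter’s point: a one-line string operation gets this right every time, while a model predicting tokens has no built-in counting step.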