• fuzzzerd@programming.dev
    3 months ago

    Let’s be generous for a moment and assume good intent: how else would you describe a situation where the LLM doesn’t consider a negative response to its actions because its training and context are limited?

    Sure, it gives the LLM a more human-like persona, but so far I’ve yet to read a better way of describing its behaviour. It is designed to emulate human behaviour, so using human descriptors helps convey the intent.

    • neclimdul@lemmy.world
      3 months ago

      I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy or reasoning, or be held accountable, the same way a human could.