Did nobody really question the usability of language models in designing war strategies?

  • Blueberrydreamer@lemmynsfw.com · 9 months ago

    How is that structurally different from how a human answers a question? We repeat an answer we “know” if possible, assemble something from fragments of knowledge if not, and just make something up from basically nothing if needed. The main difference I see is a small degree of self-reflection, the ability to estimate how ‘good or bad’ the answer likely is, and frankly plenty of humans are terrible at that too.

    • EvolvedTurtle@lemmy.world · 9 months ago

      I would argue that a decent portion of humans are usually ok with admitting they don’t know something

      Unless they are in a situation where they will be punished for not knowing

      My favorite doctor admitted he didn’t know something, and at first I thought, “Man, that’s weird.” But then I thought about all the times I’ve personally had, or heard stories about, doctors who bullshitted their way to a conclusion, like telling me I couldn’t possibly be diagnosed with ADHD at 18.

    • kibiz0r@midwest.social · 9 months ago

      I dare say that if you ask a human “Why should I not stick my hand in a fire?” their process for answering the question is going to be very different from an LLM.

      ETA: Also, working in software development, I’ll tell ya… Most of the time, when people ask me a question, it’s the wrong question and they just didn’t know to ask a different question instead. LLMs don’t handle that scenario.

      I’ve tried asking ChatGPT “How do I get the relative path from a string that might be either an absolute URI or a relative path?” It spat out 15 lines of code for doing it manually. I ain’t gonna throw that maintenance burden into my codebase. So I clarified: “I want a library that does this in a single line.” And it found one.
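
      For illustration, here’s a minimal sketch of the kind of one-liner I mean, assuming Python’s standard urllib.parse rather than whatever library ChatGPT actually pointed me at:

          from urllib.parse import urlparse

          # urlparse().path is the path component of an absolute URI,
          # and the whole string when the input is already a relative path.
          print(urlparse("https://example.com/a/b?q=1").path)  # /a/b
          print(urlparse("a/b").path)                          # a/b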

      An LLM can be a handy tool, but you have to remember that it’s also a plagiarizing, shameless bullshitter of a monkey paw.

      • Blueberrydreamer@lemmynsfw.com · 9 months ago

        “Most of the time, when people ask me a question, it’s the wrong question and they just didn’t know to ask a different question instead.”

        “I’ve tried asking ChatGPT “How do I get the relative path from a string that might be either an absolute URI or a relative path?” It spat out 15 lines of code for doing it manually. I ain’t gonna throw that maintenance burden into my codebase. So I clarified: “I want a library that does this in a single line.” And it found one.”

        You see the irony right? I genuinely can’t fathom your intent when telling this story, but it is an absolutely stellar example.

        You can’t give a good answer when people don’t ask the right questions. ChatGPT answers are only as good as the prompts. As far as being a “plagiarizing, shameless bullshitter of a monkey paw” goes, I still don’t think it’s all that different from the results you get from people. If you ask a coworker the same question you asked ChatGPT, you’re probably going to get a line copied from a Google search that may or may not work.

        • kibiz0r@midwest.social · 9 months ago

          You see the irony right? I genuinely can’t fathom your intent when telling this story, but it is an absolutely stellar example.

          Yes, I did mean for it to be an example.

          And yes, I do think that correctly framing a question is crucial whether you’re dealing with a person or an LLM. But I was elaborating on whether a person’s process of answering a question is fundamentally similar to an LLM’s process. And this is one way that it’s noticeably different. A person will size up who is asking, what they’re asking, and how they’re asking it… and consider whether they should actually answer the exact question that was asked or suggest a better question instead.

          You can certainly work around it, as the asker, but it does require deliberate disambiguation. I think programmers are used to doing that, so it may feel like not that big of a deal, but if you start paying attention to how often people are tossing around half-formed questions or statements and just expecting the recipient to fill in the gaps… It’s basically 100% of the time.

          We’re fundamentally social creatures first, and intelligent creatures second. (Or third, or not at all, depending.) We think better as groups. If you give 10 individuals a set of difficult questions, they’ll bomb almost all of them. If you give the same questions to a group of 10, they’ll get almost all of them right. (There are several You Are Not So Smart episodes on this, but the main one is 111.)

          Asking a question to an LLM is just completely different from asking a person. We’re not optimized for correctly filling out scantron sheets as individuals, we’re optimized for brainstorming ideas and pruning them as a group.

          • Lmaydev@programming.dev · 9 months ago

            If you fed that information into an LLM, I bet you would get different answers.

            That’s information that generally isn’t available to it.

      • fishos@lemmy.world · 9 months ago

        Yeah, and a car uses more energy than me. It still goes faster. What’s your point? The debate isn’t input vs. output. It’s only about output (the ability of the AI).