The actor told an audience in London that AI was a “burning issue” for actors.

      • Lmaydev@programming.dev · +3 / -9 · 1 year ago

        Some AIs are more intelligent than the average person.

        Ask a normal person to do the tasks ChatGPT can and I bet the results would be even worse.

        • 42Firehawk@lemmy.zip · +15 / -1 · 1 year ago

          Ask ChatGPT to do things a normal person can, and it also fails. ChatGPT is a tool, a particularly dangerous Swiss Army chainsaw.

          • Lmaydev@programming.dev · +3 / -7 · 1 year ago

            I use it all the time at work.

            Getting it to summarize articles is a really useful way to use it.

            It’s also great at explaining concepts.
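
            For doing that kind of summarizing programmatically rather than in the chat window, here is a minimal sketch using the official openai Python package (it assumes an OPENAI_API_KEY in the environment, and the model name is only an example):

                # Sketch: summarize an article with the OpenAI chat API.
                from openai import OpenAI

                client = OpenAI()  # reads OPENAI_API_KEY from the environment

                def summarize(article_text: str) -> str:
                    """Ask the model for a short, three-bullet summary."""
                    response = client.chat.completions.create(
                        model="gpt-4o-mini",  # example model name
                        messages=[
                            {"role": "system",
                             "content": "Summarize the article in three bullet points."},
                            {"role": "user", "content": article_text},
                        ],
                    )
                    return response.choices[0].message.content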

            • abbotsbury@lemmy.world · +10 · 1 year ago

              > It’s also great at explaining concepts.

              Is it? Or is it just great at making you think that? I’ve seen many ChatGPT outputs “explaining” something I’m knowledgeable about, and they were deliriously wrong.

              • gedaliyah@lemmy.world · +4 · 1 year ago

                I agree. I have very specialized knowledge in certain areas, and when I’ve tried to use ChatGPT to supplement my work, it often misses key points or gets them completely wrong. If it can’t process the information, it will err on the side of creating an answer, whether it is correct or not, and whether it is real or not. The creators call this “hallucination.”

              • Lmaydev@programming.dev · +3 / -5 · 1 year ago

                Yeah, it is, if you prompt it correctly.

                I basically use it instead of reading the docs when learning new programming languages and frameworks.

                • abbotsbury@lemmy.world · +5 / -1 · 1 year ago

                  That’s great, but it only works until it doesn’t, and you won’t know when that happens unless you’re already knowledgeable from a real source.

                  • Lmaydev@programming.dev · +2 / -2 · 1 year ago

                    You know it doesn’t work when you try it, and if you tell it that it doesn’t work, it’ll usually correct itself.

                • nickwitha_k (he/him)@lemmy.sdf.org · +4 · 1 year ago

                  A coworker tried to use it with a well-established Python library, and it responded with a solution involving a class that did not exist.

                  LLMs can be useful tools, but be careful about trusting them too much - they are great at what is best described as “bullshitting”. It’s not even “trust but verify”; it’s more “be skeptical of anything it says”. I’d encourage you to actually read the docs, especially those for libraries, as they will give you a deeper understanding of what’s actually happening and make debugging and innovating easier.
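
                  For example, a quick sanity check is enough to catch that kind of invented name before wasting time on it; here is a minimal sketch (the module and attribute names below are only illustrative):

                      # Sketch: confirm an LLM-suggested name actually exists in an installed library.
                      import importlib

                      def api_exists(module_name: str, attr_name: str) -> bool:
                          """Return True if attr_name is a real attribute of module_name."""
                          try:
                              module = importlib.import_module(module_name)
                          except ImportError:
                              return False
                          return hasattr(module, attr_name)

                      print(api_exists("json", "JSONDecoder"))  # True: a real class
                      print(api_exists("json", "FancyParser"))  # False: a made-up name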

                  • Lmaydev@programming.dev · +3 · 1 year ago

                    I’ve had no problem using them. The more specific you get, the more likely they are to do that. You just have to learn how to use them.

                    I use them daily for refactoring and things like that without issue.

      • R0cket_M00se@lemmy.world · +8 / -6 · 1 year ago

        That’s why QA will still exist.

        Plus when I say “AI will kill data entry jobs” I don’t mean ChatGPT 3.5/4.0; I’m talking about either a dedicated SaaS offering or a future LLM intended for deployment in individual enterprise environments, trained specifically on company data, alongside cloud and data engineering.

        Keep downvoting the guy who literally works in IT and is seeing these changes happen in real time; I’m sure you all know better than I do.

      • Lmaydev@programming.dev · +3 / -2 · edited · 1 year ago

        That’s literally what computer programs are. A large part of development is making sure end users do things correctly.

        It’s a perfect task for AI. In fact, most of it is achievable with standard coding.
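
        As a sketch of that kind of standard coding: ordinary validation logic is enough to steer users toward correct input, no model involved (the field names below are only illustrative).

            # Sketch: plain data-entry validation, no AI required.
            import re
            from datetime import date

            def validate_entry(record: dict) -> list[str]:
                """Return human-readable problems found in a data-entry record."""
                errors = []
                if not record.get("name", "").strip():
                    errors.append("Name is required.")
                if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
                    errors.append("Email address looks malformed.")
                try:
                    date.fromisoformat(record.get("start_date", ""))
                except ValueError:
                    errors.append("Start date must be YYYY-MM-DD.")
                return errors

            print(validate_entry({"name": "Ada", "email": "ada@example.com",
                                  "start_date": "2024-01-15"}))   # []
            print(validate_entry({"name": "", "email": "nope",
                                  "start_date": "soon"}))          # three problems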

        • R0cket_M00se@lemmy.world · +2 / -9 · 1 year ago

          These troglodytes probably couldn’t even find their way around a terminal; don’t worry about what they think can and can’t be done with LLMs.

            • Lmaydev@programming.dev · +2 / -4 · 1 year ago

              Not “knowing” doesn’t have anything to do with AI performance. That’s a very human-centric view.

            • R0cket_M00se@lemmy.world · +3 / -9 · 1 year ago

              I work in enterprise IT networking and systems and don’t give a fuck about your shitty home server.

              AI will do the bulk of the work, and humans will QA it. It’s not that fucking hard to understand. No one here except you is focusing on the fact that it can’t actually think for itself; no one ever said it was going to do its job without any kind of oversight.

              Go back to being a hobbyist and let us professionals decide what can and can’t be done.

              • jaek@lemmy.world · +4 · 1 year ago

                > I work in enterprise IT networking and systems

                I’ll bet you’re an MSP monkey or a DC tech

                • R0cket_M00se@lemmy.world · +1 / -1 · 1 year ago

                  I’m actually the architect at a network operations center for a company that supports more than 3,000 users across the entire US, but I can see that I’ve hurt a bunch of self-hosters’ feelings around here by claiming that AI will take over the majority of unskilled computer labor.