• hokage@lemmy.world
    link
    fedilink
    arrow-up
    241
    arrow-down
    3
    ·
    1 year ago

    What a silly article. $700,000 per day is ~$256 million a year. That’s peanuts compared to the 10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it’s basically MS’s LLM play at this point.
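
    Rough math, using the article’s numbers plus the ~$1B/year total-burn guess from a reply below (a sketch, not gospel):

    ```python
    # Back-of-envelope runway math using the figures in this thread.
    DAILY_COST = 700_000          # reported ChatGPT running cost, USD/day
    MS_FUNDING = 10_000_000_000   # Microsoft's reported investment, USD

    annual_inference = DAILY_COST * 365
    print(f"Running ChatGPT alone: ${annual_inference / 1e6:.0f}M/year")  # ~$256M

    # On the $700k/day figure alone the runway is ~39 years; "about a decade"
    # only holds if total burn (training, salaries, etc.) is closer to
    # ~$1B/year, as a reply below estimates.
    for total_burn in (annual_inference, 1_000_000_000):
        print(f"Burn ${total_burn / 1e9:.2f}B/yr -> runway {MS_FUNDING / total_burn:.0f} years")
    ```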

      • Altima NEO@lemmy.zip
        link
        fedilink
        English
        arrow-up
        35
        ·
        1 year ago

        Yeah, where the hell do these posters find these articles anyway? It’s always from blogs that repost stuff from somewhere else.

    • Wats0ns@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      42
      ·
      1 year ago

      OpenAI’s biggest spending is infrastructure, which is rented from… Microsoft. Even if the company folds, they will have given back to Microsoft most of the money invested.

      • fidodo@lemm.ee
        link
        fedilink
        English
        arrow-up
        25
        ·
        1 year ago

        MS is basically getting a ton of equity in exchange for cloud credits. That’s a ridiculously good deal for MS.

    • monobot@lemmy.ml
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      2
      ·
      1 year ago

      While the title is clickbait, they do say right at the beginning:

      *Right now, it is pulling through only because of Microsoft’s $10 billion funding*

      Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. $700k is only one model; there are others, plus making new ones and running the company. It is easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.

    • lemmyvore@feddit.nl
      link
      fedilink
      English
      arrow-up
      12
      arrow-down
      2
      ·
      1 year ago

      I mean, you’re correct in the sense that Microsoft basically owns their ass at this point, and that Microsoft doesn’t care if they make a loss because it’s sitting on a mountain of cash. So one way or another Microsoft is getting something cool out of it. But at the same time it’s still true that OpenAI’s business plan was unsustainable, hyped hogwash.

      • chiliedogg@lemmy.world
        link
        fedilink
        English
        arrow-up
        20
        ·
        1 year ago

        Their business plan got Microsoft to drop 10 billion dollars on them.

        None of my shitty plans have pulled that off.

        • lemmyvore@feddit.nl
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          If they got any of that into their own pockets kudos to them.

          Mainly they used it to pay for the tech and research and it’s all reverting back to Microsoft eventually. Going bankrupt is not quite the same as being acquired.

      • fidodo@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        Also, their biggest expenses are cloud expenses, and they use the MS cloud, so that basically means that Microsoft is getting a ton of equity in a hot startup in exchange for cloud credits which is a ridiculously good deal for MS. Zero chance MS would let them fail.

    • R0cket_M00se@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Almost every company uses either Google or Microsoft Office products, and we already know Microsoft is working on an AI offering for O365 integration. They can see the writing on the wall here and are going to profit massively as they include it in their E5 license structure, or invent a new tier that includes AI. Then they’ll recoup that investment in months.

  • simple@lemm.ee
    link
    fedilink
    English
    arrow-up
    144
    arrow-down
    3
    ·
    1 year ago

    There’s no way Microsoft is going to let it go bankrupt.

    • jmcs@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      68
      arrow-down
      2
      ·
      1 year ago

      If there’s no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.

      • JeffCraig@citizensgaming.com
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        2
        ·
        1 year ago

        Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from it.

        It won’t take long for them to recoup their investment in OpenAI.

        If OpenAI had been more responsible in how they released ChatGPT, they wouldn’t be facing this problem. Just completely opening Pandora’s box because they were racing to beat everyone else out was extremely irresponsible, and if they go bankrupt because of it, then whatever.

        There’s plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.

        I’m also not sure nVidia is making the right decision tying their company to AI hardware. Sure, they’re making mad money right now, but just like the crypto space, that can dry up instantly.

        • dartos@reddthat.com
          link
          fedilink
          English
          arrow-up
          14
          ·
          1 year ago

          I don’t think you’re right about nvidia. Their hardware is used for SO much more than AI. They’re fine.

          Plus their own AI products are popping off rn. DLSS and their frame generation one (I forget the name) are really popular in the gaming space.

          I think they also have a new DL-based process for creating stencils for silicon photolithography which, in my limited knowledge, seems like a huge deal.

      • Gond0r@lemmy.nz
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        Couldn’t they charge a subscription? Or sell credits?

        Genuine question.

    • Tigbitties@kbin.social
      link
      fedilink
      arrow-up
      25
      ·
      1 year ago

      That’s ~$260 million. There are 360 million paid seats of Microsoft 365, so they’d have to raise prices by about $0.72 per seat per year to cover the cost.
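
      Quick sanity check of that number (it lands between $0.71 and $0.73 depending on how you round the annual cost):

      ```python
      annual_cost = 700_000 * 365   # ~$255.5M/year to run ChatGPT
      paid_seats = 360_000_000      # Microsoft 365 paid seats, per the comment
      print(f"${annual_cost / paid_seats:.2f} per seat per year")  # -> $0.71
      ```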

  • Elderos@lemmings.world
    link
    fedilink
    English
    arrow-up
    100
    arrow-down
    5
    ·
    1 year ago

    That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

    • glockenspiel@lemmy.world
      link
      fedilink
      English
      arrow-up
      64
      arrow-down
      3
      ·
      1 year ago

      I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

      What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn’t supposed to do.

      I’ve noticed it has become worse at rubber ducking non-trivial coding prompts. I’ve noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr devs.

      A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

      • Windex007@lemmy.world
        link
        fedilink
        English
        arrow-up
        69
        arrow-down
        5
        ·
        1 year ago

        As a Sr. Dev, I’m always floored by stories of people trying to integrate chatGPT into their development workflow.

        It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.

        Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

        ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

        • JackbyDev@programming.dev
          link
          fedilink
          English
          arrow-up
          35
          arrow-down
          2
          ·
          1 year ago

          Search engines aren’t truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it’s just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

          • Windex007@lemmy.world
            link
            fedilink
            English
            arrow-up
            8
            arrow-down
            1
            ·
            1 year ago

            I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I still place ChatGPT in a league of its own.

            I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from Stack Overflow and it just worked” memes. Maybe I underestimate how many people in the industry are that fundamentally different from how we work.

            Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.

            If you have like… Design or patterning questions, bring that to the team. We’ll run through it together with the benefits of having the contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.

            The additional time required to validate a chatGPT generated piece of code could have instead been spent invested in the dev to just do it right and to properly fit within our context the first time, and the dev will be smarter for it and that investment in the dev will pay out every moment forward.

            • JackbyDev@programming.dev
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              I guess I see your point. I haven’t asked ChatGPT to generate code and tried to use it except for once ages ago but even then I didn’t really check it and it was a niche piece of software without many examples online.

        • SupraMario@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          2
          ·
          1 year ago

          Don’t underestimate C levels who read a Bloomberg article about AI to try and run their entire company off of it…then wonder why everything is on fire.

        • flameguy21@lemm.ee
          link
          fedilink
          English
          arrow-up
          6
          ·
          1 year ago

          Honestly once ChatGPT started giving answers that consistently don’t work I just started googling stuff again because it was quicker and easier than getting the AI to regurgitate stack overflow answers.

        • ewe@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          1 year ago

          Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

          Not me, but my boss would… wait a minute…

      • bmovement@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        2
        ·
        edit-2
        1 year ago

        Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.

        Edit: shit, maybe I’m too dependent on it.

        • JimmyMcGill@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          2
          ·
          1 year ago

          I’m also having a good time with copilot

          I’m considering asking my company to pay for the subscription, as I can justify that it’s worth it.

          Yes, many times it is wrong, but even if it’s only 80% correct, at least I get a suggestion for how to solve an issue. Many times it suggests a function and the code snippet has something missing, but I can easily fix or improve it. Without it I would probably not know about that function at all.

          I also want to start using it for documentation and unit tests. I think that’s where it will really be useful.

          Btw if you aren’t in the chat beta I really recommend it

          • Jerkface@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 year ago

            Just started using it for documentation, really impressed so far. Produced better docstrings for my functions than I ever do in a fraction of the time. So far all valid, thorough and on point. I’m looking forward to asking it to help write unit tests.
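
            To give a concrete (made-up) example of the kind of thing it produces, here’s a small invented function with a generated-style docstring and a couple of pytest tests; illustrative only, not actual Copilot output:

            ```python
            def chunk(items, size):
                """Split a sequence into consecutive chunks of at most `size` items.

                Args:
                    items: The sequence to split.
                    size: Maximum chunk length; must be a positive integer.

                Returns:
                    A list of lists where every chunk except possibly the last
                    has exactly `size` elements.

                Raises:
                    ValueError: If `size` is not positive.
                """
                if size < 1:
                    raise ValueError("size must be a positive integer")
                return [list(items[i:i + size]) for i in range(0, len(items), size)]

            def test_chunk_splits_evenly():
                assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

            def test_chunk_keeps_remainder():
                assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
            ```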

            • JimmyMcGill@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 year ago

              It honestly seems better suited for those tasks because it really doesn’t need to know anything that you’d have to tell it otherwise.

              The code is already there, so it can get literally all the info that it needs, and it is quite good at grasping what the function does, even if sometimes it lacks the context of the why. But that’s not relevant for unit tests, and for documentation that’s where the user comes in. It’s also why it’s called copilot, you still make the decisions.

    • Gsus4@feddit.nl
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      1
      ·
      edit-2
      1 year ago

      But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were and then sell that information or repackage use-cases as an individual added-value service.

    • Immersive_Matthew@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      1 year ago

      I am unsure about the free version, but I really am very surprised by how good the paid version with the code interpreter has gotten in the last 4-6 weeks. Feels like I have a C# syntax guru on 24/7 access. It used to make lots of mistakes a couple months ago, but rarely does now, and if it does, it almost always fixes them in the next code edit. It has saved me untold hours.

  • merthyr1831@lemmy.world
    link
    fedilink
    English
    arrow-up
    83
    arrow-down
    1
    ·
    1 year ago

    I mean apart from the fact it’s not sourced or whatever, it’s standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use openAI with minimal if any costs, even at scale).

    Once everyone’s using your product over competitors who couldn’t afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you’re the biggest player in the market.

    It’s just Uber’s business model.

    • some_guy@lemmy.sdf.org
      link
      fedilink
      English
      arrow-up
      27
      arrow-down
      1
      ·
      1 year ago

      The difference is that the VC bubble has mostly ended. There isn’t “free money” to keep throwing at a problem post-pandemic. That’s why there’s an increased focus on Uber (and others) making a profit.

      • FlumPHP@programming.dev
        link
        fedilink
        English
        arrow-up
        22
        ·
        1 year ago

        In this case, Microsoft owns 49% of OpenAI, so they’re the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.

      • yiliu@informis.land
        link
        fedilink
        English
        arrow-up
        16
        ·
        1 year ago

        This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.

      • voluble@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        I don’t know anything about anything, but part of me suspects that lots of good funding is still out there, it’s just being used more quietly and more scrupulously, & not being thrown at the first microdosing tech wanker with a great elevator pitch on how they’re going to make “the Tesla of dental floss”.

    • nodimetotie@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      Speaking of Uber, I believe it turned a profit for the first time this year. That is, it had never made any profit since its creation, whenever that was.

      • ineedaunion @lemmy.world
        link
        fedilink
        English
        arrow-up
        16
        arrow-down
        4
        ·
        1 year ago

        All it’s ever done is rob from its employees so it can give money to stockholders. Just like every corporation.

  • Billy_Gnosis@lemmy.world
    link
    fedilink
    English
    arrow-up
    60
    arrow-down
    8
    ·
    1 year ago

    If AI was so great, it would find a solution to operate at fraction of the cost it does now

    • Death_Equity@lemmy.world
      link
      fedilink
      English
      arrow-up
      71
      arrow-down
      1
      ·
      1 year ago

      Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

        • FaceDeer@kbin.social
          link
          fedilink
          arrow-up
          3
          arrow-down
          1
          ·
          1 year ago

          OP might have been intending it as a joke, but self-improvement is a very real subject of AI research so if that’s the case he accidentally said something about a serious topic.

          • Buddahriffic@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            1 year ago

            It’s an essential part of the idea of the technological singularity. An AI iterates itself and the systems it runs on, becoming more efficient, powerful, and effective at a rate that makes all of human progress up to that point look like nothing.

            • MajorHavoc@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 year ago

              While I’m inclined to believe the singularity is achievable, it’s important to remember that there’s no evidence today that it will ever be reached.

              Our hope for it, and the good that can come with it, can’t pull it into the realm of things we will see in our lifetimes. It could emerge soon, but it’s at least as likely to stay science fiction for another millennium.

              • Buddahriffic@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                1 year ago

                Yeah, when ChatGPT 4 first came out, I thought we might be close. But as its capabilities and limitations became more clear, it doesn’t look like we’re close at all. I mean, it’s hard to say for sure, since an LLM would just make up one part of an AGI, and maybe the other pieces are farther along but just not getting as much attention, because there’s value in not making those things public.

                But as someone who works in one of the fields that would be involved in the technological singularity: no one really knows good ways to apply AI to the work we do, and the best initiatives I’ve seen come out of the corporate drive to leverage AI aren’t actually AI, but just smarter automation tools.

      • vrighter@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        9
        ·
        1 year ago

        If we don’t know, it doesn’t know.

        If we know, but there’s no public text about it, it doesn’t know either.

        It is trained off of stuff that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.

        • FaceDeer@kbin.social
          link
          fedilink
          arrow-up
          15
          arrow-down
          1
          ·
          1 year ago

          That’s not true. These models aren’t just regurgitating text that they were trained on. They learn the patterns and concepts in that text, and they’re able to use those to infer things that weren’t explicitly present in the training data.

          I read recently about some researchers who were experimenting with ChatGPT’s ability to do basic arithmetic. It’s not great at it, but it’s definitely figured out some techniques that allow it to answer math problems that were not in its training set. It gets them wrong sometimes, but it’s like a human doing math in its head rather than a calculator using rigorous algorithms so that’s to be expected.

          • vrighter@discuss.tchncs.de
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            1
            ·
            1 year ago

            They learn statistical correlations between words: given the last 5000 (or however large the context is) words, and absolutely no other information besides that, what is the most likely word to appear next? It’s a glorified order-5000 Markov chain.

            The reason it can “do” some math is that there are tons of examples in the training set using the small numbers usually used as examples. It can do basic arithmetic because it has seen “2+2=4” and other examples with simple numbers like that. The studies test basic arithmetic, the same things it had millions of pre-worked examples of. And it still gets those wrong, with astonishing frequency. Those studies aren’t talking about asking it “what is the square root of pi” or stuff like that, but stuff such as “is 7 greater than 4?”, “what is 10 + 3?”, “is 97 prime?”, stuff it has most definitely seen the answers to. Ask it about some large prime, and it’ll say no, and probably be right, because most numbers are composite.
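
            To make the “order-5000 Markov chain” point concrete, here’s a toy order-k Markov predictor in Python. A deliberate oversimplification (real LLMs learn continuous representations, not literal lookup tables), but the framing is the same: predict the next word from the last k words, using nothing else:

            ```python
            import random
            from collections import defaultdict

            def train(text, k=2):
                """Count which word follows each k-word context in the training text."""
                words = text.split()
                table = defaultdict(list)
                for i in range(len(words) - k):
                    table[tuple(words[i:i + k])].append(words[i + k])
                return table

            def generate(table, seed, n=5):
                """Extend the seed by repeatedly sampling a seen next-word for the context."""
                out = list(seed)
                for _ in range(n):
                    candidates = table.get(tuple(out[-len(seed):]))
                    if not candidates:
                        break  # context never seen in training: the model is stuck
                    out.append(random.choice(candidates))
                return " ".join(out)

            corpus = "2 + 2 = 4 and 3 + 3 = 6 and 2 + 3 = 5"
            table = train(corpus, k=2)
            print(generate(table, ("2", "+")))  # can only echo patterns it has seen
            ```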

            • FaceDeer@kbin.social
              link
              fedilink
              arrow-up
              3
              ·
              edit-2
              1 year ago

              Those studies aren’t talking about asking it “what is the square root of pi” or stuff like that, but stuff such as “is 7 greater than 4?”, “what is 10 + 3?”, “is 97 prime?”, stuff it has most definitely seen the answers to.

              No, they very explicitly checked to see whether the training set contains the literal math problem that they asked it for the answer to. ChatGPT is able to answer math questions that it has never seen before. I believe this is the article (though I had to go searching, it’s been a while).

              When people dismiss LLMs as “just prediction engines” they’re really missing the point. Of course they’re prediction engines, that’s not in dispute. The question is about how they go about making those predictions. When I show you the string “18 + 10 =” you can predict what comes next, yes? Well, how did you predict it? Did you memorize that particular specific string, or have you developed heuristics for how to do simple addition problems when you see them?

              • MajorHavoc@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                edit-2
                1 year ago

                These things are currently infamously bad at math, though.

                I won’t argue that it’ll never get there. I’m confident it will, though with a lot more Perl hacks than elegant emergence.

                But today, these things have an astonishingly high ‘appearance of intelligence’ to ‘incredible stupidity’ ratio.

                • FaceDeer@kbin.social
                  link
                  fedilink
                  arrow-up
                  3
                  ·
                  1 year ago

                  Humans are also not particularly well known for their math skills. Ask a random stranger to do simple arithmetic in their head, with only a few seconds to think and no outside help, and I wouldn’t expect particularly reliable results.

            • MajorHavoc@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              edit-2
              1 year ago

              It gives me the giggles that folks speculating are getting more upvotes than your post that demonstrates actual understanding of the implementation details.

              If I were the type to sell sizzle hype and snake oil, now would be the time to do it. The venture capitalists may have learned their lesson, but the general public haven’t.

      • discodoubloon@kbin.social
        link
        fedilink
        arrow-up
        10
        arrow-down
        4
        ·
        1 year ago

        It doesn’t “know” anything. It can’t solve that problem. It’s trained on humans so it’s limited to what we have written down.

        I love ChatGPT but if it’s creative it’s because you asked it the right questions and found an oblique answer yourself.

      • Buffaloaf@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        I asked Bard how ChatGPT can fix itself, here’s what it said:

        ChatGPT can fix itself in a number of ways, including:

        Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.

        Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.

        Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.

        Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.

        Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT’s performance.

        • lagomorphlecture@lemm.ee
          link
          fedilink
          English
          arrow-up
          9
          ·
          1 year ago

          “Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”

          So basically create its own Fox News and see how that goes.

          • FaceDeer@kbin.social
            link
            fedilink
            arrow-up
            3
            ·
            1 year ago

            The full suggestion includes “This would allow ChatGPT to learn from its own mistakes”, which implies that the text it generated would be evaluated and curated before being sent back into it for training. That, as well as including non-AI-generated text along with the AI generated stuff, should stop model collapse.

            Model collapse is basically inbreeding, with similar causes and similar solutions. A little inbreeding is not inherently bad, indeed it’s used frequently when you’re trying to breed an organism to have specific desirable characteristics.
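
            A sketch of what such a curated loop might look like; every function here is a hypothetical placeholder, not a real training API:

            ```python
            # Hypothetical sketch of a curated feedback loop meant to avoid model
            # collapse: filter generated samples and mix them with human-written
            # data before retraining. None of these functions is a real API.

            def generate_samples(model, n):
                ...  # sample n texts from the current model

            def quality_filter(samples):
                ...  # keep only samples passing human review / automated checks

            def retrain(model, dataset):
                ...  # fine-tune the model on the curated dataset

            def feedback_iteration(model, human_corpus, n=1000, synthetic_ratio=0.2):
                """One curated round: mostly human data, a little filtered
                synthetic data, rather than raw model output fed straight back."""
                synthetic = quality_filter(generate_samples(model, n))
                budget = int(len(human_corpus) * synthetic_ratio)
                return retrain(model, human_corpus + synthetic[:budget])
            ```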

        • FaceDeer@kbin.social
          link
          fedilink
          arrow-up
          4
          ·
          1 year ago

          If having an AI tell researchers that they should base its next iteration off of Megatron isn’t the plot of a Michael Bay Transformers movie already, it should have been.

    • Zeth0s@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      1 year ago

      DeepMind is actually working on an AI that improves the performance of low-level programs. It started with improving sorting algorithms.

      It’s an RL algorithm.

      The main issue is that everything takes time, and expectations on current AI are artificially inflated.

      It will reach the point most are discussing now; it’ll simply take a bit longer than people expect.

      Source: https://www.nature.com/articles/d41586-023-01883-4

    • pachrist@lemmy.world
      link
      fedilink
      English
      arrow-up
      29
      ·
      1 year ago

      ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they’ll absolutely keep throwing cash at it.

      • XTornado@lemmy.ml
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        edit-2
        1 year ago

        They seem to be killing Cortana… so I expect a new assistant at least partially based on this, tbh.

    • Zeth0s@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      3
      ·
      edit-2
      1 year ago

      It is clearly nonsense. But it satisfies the irrational need of the masses to hate on AI.

      Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.

      • FaceDeer@kbin.social
        link
        fedilink
        arrow-up
        6
        arrow-down
        1
        ·
        1 year ago

        It seems to be a common thing. I gave up on /r/futurology and /r/technology over on Reddit long ago because it was filled with an endless stream of links to cool new things with comment sections filled with nothing but negativity about those cool new things. Even /r/singularity is drifting that way. And so it is here on the Fediverse too, the various “technology” communities are attracting a similar userbase.

        Sure, not everything pans out. But that’s no excuse for making all of these communities into reflections of /r/nothingeverhappens. Technology does change, sometimes in revolutionary ways. It’d be nice if there was a community that was more upbeat about that.

      • MajorHavoc@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        I probably sound like I hate it, but I’m just giving my annual “this new tech isn’t the miracle it’s being sold as” warning, before I go back to charging folks good money to clean up the mess they made going “all in” on the last one.

      • BetaDoggo_@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        People are scared because it will make consolidation of power much easier, and make many of the comfier jobs irrelevant. You can’t strike for better wages when your employer is already trying to get rid of you.

        The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.

        Hunger shouldn’t be a problem in a world where we produce more food with less labor than anytime in history, but it still is, because everything must have a monetary value, and not everyone can pay enough to be worth feeding.

        • Zeth0s@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          I agree with this. People should fight to democratize AI, public model, public data, public fair research. And should fight misuse of it from business schools’ type of guys.

  • TimeMuncher@lemmy.world
    link
    fedilink
    English
    arrow-up
    41
    ·
    1 year ago

    Indian newspapers publish anything without any sort of verification, from Reddit videos to WhatsApp forwards. More than news, they are like an old Chinese whispers game run infinitely. So take this with a huge grain of salt.

  • balance_sheet@lemmy.world
    link
    fedilink
    English
    arrow-up
    38
    ·
    1 year ago

    Wow, I am so worried about a company that is funded by Microsoft going bankrupt!

    They don’t “go bankrupt”. Even if it happens, it’s more a matter of being let go bankrupt than going bankrupt.

    • sfgifz@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      edit-2
      1 year ago

      Company go bankrupt, biggest investors take assets and IP at discount. Win.

  • figaro@lemdro.id
    link
    fedilink
    English
    arrow-up
    36
    arrow-down
    1
    ·
    1 year ago

    Pretty sure Microsoft will be happy to come save the day and just buy out the company.

    • BetaDoggo_@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      1 year ago

      No sources, and even given their numbers, they could continue running ChatGPT for another 30 years. I doubt they’re anywhere near a net profit, but they’re far from bankruptcy.

    • subversive_dev@lemmy.ml
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 year ago

      Right!? I believe it has the hallmark repetitive blandness indicating AI wrote it (because ouroboros).

    • pexavc@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 year ago

      The flow of the writing style felt kinda off, like someone was speaking really fast, spewing random trivia, and then leaving.

      • NuanceDemon@lemmy.world
        link
        fedilink
        English
        arrow-up
        21
        ·
        1 year ago

        It works if you ask it for small, specific components; the bigger the scope of the request, the less likely it is to give you anything worthwhile.

        So basically you still need to know what you’re doing and how to design a script/program anyway, and you’re just using chatgpt to figure out the syntax.

        It’s a bit of a time-saver at times, but it’s not replacing anyone in the immediate future.

      • SocialMediaRefugee@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        4
        ·
        edit-2
        1 year ago

        I’ve tried using it myself and the responses I get, no matter how I phrase them, are too vague in most places to be useful. I have yet to get anything better than what I’ve found in documentation.

        • sfgifz@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          1
          ·
          edit-2
          1 year ago

          My experience is different: the response I get is not perfect, but it’s good enough as a starting point for any decent dev to refactor and build upon with less effort than starting from scratch. Maybe it depends on what language or framework you’re asking for.

        • Tony Bark@pawb.social
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          1 year ago

          I have problems with it repeating certain words over and over again no matter how much I adjust the style and tone.

  • LemmyLefty@lemmy.world
    link
    fedilink
    arrow-up
    31
    arrow-down
    4
    ·
    1 year ago

    Does it feel like these “game changing” techs have lifespans that keep shrinking? Like, the dot-com bubble lasted a decade or so, the NFT craze a few years, and now AI hasn’t even made it a year.

    The Internet is concentrating and getting worse because of it, inundated with ads and bots and bots who make ads and ads for bots, and being existentially threatened by Google’s DRM scheme. NFTs have become a joke, and the vast majority of crypto is not far behind. How long can we play with this new toy? Its lead paint is already peeling.

  • li10@feddit.uk
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    8
    ·
    1 year ago

    I don’t understand Lemmy’s hate boner over AI.

    Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.

    Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…

    • Zeth0s@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      9
      ·
      edit-2
      1 year ago

      AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It’s genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.

      No idea why such hate…

      One can hate the Disney CEO for misusing AI, but why shit on AI itself?

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        3
        ·
        1 year ago

        It’s shit on because it is not actually AI as the general public tends to use the term. This isn’t Data from Star Trek, or anything even approaching Asimov’s three laws.

        The immediate defense against this statement is people going into mental gymnastics and hand-waving about “well, we don’t have a formal definition for intelligence, so you can’t say they aren’t”, which is just… nonsense rhetorically, because the inverse holds too: you can’t label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.

        Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this “AI haters” group, like belief in AI is some sort of black and white religion or requires some sort of ideological purity. Like having honest conversations about these systems’ problems intrinsically means you want them to fail. That’s BS.


        Machine Learning and Large Language Models are amazing, they’re game changing, but they aren’t magical panaceas and they aren’t even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.

        Anyone trying to say different isn’t familiar with the field or is trying to sell you something. It’s the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.

        The frustrating part is that people caught up in the hype train of AI will say the same thing: “You just don’t understand!” But then they’ll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.


        At least in my opinion that’s where the negativity comes from.

        • Aceticon@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          2
          ·
          edit-2
          1 year ago

          Personally, having been in tech for almost 3 decades, I am massively skeptical when the usual suspects put out yet another incredible claim backed up by overly positive, one-sided evaluations of something they own, and worse, in an area I actually have quite a lot of knowledge in and can see through a lot of the bullshit. And then it gets picked up by mindless fanboys who don’t have the expertise to understand jack-shit of what they’re parroting, and by greedy fuckers using salesspeak because they stand to personally gain if enough useful idiots jump onto the hype train.

          You don’t even need to be old enough to remember that “revolution in human transportation” was how the Segway was announced: all it takes is to look at the claims about Bitcoin and the blockchain and remember the fraud-ridden shitshow the whole area became.

          As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any “suckers” that jump into that hype train.

          It’s not even negativity (except towards the greedy fuckers trying to take advantage of others and who can Burn In Hell), it’s informed (both historically and by domain knowledge) skepticism.

          • SirGolan@lemmy.sdf.org
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 year ago

            As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any “suckers” that jump into that hype train.

            I’ve been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn’t think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I’m convinced they’re part of but not the whole solution to AGI. As they are now, yes, they are world changing. They’re capable of improving productivity in a wide range of industries. That seems pretty world changing to me. There are already products out there proving this (GitHub Copilot, jasper, even ChatGPT). You’re welcome to downplay it and be skeptical, but I’d highly recommend giving it an honest try. If you’re right then you’ll have more to back up your opinion, and if you’re wrong, you’ll have learned to use the tech and won’t be left behind.

            • Aceticon@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              edit-2
              1 year ago

              In my experience they’re a great tool for wrapping and unwrapping knowledge in and from language envelopes with different characteristics, and I wouldn’t at all be surprised if they replace certain jobs which deal mostly with communicating with people (for example, I suspect the kind of news reporting done by news agencies doesn’t really need human writers to compose articles, just data in bullet-point format and an LLM to turn it into a “story”).

              What LLMs are not is AGI, and using them as knowledge engines or even just knowledge sources is a recipe for frustration, as you end up either going down the wrong route by believing the AI or spending more time validating the AI output than it would take to find the knowledge yourself from reliable sources.

              Whilst I’ve been on and off on the whole “might they be the starting point from which AGI comes” (which is really down to the question “what is intelligence”), what I am certain of is that nobody who is truly knowledgeable about it can honestly and assuredly state that “they are the seed from which AGI will come”, and that kind of crap (or worse, people just stating LLMs already are intelligent) is almost all of the hype we get about AI at the moment.

              At the moment, and judging by the developments we are seeing, I’m more inclined to think that at least the reasoning part of intelligence won’t be solved by this path, though the intuition part of it might, as that stuff is mainly about pattern recognition.

              • SirGolan@lemmy.sdf.org
                link
                fedilink
                English
                arrow-up
                4
                ·
                edit-2
                1 year ago

                Yeah, I generally agree there. And you’re right. Nobody knows if they’ll really be the starting point for AGI because nobody knows how to make AGI.

                In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double check certain things to make sure it didn’t make them up, but on the whole, GPT4 is right a large percentage of the times. Just yesterday I’d been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT4 pointed me to the exact document down to the right subsection on the first try. If you try that with GPT3.5 or really anything else out there, there’s a much higher rate of failure, and I suspect a lot of people who use the “it gets stuff wrong” argument probably haven’t spent much time with GPT4. Not saying it’s perfect-- it still confidently says incorrect things and will even double down if you press it, but 4 is really impressive.

                Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn’t understand how they work.

                • Aceticon@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  edit-2
                  1 year ago

                  That’s a good point.

                  I’ve been thinking about the possibility of LLMs revolutionizing search (basically search engines), which are not authoritative sources of information (far from it) but get you much faster to the sources that are.

                  LLMs hold most of the same information search engines do, plus the whole extra level of being able to use natural language to query it in a more natural way, and, due to their massive training sets, even if one’s question is slightly incorrect, the nearest cluster of textual tokens in the token space (an oversimplified description of how LLMs work, I know) to said incorrect question might very well be where the correct questions and answers are, so you get the correct answer (and funnily enough, the more naturally one poses the question, the better).
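
                  For illustration, here’s roughly what that looks like as embedding-based search, assuming the sentence-transformers package and its all-MiniLM-L6-v2 model (the documents are invented for the example):

                  ```python
                  from sentence_transformers import SentenceTransformer, util

                  # Map documents and a sloppily phrased question into one vector space.
                  model = SentenceTransformer("all-MiniLM-L6-v2")
                  docs = [
                      "Airlines must refund passengers for cancelled flights.",
                      "Sorting algorithms can be improved with reinforcement learning.",
                      "Model collapse occurs when models train on their own output.",
                  ]
                  doc_vecs = model.encode(docs)

                  # Even an imprecise query lands nearest the right document.
                  query_vec = model.encode("do i get my money back if the plane doesn't fly")
                  scores = util.cos_sim(query_vec, doc_vecs)[0]
                  print(docs[int(scores.argmax())])
                  ```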

                  However, as a direct provider of answers, certainly in a professional setting, it quickly becomes something that produces more work than it saves, because you always have to check the answers, since there are no cues about how certain or uncertain a result is.

                  I suspect many if not most of us have also had human colleagues who were just like that: delivering even the most “this is a wild guess” answer to somebody’s question as an assured “this is the way things are”, and I suspect that most of those who had such colleagues quickly learned not to go to them for answers, and to always double-check the answer when they did.

                  This is why I doubt it will do things like revolutionize programming, or in fact replace humans in producing output in hard-knowledge domains that operate mainly on logic, though it might very well replace humans whose work is to wrap things up in the appropriate language for the target audience (I suspect it’s going to revolutionize the production of highly segmented and even individually targeted propaganda on social networks).

      • HellAwaits@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        What I don’t understand is why so many people conflate “hating disney CEO for misusing AI” with “hating AI”. Maybe if people understood the differences, they would “understand the hate”

      • Aceticon@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        7
        ·
        edit-2
        1 year ago

        Ah, yes.

        Remind me again how that “revolution of human mobility”, the Segway, is doing now…

        Or how wonderful every single one of the announcements of breakthroughs in fusion generation has turned out to be…

        Or how the safest Operating System ever, Windows 7, turned out in terms of security…

        Or how Bitcoin has revolutionized how people pay each other for stuff…

        Some of us have seen lots of hype trains go by over the years, always with the same format and almost all of them originating from exactly the same subset of people as the AI one, and we recognize the salesspeak from greedy fuckers designed to excite ignorant, naive fanboys of such bullshit choo-choo trains when they pull into the station.

        Rational people who are not driven by “personal profit maximization on the backs of suckers” will not use salesspeak and refer to anything brand new as “the most incredible creation of humanity” (it’s way too early to tell) or deem any and all criticism of it as “shitting on it”.

        • FaceDeer@kbin.social
          link
          fedilink
          arrow-up
          4
          arrow-down
          3
          ·
          1 year ago

          “Completely unrelated thing X didn’t live up to its hype, therefore thing Y must also suck” is not particularly sound logic for shitting on something.

          • Aceticon@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            2
            ·
            1 year ago

            Funny how, from all the elements where it resonates with historical events (“people promoting it”, “bleeding edge tech”, “style of messaging”, “extraordinary claims without extraordinary proof” and more), you ended up drawing the kind of simplistic conclusion that a young child might make.

            • SirGolan@lemmy.sdf.org
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              extraordinary claims without extraordinary proof

              What are you looking for here? Do you want it to be self-aware, and anything less than that is hot garbage? The latest advances in AI have many uses. Sure, Bitcoin was overhyped and so is AI, but Bitcoin was always a solution with no problem, while AI (as in AGI) offers literally a solution to all problems (or maybe the end of humans, but hopefully not, hah). The current tech is already widely useful. With GPT4 and GitHub Copilot, I can write good working code at multiple times my normal speed. It’s not going to replace me as an engineer yet, but it can enhance my productivity by a huge amount. I’ve heard similar from many others in different jobs.

        • Zeth0s@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          edit-2
          1 year ago

          AI, even in its current state, is one of the most incredible creations of humanity.

          If there were a Nobel prize for math and computer science, the whole field would deserve one next year. It would probably go to a number of different people who contributed to the current methodologies.

          You cannot compare NFTs to AI. Open Nature or Science (the scientific publications) now and you’d see how big the impact of AI is.

          You can start your research here: https://www.deepmind.com/research/highlighted-research/alphafold . More Nobel prize material.

          • Aceticon@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            edit-2
            1 year ago

            I actually have some domain expertise, so excuse me if I don’t just eat up that overexcited ignorant fanboy pap and pamphlet from one of the very companies trying to profit from such things.

            GAI (General Artificial Intelligence, i.e. a “thinking machine”) would indeed be that “incredible creation of humanity”, but that’s not this shit. This shit is a pattern matching and pattern reassembly engine: a technologically evolved parrot capable of producing outputs that mimic what was present in its training sets to such a level that it even parrots associations that were present in those training sets (i.e. certain questions get certain answers, only the LLM doesn’t even understand them as “questions” and “answers”, just as textual combinations).

            Insufficiently intelligent people with no training in the hard sciences often confuse such perfect parroting of that which intelligent beings previously produced with actually having intelligence, which is half hilarious and half sad.

            Edit: that was actually unfair, so let me put things better: some reactions to the hype on this AI remind me of how my grandmother, an illiterate old lady from the countryside who had been very poor most of her life, used to get very confused when she saw the same actor in multiple soap operas. The whole concept of actors and acting was beyond her life experience, so when I was a kid and she had moved to live with us in the “big city”, she took what she saw on TV at face value. I suspect a lot of people who have no previous understanding of the domain are going down the same route of reasoning on AI as my nana did on soap operas, and end up confusing the LLM’s impeccable imitation of human language use with there actually being a human-like intelligence behind it, just like my nana confused good actors’ “living truthfully in imaginary circumstances” with the real living it imitated.

            • Zeth0s@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              edit-2
              1 year ago

              As you have domain expertise, you will agree with us that, despite not being AGI, deep learning, reinforcement learning, and generative AI as they exist now are an incredible creation of humanity that, among other things, is already capable of:

              1. solving long-standing scientific challenges such as protein folding,
              2. making independent decisions and developing strategies that, on specific tasks, surpass human experts,
              3. mapping human languages and artistic creations into high-dimensional vector spaces where concepts and relationships are retained as properties of the space, allowing math and statistical inference to be performed on them and original images and text to be generated (a manageable mathematical representation that, a few decades ago, not many would have guessed could even exist - see the sketch below).
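
              A minimal sketch of what point 3 means in practice, using tiny hand-made vectors as stand-ins for real learned embeddings (real ones, like word2vec’s, have hundreds of fitted dimensions, and the famous king - man + woman ≈ queen result falls out of training rather than being hand-built as here):

              ```python
              import numpy as np

              # Toy stand-ins for learned word embeddings (real ones are
              # high-dimensional and fitted from data, not hand-written).
              emb = {
                  "king":  np.array([0.9, 0.8, 0.1]),
                  "queen": np.array([0.9, 0.1, 0.8]),
                  "man":   np.array([0.1, 0.9, 0.1]),
                  "woman": np.array([0.1, 0.1, 0.9]),
              }

              def cosine(a, b):
                  return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

              # Relationships are directions: king - man + woman lands near queen.
              target = emb["king"] - emb["man"] + emb["woman"]
              best = max(emb, key=lambda w: cosine(emb[w], target))
              print(best)  # -> queen (with these toy vectors)
              ```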

              On top of this we take for granted all the already-existing applications, such as image recognition, translation, text classification…

              You would also agree with us that the potential of current AI methodologies in all fields of science and technology is already enormous, as demonstrated by AlphaFold, for instance. We just need a few more years to see even more groundbreaking applications of the existing methodologies, while we wait for even more powerful techniques or, why stop dreaming, AGI in a few decades.

              • Aceticon@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                2
                ·
                edit-2
                1 year ago

                What it’s doing is just a natural extension of what was done with basic neural networks back in the 90s, when they started being used for recognition of handwritten postal codes on mail envelopes.

                This is why I disagree that this specific moment in the development of AI is “an incredible creation of humanity”. Maybe the domain as a whole will turn out to be as groundbreaking as computers, but the idea that what’s being done right now already qualifies is ignorant, premature, or both.

                As for the rest, I actually studied Physics at degree level, and with it complex mathematics, and your point #3 is absolute total bollocks.

                • Zeth0s@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  edit-2
                  1 year ago

                  I was actually taking the time to share with you some very basic resources for you to learn something about basic stuff such as latent spaces, embeddings, attention mechanisms and Markov decision processes, but your attitude really made me change my mind.

                  It’s fine that you clearly don’t have the domain knowledge you claim, but your rudeness is really annoying. Enjoy your life with your achievement of complex math at degree level, and learn how to speak to people.

                  BTW, neural networks, even if a few decades old, are an incredible achievement of humanity. Even knowing how to roughly simulate a biological neural network requires understanding of the brain, of non-linear math, and the existence of computers - and each of those is an astonishing achievement of humanity.

    • Not A Bird@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      1 year ago

      Lemmy - and Mastodon to a larger extent - hates anything owned by a corporation. That voice is getting louder and louder by the day.

    • deadcream@kbin.social
      link
      fedilink
      arrow-up
      6
      arrow-down
      15
      ·
      edit-2
      1 year ago

      It’s just a projection of the hate for techbros (especially celebrities like Musk). Everything that techbros love (crypto, AI, space, etc.) is hated automatically.
      I.e. they don’t really hate AI. You can’t hate something if you have zero understanding of what that something is. It’s just an expression of hate for someone who promotes that something.

      • chaogomu@kbin.social
        link
        fedilink
        arrow-up
        12
        arrow-down
        6
        ·
        1 year ago

        AI is not good. I want it to be good, but it’s not.

        I’ll clarify: it’s basically full of nonsense. Half of the shit it spits out is nonsense, and the rest is questionable. Even so, it’s already being used to put people out of their jobs.

        Techbros think AI will run rampant and kill all humans, when they’re the ones killing people by replacing them with shitty AI. And the worst part is that it isn’t even good at the jobs it’s being used for. It makes shit up, it plagiarizes, it spits out nonsense. And a disturbing amount of the internet is starting to become AI generated. Which is also a problem. See, AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. Which will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.

        • Aceticon@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          1 year ago

          It’s a tool which can be used to great effect in the right setting, for example to wrap cold, summarily-stated knowledge into formats with much broader appeal, and to reverse the process.

          However, it’s being sold by greedy fuckers who stand to gain from people jumping onto the hype train as something else altogether: a shortcut to knowledge and to the output of those who have it, because there’s a lot more money to be made from that than from something which can “write an article from a set of bullet points”.

          For me the most infuriating aspect is that this is hardly the 1st such hype train to “FleeceTheSuckersTown” coming out of “TechBrosCity” that we’ve seen in the last 2 decades - not even the 2nd or the 3rd. There have been a lot of these, always following the same formula, to the point that the “great men” of the age in Tech (such as Musk) are, unlike the ones in the first Tech boom (that ended in 2000), people who repeatedly made themselves rich with this kind of thing by fleecing suckers, not makers.

        • _danny@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          It’s definitely gone downhill recently, but at the launch of GPT-4 it was pretty incredible. It would make several logical jumps that a lot of actual people probably wouldn’t make. I remember my “wow moment” was asking how many M&M’s would fit in a typical glass milk jug, and then I measured it myself (by weight) and got an answer about 8% off. It gave measurements and cited actual equations. I couldn’t find anything through Google that solved the same problem or had the same answer it could have just copied. It was supposed to be bad at math, but GPT-4 got those types of problems pretty much spot on for me.
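
          For anyone curious, the jug question is a classic Fermi estimate. Here’s a back-of-envelope sketch with assumed (not measured) numbers - a half-gallon jug, ~0.64 cm³ per M&M, and ~0.68 random packing fraction for ellipsoid-ish candies:

          ```python
          # Back-of-envelope Fermi estimate: M&M's in a glass milk jug.
          # All three inputs are assumptions, not measurements.
          jug_volume_cm3 = 1890      # half-gallon jug is roughly 1.89 L
          mm_volume_cm3 = 0.64       # one plain M&M is roughly 0.64 cm^3
          packing_fraction = 0.68    # random packing of ellipsoids, roughly

          count = jug_volume_cm3 * packing_fraction / mm_volume_cm3
          print(round(count))  # about 2000 with these inputs
          ```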

          I think most people who have tried the latest AI models have had a bad experience because their compute is spread across more users.

          • chaogomu@kbin.social
            link
            fedilink
            arrow-up
            4
            arrow-down
            1
            ·
            1 year ago

            There’s also the issue of model collapse: when an AI is trained on data generated by AI, the errors and hallucinations start to compound until all you have left is gibberish. We’re about halfway there.
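
            A toy illustration of the mechanism (multinomial resampling standing in for “training on your own output”; real model collapse is messier, but the way the tails of the distribution get lost first is the same flavor):

            ```python
            import random
            from collections import Counter

            random.seed(1)
            # Generation 0: "real" data with common and rare words.
            corpus = ["the"] * 20 + ["cat"] * 8 + ["quokka"] * 2

            for generation in range(1, 9):
                freq = Counter(corpus)
                words, weights = zip(*freq.items())
                # Each new generation is sampled only from the previous one.
                corpus = random.choices(words, weights=weights, k=30)
                print(f"gen {generation}: {dict(Counter(corpus))}")
            # Rare words random-walk toward zero; once one hits zero it can
            # never come back, so diversity is lost generation by generation.
            ```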

            • FaceDeer@kbin.social
              link
              fedilink
              arrow-up
              3
              ·
              1 year ago

              ChatGPT is trained on data with a cutoff in September 2021. It’s not training on AI-generated data.

              Even if some AI-generated data is included, as long as it’s reasonably curated and mixed with non-AI data, model collapse can be avoided.

              “Model collapse” is starting to feel like just a keyword for “this AI isn’t as good as I wanted.”

            • _danny@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              1
              ·
              1 year ago

              I feel like you’re under-educated on how and when AI models are trained. The GPT models especially are not “constantly learning” like some other models. They’re being tweaked in discrete increments by developers trying to cover their ass and get them to less frequently say things they can be sued for.

              Also, AIs are already training other AIs - that’s kinda how AIs are made… There’s an AI that scores how well a given phrase follows another phrase, and that score is used to train the part of the AI you interact with (arguably they’re part of the same whole, depending on how you view the architecture).
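
              A heavily simplified sketch of that critic-and-generator idea, with a toy scoring function standing in for the learned reward model (real RLHF feeds the scores back into gradient updates on the model rather than doing best-of-n selection like this):

              ```python
              def reward(prompt: str, reply: str) -> float:
                  # Toy stand-in for a learned reward model: prefers replies
                  # that share words with the prompt.
                  return len(set(prompt.split()) & set(reply.split()))

              prompt = "tell me about the cat"
              candidates = [
                  "bananas are yellow",
                  "the cat sat",
                  "the cat wants to tell me something",
              ]

              # Best-of-n selection: one model generates, the critic ranks.
              best = max(candidates, key=lambda r: reward(prompt, r))
              print(best)  # -> "the cat wants to tell me something"
              ```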

              CGP Grey has a good intro video on how bots learn. It’s pretty outdated and not really applicable to how LLMs learn, but the general idea is still there.

      • aesthelete@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        2
        ·
        1 year ago

        Not everyone who dislikes a thing, or the promoters of that thing, “has no idea what it is”… but sure, go off I guess. 🤷