An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

  • ExclamatoryProdundity@lemmy.world · 6 points · 1 year ago

    Look, I hate racism and inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it’s still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing, and it’s going to make a similar image of another white chick. To get it to reliably recreate your facial features, the model needs to be trained on your face. It works for celebrities for this reason, not for a random “Asian MIT student.” This kind of shit sets us back and makes us look reactionary.

    • AbouBenAdhem@lemmy.world · 3 points · edited 1 year ago

      It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.

      Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most “professional” candidate for the job and it picked white-looking people.

      This is a real potential issue, not just “clickbait”.

      • HumbertTetere@feddit.de · 3 points · 1 year ago

        If companies pick the most “professional” applicant by their photo, that is a reason for concern, but it has little to do with the image training data of AI.

      • JeffCraig@citizensgaming.com · 2 points · edited 1 year ago

        Again, that’s not really the case.

        I have Asian friends who have used these tools and generated headshots that were fine. Just because this one Asian student used a model that wasn’t trained for her demographic doesn’t make it a reflection of anything other than the fact that she doesn’t understand how ML models work.

        The worst thing that happened when my friends used it were results with too many fingers or multiple sets of teeth 🤣

    • notacat@lemmynsfw.com · 1 point · 1 year ago

      You said yourself you hate inherent bias, yet you attempt to justify the result by saying that, used again, it would just produce another white face.

      That’s the problem.

      It’s a racial bias baked into these AIs by their training data.

      • thepineapplejumped@lemm.ee · 1 point · 1 year ago

        I doubt it is conscious racial bias; most likely the training data is made up of mostly white people and is labeled poorly.
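
        The skew described above can be shown with a toy sketch (hypothetical data and labels — this is not Playground AI's actual pipeline): a naive model fit on a demographically imbalanced dataset simply reproduces the majority label, whatever the input.

```python
from collections import Counter

def train_majority_model(labels):
    """Toy 'model': always predicts the most common label seen in training."""
    most_common = Counter(labels).most_common(1)[0][0]
    return lambda image: most_common

# Hypothetical skew: 90 of 100 training faces labeled "white"
training_labels = ["white"] * 90 + ["asian"] * 5 + ["black"] * 5
model = train_majority_model(training_labels)

print(model("photo_of_asian_student"))  # prints "white" regardless of input
```

        Real diffusion models are far more capable than this, but the underlying pull toward over-represented training examples is the same kind of effect.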

        • notacat@lemmynsfw.com · 1 point · 1 year ago

          I also wouldn’t say it was conscious bias either. I don’t think it’s intentionally developed in that way.

          The fact remains, though: whether conscious or unconscious, it’s potentially harmful to people of other races. Sure, it’s an issue with just image generation now. What about when it’s used to identify criminals? When it’s used to filter between potential job candidates?

          The possibilities are virtually endless, but if we don’t start pointing out and addressing any type of bias, it’s only going to get worse.

          • Altima NEO@lemmy.zip · 2 points · 1 year ago

            I feel like you’re overestimating the capabilities of current AI image generation, and presenting problems that don’t exist.