An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

  • ExclamatoryProdundity@lemmy.world · 1 year ago

    Look, I hate racism and inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it’s still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing; it’s going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random “Asian MIT student”. This kind of shit sets us back and makes us look reactionary.

    • AbouBenAdhem@lemmy.world · 1 year ago

      It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.

      Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most “professional” candidate for the job and it picked white-looking people.

      This is a real potential issue, not just “clickbait”.

      • HumbertTetere@feddit.de · 1 year ago

        If companies pick the most “professional” applicant by their photo, that is a reason for concern, but it has little to do with the image training data of AI.

      • JeffCraig@citizensgaming.com · 1 year ago

        Again, that’s not really the case.

        I have Asian friends who have used these tools and generated headshots that were fine. Just because this one Asian woman used a model that wasn’t trained for her demographic doesn’t make it a reflection of anything other than the fact that she doesn’t understand how ML models work.

        The worst thing that happened when my friends used it were results with too many fingers or multiple sets of teeth 🤣

    • notacat@lemmynsfw.com · 1 year ago

      You said yourself that you hate inherent bias, yet you attempt to justify the result by saying that, if used again, it’s just going to produce another white face.

      That’s the problem.

      It’s a racial bias baked into these AIs by their training data.

      • thepineapplejumped@lemm.ee · 1 year ago

        I doubt it is conscious racial bias; it’s most likely that the training data is made up of mostly white people and is labeled poorly.

        • notacat@lemmynsfw.com · 1 year ago

          I wouldn’t say it was conscious bias either. I don’t think it’s intentionally developed that way.

          The fact still remains, though, that whether conscious or unconscious, it’s potentially harmful to people of other races. Sure, it’s only an issue with image generation now, but what about when it’s used to identify criminals? Or to filter between potential job candidates?

          The possibilities are virtually endless, but if we don’t start pointing out and addressing any type of bias, it’s only going to get worse.

          • Altima NEO@lemmy.zip · 1 year ago

            I feel like you’re overestimating the capabilities of current AI image generation, and presenting problems that don’t exist.

  • starcat@lemmy.world · 1 year ago

    A racial-bias-propagating, click-baity article.

    Did anyone bother to fact-check this? I ran her exact photo and prompt through Playground AI and it pumped out a bad photo of an Indian woman. Are we supposed to play the racial bias card against Indian women now?

    This entire article can be summarized as “Playground AI isn’t very good, but that’s boring news so let’s dress it up as something else”

  • pacoboyd@lemm.ee · 1 year ago

    It also depends on what model was used, the prompt, the strength of the prompt, etc.

    No news here, just someone who doesn’t know how to use AI generation.

  • GenderNeutralBro@lemmy.sdf.org · 1 year ago

    This is not surprising if you follow the tech, but I think the signal boost from articles like this is important, because there are constantly new people just learning about how AI works, and it’s very, very important to understand the bias embedded in these models.

    It’s also worth actually learning how to use them. People expect them to be magic, it seems. They are not magic.

    If you’re going to try something like this, you should describe yourself as clearly as possible. Describe your eye color, hair color/length/style, age, expression, angle, and obviously race. Basically, describe any feature you want it to retain.

    I have not used the specific program mentioned in the article, but the ones I have used simply do not work the way she’s trying to use them. The phrase she used, “the girl from the original photo”, would have no meaning in Stable Diffusion, for example (which I’d bet Playground AI is based on, though they don’t specify). The img2img function makes a new image, with the original as a starting point. It does NOT analyze the content of the original or attempt to retain any features not included in the prompt. There’s no connection between the prompt and the input image, so “the girl from the original photo” is garbage input. Garbage in, garbage out.
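
    For illustration, here is roughly what that img2img flow looks like with Hugging Face’s diffusers library. This is only a sketch: Playground AI doesn’t document its stack, and the model name, prompt, and parameters below are assumptions, not its actual setup.

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load a Stable Diffusion 1.5 img2img pipeline (illustrative model choice).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The input image is only a visual starting point; it is never "analyzed".
    init_image = Image.open("original_photo.jpg").convert("RGB").resize((512, 512))

    result = pipe(
        # Spell out every feature you want retained; "the girl from the
        # original photo" would mean nothing to the text encoder.
        prompt="professional LinkedIn headshot of a young Asian woman, "
               "dark hair, brown eyes, business attire, studio lighting",
        image=init_image,
        strength=0.5,        # how far the output may drift from the input image
        guidance_scale=7.5,  # how strongly to follow the prompt
    ).images[0]

    result.save("headshot.png")
    ```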

    There are special-purpose programs designed for exactly the task of making photos look professional, which presumably go to the trouble to analyze the original, guess these things, and pass those through to the generator to retain the features. (I haven’t tried them, personally, so perhaps I’m giving them too much credit…)

  • gorogorochan@lemmy.world · 1 year ago

    Meanwhile every trained model on Civit.ai produces 12/10 Asian women…

    Joking aside, what you feed the model is what you get; the model is only as broad as its training data. Train it on white people and it’s going to create white people; train it on big titty anime girls and it’s not going to produce WWII images either.
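
    As a toy sketch of that point (the numbers are invented for illustration, not taken from any real dataset): a “generator” that simply samples its training distribution reproduces whatever skew the data carries.

    ```python
    import random

    # Invented toy distribution: 80% of the training images depict white people.
    training_data = ["white"] * 80 + ["asian"] * 5 + ["black"] * 10 + ["other"] * 5

    def generate_face():
        # Real diffusion models are vastly more complex, but their outputs
        # still follow the distribution of the data they were trained on.
        return random.choice(training_data)

    samples = [generate_face() for _ in range(1000)]
    print({group: samples.count(group) for group in sorted(set(samples))})
    # e.g. {'asian': 48, 'black': 101, 'other': 53, 'white': 798}
    ```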

    Then there’s a study cited that claims DALL-E has a bias toward producing images of CEOs or directors as cis white males. Think of the CEOs that you know; better yet, google them. It’s shit, but it’s the world we live in. I think the focus should be on not having so many privileged white people in the real world, not on telling the AI to discard the data.

  • Alwaysfallingupyup@lemmy.world · 1 year ago

    Or… and I’m just spitballing here. Don’t ask it to do something you knew probably wouldn’t give you something you’re happy with, and you won’t be insulted…

    • enkers@sh.itjust.works · 1 year ago

      and you won’t be insulted…

      I’m guessing you didn’t read the article? This was just someone playing with AI generation and sharing a result they found funny.

      “My initial reaction upon seeing the result was amusement,” Wang told Insider. “However, I’m glad to see that this has catalyzed a larger conversation around AI bias and who is or isn’t included in this new wave of technology.”

  • EmotionalMango22@lemmy.world · 1 year ago

    So? There are white people in the world. Ten bucks says she tuned it to make her look white for the clicks. I’ve seen this in person several times at my local college. People are desperate for attention, and shit like this is an easy in.

  • 21Cabbage@lemmynsfw.com · 1 year ago

    Honestly, news stories about dumb ideas not working out don’t really bother me much. Congrats, the plagiarism machine tried to make you look like you fit into a world that, to the surprise of nobody but idealists, still has a shitload of racial preferences.

    • Asafum@feddit.nl · 1 year ago

      Honestly it’s just not being used correctly. I actually believe this is just user error.

      These AI image creators rely on the base models they were trained with, and those models were more than likely fed wayyyyy more images of Caucasians than anyone else. You can add weights to what you would rather see in your prompts, so while I’m not experienced with the exact program she used, the basics should be the same.

      You usually have two sections: the main prompt (positive additions) and a secondary prompt for negatives, things you don’t want to see. An example prompt could be “perfect headshot for LinkedIn using supplied image, ((Asian:1.2))”, with a negative prompt of “((Caucasian)), blue eyes, blonde, bad eyes, bad face”, etc…

      If she didn’t have a secondary prompt for negatives, I could see this being a bit more difficult, but even then there are way better systems to use. If she didn’t like the results from the one she used, instead of jumping to “AI racism!” she could have looked up what other systems exist. Hell, with the model I use in Automatic1111, I have to put “Asian” in my negatives because it defaults to that so often.
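
      To make that concrete, here is a rough sketch of the positive/negative split using Hugging Face’s diffusers library (not the tool she used, and the model and prompts are illustrative; Automatic1111-style ((word:1.2)) weighting needs an add-on such as compel, so this shows only the plain negative prompt):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # Illustrative model choice; Playground AI's actual stack is undocumented.
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(
          # Positive prompt: what you DO want to see.
          prompt="professional LinkedIn headshot of a young Asian woman, "
                 "business attire, studio lighting",
          # Negative prompt: what you do NOT want to see.
          negative_prompt="blue eyes, blonde hair, bad eyes, bad face, deformed",
          guidance_scale=7.5,
      ).images[0]

      image.save("headshot.png")
      ```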

      Edit: figures I wrote all this, then scrolled down and noticed all the comments saying the same thing, lol. At least we’re on the same page.

  • camillaSinensis@reddthat.com · 1 year ago

    Disappointing but not surprising. The world is full of racial bias, and people don’t do a good job at all of addressing this in their training data. If bias is what you show the model, bias is exactly what it will learn.

  • funkajunk@lemm.ee · 1 year ago

    Sigh

    It’s not racial bias; it works from a limited dataset and from its own notion of what a “professional headshot” even is.

    Seems like some ragebait to me.

      • Grimfelion@lemm.ee · 1 year ago

        Nope… because I just tried it as a white male and got back a pure Asian man using the same prompts… and I’ll be damned if I’m not jealous/sad because the man it spit back out was way better looking than me…