These scammers, using Mr Beast's popularity, generosity, and (mostly) deep-fake AI to trick people into downloading malware, somehow do not go against Instagram's community guidelines.
After trying to submit a request to review these denied claims, it appears I have been shadow banned in some way, as only an error message pops up.
Instagram is allowing these to run on their platform. Intentional or not, this is ridiculous and Instagram should be held accountable for allowing malicious websites to advertise their scam on their platform.
For a platform of this scale, this is completely unacceptable. They are blatant, and I have no idea how Instagram's report bots/staff are missing these.
I’ve reported Nazis, violent threats, and literal child pornography on Instagram, only to be told it didn’t go against their guidelines.
I read between the lines: this is the content they support, so it’s not a platform for me.
Actual child porn? How do you mean?
As in child sexual abuse material. It’s pretty rampant on Instagram where they like to ‘hide’ under certain tags.
Porn with actual children.
Can you be more specific? Like AI generated 17 year olds, or real photos of some 3 year old kid in someone’s dungeon? There’s a big difference.
Both are children… so why does it matter? In the USA, anyone under 18 is classified as a minor/child, and it’s still illegal regardless of whether it’s generated or not.
No idea mate; not the OP.
Did you report the CP to the police?
No, usually I report it to NCMEC, which has better resources to deal with it. Cops very rarely care or are able to do anything.
I reported a pic of a Nazi flag with Hitler in front of it, with the caption: “Hitler did nothing wrong, f**k jews.”
Doesn’t go against community standards.
I made a video about the struggles of children who are sexually abused, with a link to donate to a charity that helps children. Instant shadowban and no longer monetized.
All of Meta’s moderation is done by bots, and they are terrible at moderation.
I had something like that happen.
I reported death threats against me from transphobic bigots that specifically cited my being trans as the reason they wanted to kill me. Reported it as hate speech and a threat of violence. “We’re sorry, this does not violate community guidelines.”
Later I made a self-deprecating joke about being white.
Three-month ban for “Racism and Bigotry.”
Facebook is a fucking joke, and not a funny one either.
Oh hey another site that looks at reports and just bans the reporter…
Instagram owned by the Reddit people?
If you stop using Instagram, then you won’t have to worry about it.
That doesn’t mean the issue disappears.
But it does mean that an unpaid moderator isn’t attempting to moderate their platform. Let them see what happens when they take the ex-Twitter approach of letting the computers handle everything.
I’d rather vulnerable or stupid people didn’t get scammed first.
What issue?
The issue of fake and scam ads on social media platforms.
So what they are saying is that they are willing to take liability, and thus be open to being sued over this, as they know of the scams but say they do not break community guidelines.
Got it
At this point I’m convinced Meta either gets paid under the table to keep that shit, or (probably more likely) they make so much money off the sheer volume of AI scam ads that they just don’t care.
Is there a difference?
Not really, but the first one would piss me off even more, since it would mean both sides plainly agreed and knew the payment was to keep the ads live.
Instagram is owned by Meta… Facebook.
Facebook had no problem helping pedophiles distribute child pornography on their platform, letting terrorists and Nazis organize events on their platform, or allowing deceptive political ads that swayed the votes of democratic nations.
Why would they give any fuck about fake Mr Beast ads?
Each time I see “Meta’s product didn’t remove reported malicious post,” I just think that this is valid punishment for users and their egos for wasting their time on these shitty platforms. 😅
I’ve been reporting ads on Instagram as spam for over a year now. Every single ad I see gets reported. Occasionally I get a report that says the ad has been deleted.
Translation: These ads make us lots of money.
Same with YouTube ads. Lots of scams, and reporting them always ends in my report getting denied…
Reviewed by an AI. Nice.
I doubt they’re missing them. They simply don’t care, and will continue to not care until something happens that makes the money generated by the ads not worth it.
Meta’s “guidelines” are basically: does this content somehow stop us from making money?
The answer is generally no. If people stopped using the platform because of its poor handling of these kinds of situations, I guess that would affect them. Maybe?
On Twitter I’ve reported:
- Pictures of dead babies/toddlers
- Pictures of murdered people
- Death threats towards public figures
- Illegal videos of terrorist acts
- Ads for illegal weapons (tasers)
- So so much crypto spam
Things found by Twitter to go against their community standards? 0
He fired more than 80% of the original workers. There is nobody to check the reports.