He generally shows most of the signs of a misinformation account:
- Wants to repeatedly tell basically the same narrative and nothing else
- Narrative is fundamentally false
- Not interested in any kind of conversation, or in learning that what he’s posting runs counter to the values he claims to profess
I also suspect that it’s not a coincidence that this is happening just as the Elon Musks of the world are ramping up attacks on Wikipedia, especially because it is a force for truth in the world that’s less corruptible than a lot of the others, and tends to fight back legally if someone tries to interfere with the free speech or safety of its editors.
Anyway, YSK. I reported him as misinformation, but who knows if that will lead to any result.
Edit: Number of people real salty that I’m talking about this: Lots
Heuristics, data analysis, signal processing, ML models, etc.
It’s about identifying artificial behavior, not artificial text. We can’t reliably identify artificial text, but behavioral patterns are a higher bar for botters to get over.
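For a concrete sense of what a behavioral heuristic can look like, here’s a minimal sketch in Python. The `Post` shape, the thresholds, and the 50/50 weighting are all illustrative assumptions, not any real detection pipeline: it just scores timing regularity (bots often post at near-constant intervals) and content repetition (the same narrative over and over).

```python
# Minimal sketch of a behavioral heuristic for flagging bot-like accounts.
# All names (Post, score_account) and weights are hypothetical illustrations.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Post:
    timestamp: float  # Unix seconds
    text: str

def score_account(posts: list[Post]) -> float:
    """Return a rough 0..1 'bot-likeness' score from posting behavior alone."""
    if len(posts) < 5:
        return 0.0  # not enough signal to say anything

    # 1. Timing regularity: bots tend to post at near-constant intervals,
    #    so a low coefficient of variation in the gaps is suspicious.
    times = sorted(p.timestamp for p in posts)
    gaps = [b - a for a, b in zip(times, times[1:])]
    cv = pstdev(gaps) / mean(gaps) if mean(gaps) > 0 else 0.0
    timing_score = max(0.0, 1.0 - cv)

    # 2. Content repetition: the same text posted over and over.
    texts = [p.text.strip().lower() for p in posts]
    unique_ratio = len(set(texts)) / len(texts)
    repetition_score = 1.0 - unique_ratio

    # Naive equal weighting; a real system would combine many more signals.
    return 0.5 * timing_score + 0.5 * repetition_score
```

A real deployment would fold in many more signals (account age, reply graphs, activity bursts across accounts), but the point is that these are all behavioral features, not properties of the text itself.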
The community isn’t in a position to do anything about it; the platform itself is the only one in a position to gather the data needed to even start targeting the problem.
I can’t target the problem without first collecting the data and aggregating it. And Lemmy doesn’t do much to enable that currently.
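To make the “collecting and aggregating” step concrete, here’s a minimal sketch that assumes you already have some export of activity as (account, timestamp, text) records. The `ActivityRecord` shape is hypothetical and doesn’t correspond to any real Lemmy schema or endpoint.

```python
# Minimal sketch of the aggregation step: group a flat activity feed into
# per-account histories that a behavioral scorer (like the one above) can consume.
from collections import defaultdict
from typing import NamedTuple

class ActivityRecord(NamedTuple):
    account: str
    timestamp: float  # Unix seconds
    text: str

def aggregate_by_account(feed: list[ActivityRecord]) -> dict[str, list[ActivityRecord]]:
    """Group raw activity records by account so per-account behavior can be scored."""
    histories: dict[str, list[ActivityRecord]] = defaultdict(list)
    for record in feed:
        histories[record.account].append(record)
    # Sort each history chronologically so interval-based heuristics work.
    for posts in histories.values():
        posts.sort(key=lambda r: r.timestamp)
    return dict(histories)
```

Getting that feed in the first place is the hard part, which is exactly the data access the platform would need to provide.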