

I read comments and posts from Chinese posters who go through VPNs regularly.
░░░░░███████ ]▄▄▄▄▄▄▄▄ 💣💣💣💣
☻/ ▂▄▅█████████▅▄▃▂
/▌ Il███████████████████].
/ \ ◥⊙▲⊙▲⊙▲⊙▲⊙▲⊙▲⊙◤..
Sounds like a bad way to do things. Won’t you end up with gore on your feed? Not to mention it would be hard to filter out the really dehumanizing, gross porn (which, frankly, describes most of it anyway).
are you my TA that I mentioned in the other comment
Throwback to when someone shared the OG version of this meme in my uni chat. I replied with "Oh you can simply do
def is_even(n: int) -> bool:
    if n > 0:
        return not is_even(n - 1)
    elif n < 0:
        return not is_even(n + 1)
    else:
        return True
" and instead of laughing at the joke, the TA in the chat said “When you start getting internships you’ll do n % 2” like I was being serious.
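For reference, here is a runnable sketch of the joke next to the one-liner the TA was (seriously) suggesting; `is_even_recursive` is the chat version with the syntax fixed:

```python
def is_even_recursive(n: int) -> bool:
    # The joke: walk n toward zero one step at a time,
    # flipping parity at every step. O(|n|) calls.
    if n > 0:
        return not is_even_recursive(n - 1)
    elif n < 0:
        return not is_even_recursive(n + 1)
    else:
        return True

def is_even(n: int) -> bool:
    # The TA's version: constant time via the remainder operator.
    # Note Python's % keeps this correct for negative n too.
    return n % 2 == 0

# Both agree; the recursive one just takes much longer to say so.
assert all(is_even_recursive(n) == is_even(n) for n in range(-20, 21))
```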
people IRL: hey man how’s it going
You should be able to contact your admins so they purge your account. Purged accounts have all the content they ever published scrubbed from every database on Lemmy. I’m not entirely sure how it works if an instance defederates from your home instance after you post something and it has already been federated to them; I assume it wouldn’t be deleted in that case, so it would still be available online there, though a lot harder to track down.
Why is the Stasi, a state organ responsible for countering all the fascist groups the US was still backing, comparable to the state organs of Nazi Germany, which the communists were intent on destroying?
Agreed. I don’t expect it to break absolutely everything, but I do expect software development to get very hairy when you have to work with whatever bloated mess AI is creating.
It won’t be long (maybe three years max) before industry adopts some technique for automatically prompting an LLM to generate code that fulfills a given requirement, then iteratively improving it against test data until it passes all test cases. And I’m pretty sure there are already ways to get LLMs to generate the test cases too. So this could go nightmarishly wrong very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to “spam” everywhere, so to speak. These things are way dumber than we give them credit for.
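That generate-and-test loop can be sketched in a few lines. Everything here is a stand-in: `candidate_source` plays the role of the LLM call (in reality each item would be code generated from the prompt plus feedback about failing tests), and the names are made up for illustration:

```python
from typing import Callable, Iterator, List, Tuple

def candidate_source() -> Iterator[Callable[[int], int]]:
    # Hypothetical stand-in for an LLM emitting candidate implementations.
    yield lambda x: x + x   # first "generation": buggy attempt
    yield lambda x: x * x   # second "generation": passes the tests

def generate_until_tests_pass(candidates, test_cases: List[Tuple[int, int]],
                              max_iters: int = 10):
    # Iteratively try candidates until one passes every test case,
    # or give up after max_iters attempts.
    for i, fn in enumerate(candidates):
        if i >= max_iters:
            break
        failures = [(x, want) for x, want in test_cases if fn(x) != want]
        if not failures:
            return fn   # all tests pass: accept this candidate
    return None

tests = [(2, 4), (3, 9)]
solution = generate_until_tests_pass(candidate_source(), tests)
```

The first candidate happens to pass `(2, 4)` but fails `(3, 9)`, which is exactly the failure mode the comment worries about: a loop like this only optimizes for the tests it has, not for the code being sane.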
You have a pretty interesting idea that I hadn’t heard elsewhere. Do you know if there’s been any research to make an AI model learn that way?
In my own time messing around with some ML stuff, I’ve heard of approaches where you try to get the model to accomplish progressively more complex tasks within the same domain. For example, if you wanted to train a model to control an agent in a physics simulation to walk like a humanoid, you’d have it learn to crawl first, like a real human. I guess for an AGI it makes sense that you would have it try to learn a model of the world across different domains like vision or sound. Heck, since you can plug any kind of input into it, you could have it process radio, infrared, whatever else. That way it could have a very complete model of the world.
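That staged approach is usually called curriculum learning, and the gating logic can be sketched with a toy scalar "learner"; the stages, thresholds, and learning rule here are all invented for illustration:

```python
# Toy curriculum: the "agent" is just a scalar skill level that improves
# with practice, and a stage of difficulty d is mastered once skill >= d.
curriculum = [1, 2, 4, 8]      # e.g. crawl -> stand -> walk -> run
skill = 0.0
practice_log = []

for difficulty in curriculum:
    # Keep practicing the current stage until it is mastered, then move on;
    # this gating is the core idea of curriculum learning.
    while skill < difficulty:
        skill += 0.5           # each practice step improves the skill a bit
        practice_log.append(difficulty)

# Later stages only begin after earlier ones are mastered.
assert practice_log == sorted(practice_log)
```

In a real setup the scalar would be a policy's success rate on the current stage, and "practice" would be gradient updates, but the promotion rule is the same.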
I think the question isn’t “why are Western countries afraid of Israel” or “why does the West fear its citizens criticizing Israel,” but “what are Western countries planning to do in the near future (especially with the climate crisis) that requires them to support Israel and learn from it right now?”