

I can see laying off the dead weight dipshits who are holding everyone else back. My company does layoffs every year, and while I think it’s shitty overall, my team did rejoice when a particular middle manager got the axe.


The article has specific advice on how to make sure layoffs are a wise decision before cutting the jobs. In the movie Office Space, they had consultants interviewing everyone to determine who was adding value and who wasn’t. I guess those were the good ol’ days; now we’ve gotten to the point where the only consideration is whether this will make the numbers look good this quarter.


Maybe make regulations to limit the degree to which any car sold can spy on citizens? Nah, let’s limit competition so U.S. companies can keep making huge profits from inferior cars that still spy on everyone with no need to up their game.


I have a Linux HTPC on Mint where I have the Pegasus frontend with Jellyfin, VacuumTube, and a bunch of games. The main usability issue is the need to exit each app differently to get back to Pegasus. There’s no notion of using a home button to go back to the home screen and pause the current process, which is what I’d be hoping to get with Bigscreen. Otherwise, Pegasus is fine for launching apps with a remote, and Jellyfin and VacuumTube work fine with a remote as well.


Yeah, LLMs are useful tools, though not the silver bullet the hype proclaims them to be. The tech bros tightly controlling LLMs and chasing insane profits with their closed models, data centers, and subscriptions are the main problem. Open models like Qwen 3.6 27B, which approach frontier capabilities while running on consumer hardware, are really the only thing that gives me any hope for the future of LLMs.


AI is not doing the jobs; the remaining employees are.


Interesting read. Here’s a link without the paywall: https://archive.ph/20260507223322/https://www.bloomberg.com/news/articles/2026-04-27/why-china-s-deepseek-qwen-and-moonshot-are-a-worry-for-us-ai-rivals
Edit:
Anthropic has accused DeepSeek, Moonshot and another Chinese AI lab, MiniMax, of “industrial-scale distillation attacks” — illegally extracting capabilities from its proprietary Claude model using 24,000 fraudulent accounts to gain an edge
Anthropic says:
Illicitly distilled models lack necessary safeguards, creating significant national security risk
So open weight models are a national security risk? Guess we’d all better pay subscriptions, let the data centers be built, and let a couple companies building proprietary models have a monopoly to protect our security, then. 🙄


Right? Qwen 3.6 27B is fantastic, for example. Data centers need to stop hoarding all the hardware so we can afford to run models like that.


Spanking the code monkey.
Gabe doesn’t do a lot of interviews and mostly keeps out of the public eye. Elon Musk also had a lot of fans until he started running his mouth too much and revealing who he really is.


I think the key point is that you’re not outsourcing critical thinking to LLMs, but instead using them as tools to do grunt work you could have done yourself, just faster. That means constantly being critical of everything the LLM does: asking questions, asking for links to credible sources, asking it to lay out the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files especially helpful for making sure the LLM follows my desired practices without me constantly making it refactor.
Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.
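To give a concrete sense of the agent markdown files mentioned above, here’s a rough sketch of what one might contain. The filename and every rule below are just hypothetical examples — the actual filename depends on the tool (Cline, for instance, reads project rules from its own rules file), and the rules should reflect your own conventions:

```markdown
# Project conventions (hypothetical example rules file for a coding agent)

## Code style
- Prefer small, pure functions; flag anything over ~40 lines for review.
- Use the project's existing logging wrapper; never call print() directly.
- Match the formatting and naming patterns already in the file being edited.

## Workflow
- Before writing code, list the files you plan to touch and why.
- Cite a documentation link for any third-party API you use.
- Never delete or weaken a test; if one fails, explain why before changing anything.
```

The point is that the agent re-reads these rules on every task, so you state your practices once instead of repeating them in every prompt.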


Are all these data centers really going to be running at full capacity when open models like Qwen 3.6 27B approach frontier performance but can run on consumer hardware? Sure, it’s slow for now, though there are tweaks to optimize it, and how long until we see open models that run reasonably fast and give frontier models a run for their money? My company MacBook can run models like this, so will there be a point where companies stop paying hundreds per user per month for cloud AI and have devs run open models on the laptops they already have? I definitely won’t be surprised if that’s the case.


An official member of the US military industrial complex is making a phone with a proprietary OS that hoovers up your data and shoves AI slop in your face 24/7. What’s not to like?


Just as open weight models are getting good. Qwen 3.6 27B just dropped with claimed performance approaching Opus 4.6, but it can run on a Mac with an M-series SoC. I tested it out today on an M4 Pro with Ollama and Cline and was impressed with its reasoning, but it was slow. Going to try with llama.cpp tomorrow and mess around tweaking it for speed.
https://ai.rs/ai-developer/qwen-3-6-27b-local-coding-model
AI coding agents are useful, but it’s time for the cloud-based models to chill out so we can get cheap RAM again to run our shit locally.
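For anyone curious about the llama.cpp route mentioned above, the invocation is roughly like this. The GGUF filename and quantization level are hypothetical placeholders, not official artifact names, and flag defaults vary between llama.cpp builds:

```shell
# Hypothetical GGUF filename; the quantization (e.g. q4_k_m) trades quality for speed.
# -ngl 99 offloads all layers to the GPU (Metal on Apple Silicon),
# -c sets the context window (smaller contexts are faster),
# and --port exposes an OpenAI-compatible API that tools like Cline can point at.
llama-server -m qwen3.6-27b-instruct-q4_k_m.gguf -ngl 99 -c 8192 --port 8080
```

From there you point your editor’s agent extension at http://localhost:8080 instead of a cloud endpoint.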


If we are going to eschew open source projects from shitty tech companies, then there’s a pretty long list.
In many cases, legit games have more malware (spyware, DRM, rootkits, etc.) than the cracked versions, which strip it out. It’s annoying to pay for a game that won’t work offline because the company wants to collect your data, while the cracked version works offline just fine.