
  • not finishing so many of your games shows some kind of problem

    If they’ve played 23%, that’s a lot of games, as in, well over 1k. They said nothing about how many they’ve finished, but I don’t think “finishing” is all that important.

    What I’m more interested in is how much time they have for playing games. What’s their lifestyle like that they can play nearly 2k games while also accomplishing other life goals? It’s not an unreasonable amount, just sufficiently high that it raises some eyebrows.

    I feel like it’s an obligation for me to finish a game unless I don’t like it.

    If OP isn’t finishing any games, yeah, I agree. But there are a ton of games that I don’t find worth finishing, in any sense you define that, but that I still find worth playing.

    For example, I didn’t finish Brutal Legend because I really didn’t like the RTS bits at the end. I still love that game and recommend it, but I only recommend it w/ the caveat that the ending is quite different from the rest of the game and it’s okay to bail. That type of game isn’t going to have an amazing ending, so the risk of not seeing the ending is pretty small (and I can always look that up on YT or elsewhere if I want). I did the same for Clustertruck because the ending had an insane difficulty spike on the last level and I just didn’t care enough to finish it.

    However, other times I have pushed through, such as Ys 1 Chronicles, which has an insane difficulty spike on the final boss. I am happy I pushed through, because I really liked the world and the ending, which feeds into the next game (in fact, on Steam, it automatically started Ys II after finishing Ys 1). I ended up not liking Ys II as much (still finished), but I really liked the tie-over from the first to the second.

    So yeah, I don’t fault someone for not finishing games, but I do think they’re missing out if they never do.



  • A certain amount of skepticism is healthy, but it’s also quite common for people to go overboard and completely avoid a useful thing just because some rich idiot is pushing it. I’ve seen a lot of misinformation here on Lemmy about LLMs because people hate the environment it’s in (layoffs in the name of replacing people with “AI”), but they completely ignore the merit the tech has (great at summarizing and providing decent results from vague queries). If used properly, LLMs can be quite useful, but people hyper-focus on the negatives, probably because they hate the marketing material and the exceptional cases the news is great at shining a spotlight on.

    I’m also skeptical about LLMs’ usefulness, but I do find them useful in some narrow use cases I have at work. They’re not going to actually replace any of my coworkers anytime soon, but they do help me be a bit more productive since they’re yet another option to get me unstuck when I hit a wall.

    Just because there’s something bad about a technology doesn’t make it useless. If something gets a ton of funding, there’s probably some merit to it, so turn your skepticism into a healthy quest for truth and maybe you’ll figure out how to benefit from it.

    For example, the hype around cryptocurrency makes it easy to knee-jerk reject the technology outright, because it looks like it’s merely a tool to scam people out of their money. That is partially true, but it’s also a tool that makes anonymous transactions feasible. Yes, there are scammers out there pushing worthless coins in pump-and-dump schemes, but there are also privacy-focused coins (Monero, Z-Cash, etc.) that are being used today to help fund activists operating under repressive regimes. They’re also used by people doing illegal things, but hey, so is cash, and privacy coins are basically easier-to-use cash. We probably wouldn’t have had those w/o Bitcoin, though they use very different technology under the hood to achieve their aims. Maybe they’re not for you, but they do help people.

    Instead of focusing on the bad of a new technology, more people should focus on the good, and then weigh for themselves whether the good is worth the bad. I think in many cases it is, but only if people are sufficiently informed about how to use them to their advantage.


  • Can’t search for something on the net anymore without being served f-tier LLM-produced garbage.

    I don’t see a material difference vs the f-tier human-produced garbage we had before. Garbage content will always exist, which is why it’s important to learn how to filter it.

    This is true of LLMs as well: they can and do produce garbage, but they can be and are useful alternatives to existing tech. I don’t use them exclusively, but as an alternative when traditional search or whatever isn’t working, they’re quite useful. They provide rough summaries about things that I can usually easily verify, and they produce a bunch of keywords that can help refine my future searches. I use them a handful of times each week and spend more time using traditional search and reading full articles, but I do find LLMs to be a useful tool in my toolbox.

    I’m also frustrated by the energy use, but it’s one of those things that will get better over time as the LLM market matures from a gold rush into established businesses that need to actually make money. The same happens w/ pretty much every new thing in tech: there’s a ton of waste until the product finds its legs and then becomes a lot more efficient.


  • VR is still cool and will probably always be cool, but I doubt it’ll ever be mainstream. 3D was just awkward; they really wanted VR, but the tech wasn’t there yet.

    I own neither, yet I’ve been considering VR for a few years now, just waiting for more headsets to have proper Linux support before I get one.

    Likewise, I’m not paying for LLMs, but I do use the ones my workplace provides. They’re useful sometimes, and it’s nice to have them as an option when I hit a wall or something. I think they’re interesting and useful, but not nearly as powerful as the big corporations want you to think.


  • There’s a difference between healthy skepticism and invalid, knee-jerk opposition.

    LLMs are a useful tool sometimes, and I use them for refining general ideas into specific things to research, and they’re pretty good at that. Sure, what they output isn’t trustworthy on its own, but I can pretty easily verify most of what it spits out, and it does a great job of spitting out a lot of stuff that’s related to what I asked.

    For example, I’m a SW dev, so I’ll often ask it stuff like, “compare and contrast popular projects that do X”, and it’ll find a few for me and give easily-verifiable details about each one. Sometimes it’s wrong on one or two details, but it gives me enough to decide which ones I want to look more deeply into. Or I’ll do some greenfield research into a topic I’m not familiar with, and it does a fantastic job of pulling out keywords and other domain-specific stuff that help refine what I search for.

    LLMs do a lot less than their proponents claim, but they also do a lot more than detractors claim. They’re a useful tool if you understand the limitations and have a rough idea of how they work. They’re a terrible tool if you buy into the BS coming from the large corps pushing them. I will absolutely push back against people on both extremes.




  • That depends on what you mean by “know.” It generates text from a large bank of hopefully relevant data, and the relevance of the answer depends on how much overlap there is between your query and the data it was trained on. There are different models with different focuses, so pick your model based on what your query is like.

    And yeah, one big issue is the confidence. If users are aware of its limitations, it’s fine. I certainly wouldn’t put my kids in front of one without training them on what it can and can’t be relied on to do. It’s a tool, so users need to know how it’s intended to be used to get value from it.

    My use case is distilling a broad idea into specific things to do a deeper search for, and I use traditional tools for that deeper search. For that it works really well.


  • While true, I think it’s important to note that many buy the Switch for other reasons. My kids wanted a Switch, but I didn’t get it until there were enough games my wife and I really wanted to play. My wife was bummed about Kinect dying and wanted a replacement for her exercise games, and I had been missing Zelda games, so I got the Switch, some Just Dance games, Ring Fit Adventure, the two Zelda remakes, and a couple games for the kids. The kids have kind of taken it over, but it still fulfills our purposes in getting it.

    My point is that the Switch has a lot more appeal than just shutting kids up for a bit. It’s a good console on its own, and the only console I’m willing to buy. The PS5 and Xbox Series have nothing I’m interested in outside of a few exclusives, so my wife and I just play on our PCs and my Steam Deck.


  • My history with consoles is:

    1. Whatever my brother bought
    2. OG Xbox to play Halo
    3. Xbox 360 for Kinect games
    4. Switch - play w/ kids; Smash has been amazing for this
    5. Steam Deck - not a console, but I use it as one; got it to play games in bed

    I play most games on PC because I’m just not as interested in exclusives anymore, except maybe Zelda games, and with BOTW and TOTK, I’m less interested in those (they lost the formula I like).

    I’ll probably get the Switch 2 eventually, but I’ll wait until there’s a game I really want (say, ALttP remake or something), my kids break our OLED Switch, or there’s an OLED Switch 2 with better battery life.


  • I sincerely hope people understand what LLMs are and what they aren’t. They’re sophisticated search engines that aggregate results into natural language and refine results based on baked-in prompts (in addition to what you provide), and if there are gaps, the LLM invents something to fill them.

    If the model was trained on good data and the baked-in prompt is reasonable, you can get reasonable results. But even in the best case, there’s still the chance that the LLM hallucinates something; that’s just how they work.

    For most queries, I’m looking for which search terms to use for checking original sources, or sometimes a reference to pull out something I already know but am having trouble remembering (i.e., I will recognize the correct answer). For those use cases, it’s pretty effective.

    Don’t use an LLM as a source of truth, use it as an aid for finding truth. Be careful out there!