

  • It is unwrap’s fault. If they had handled the error properly, they would’ve had to deal with the problem explicitly, which would have clarified exactly what the problem was. In this case, I’d probably use expect() to add context. And when doing anything with strict size requirements, I’d explicitly check the size up front to make sure the data will fit, again for better error reporting.

    Proper error reporting could’ve made this a 5-min investigation.
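
    As a rough sketch of that pattern (MAX_FEATURES, Feature, and the error strings are invented for illustration; this is not the actual code in question):

    ```rust
    // Sketch only: names and the limit are invented.
    const MAX_FEATURES: usize = 200;

    struct Feature;

    fn take_features(parsed: Vec<Feature>) -> Result<Vec<Feature>, String> {
        // Explicit size check with a descriptive error, instead of letting
        // a later unwrap() panic with a context-free message.
        if parsed.len() > MAX_FEATURES {
            return Err(format!(
                "got {} features, but the limit is {MAX_FEATURES}",
                parsed.len()
            ));
        }
        Ok(parsed)
    }

    fn main() {
        let features = take_features(vec![Feature, Feature])
            // expect() still panics, but the message now says what went wrong.
            .expect("feature list exceeded the configured limit");
        println!("loaded {} features", features.len());
    }
    ```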

    Also, the problem should’ve been caught in the first place with unit tests and a test deploy. Our process here is:

    1. Any significant change to queries is tested with a copy of production data
    2. All changes are tested in a staging environment similar to production
    3. All hotfixes are tested with a copy of production data

    And we’re not a massive software shop; we have a few dozen devs in a company of thousands of people. If I worked at Cloudflare, I’d have even more rigorous standards given the global impact of a bug (we have a few hundred users, not billions like Cloudflare).


  • It is precious and beyond compare. It has tools that most other languages lack to prove certain classes of bugs are impossible.

    You can still introduce bugs, especially if you use certain features that the “standard” linter (clippy) catches by default and that no team would silence globally. .unwrap() is very controversial in Rust and should never be used in production code without clear justification. Even in my pet projects, it’s the first thing I clear out once basic functionality is there.

    This issue should’ve been caught at three separate stages:

    1. git pre-commit or pre-push should run the linter on the dev’s machine (see the sketch after this list)
    2. Static analysis checks should catch this both before getting reviews and when deploying the change
    3. Human code review
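
    One way to make stage 1 stick, as a minimal sketch (the lint selection is my illustration; these clippy restriction lints exist but have to be enabled explicitly, here at the crate root or via [lints.clippy] in Cargo.toml):

    ```rust
    // With these lints on, a pre-commit hook running
    // `cargo clippy -- -D warnings` blocks the commit.
    #![deny(clippy::unwrap_used)] // .unwrap() becomes a hard error
    #![warn(clippy::expect_used)] // .expect() is allowed but flagged

    fn parse_port(input: &str) -> Result<u16, std::num::ParseIntError> {
        input.parse() // propagate the error instead of unwrapping
    }

    fn main() {
        match parse_port("8080") {
            Ok(port) => println!("using port {port}"),
            Err(e) => eprintln!("bad port: {e}"),
        }
    }
    ```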

    The fact that it made it past all three makes me very concerned about how they do development over there. We’re a much smaller company, and we’re not even a software company (software dev is <1% of the total company), yet we do all of this. We don’t even use Rust (we’re a Python shop), but we still have robust static analysis for every change. It’s standard practice, and any company building anything more than a small in-house tool used by three people should have these safeguards in place.


  • Use something like Backblaze or Hetzner storage boxes for off-site backups. There are a number of tools for making this painless, so pick your favorite. If you have the means, I recommend running a disaster recovery drill every so often (i.e., disconnect the existing drives, reinstall the OS, and load everything from the remote backup).

    Generally speaking, follow the 3-2-1 rule:

    • 3 copies of everything on
    • 2 different types of media with
    • 1 copy off site (at least)

    For your situation, this could be:

    • 3 copies - your computer (NVMe?), TrueNAS (HDD?), off-site backup; ideally also have a third local device (second computer?)
    • 2 media - NVMe and HDD
    • 1 copy off site - Backblaze, Hetzner, etc.

    You could rent a cloud server instead, but it’ll be a lot more expensive than just renting storage.
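
    For a concrete sketch of the off-site leg, here’s a tiny job that shells out to a backup tool. The tool choice (restic with Backblaze B2), repo, and paths are assumptions for illustration, not a prescription:

    ```rust
    // Off-site (1 copy off site) leg of a 3-2-1 setup. Assumes restic with
    // its Backblaze B2 backend; credentials and the repo come from the
    // environment (RESTIC_REPOSITORY, RESTIC_PASSWORD, B2_ACCOUNT_ID,
    // B2_ACCOUNT_KEY). The path below is illustrative.
    use std::process::Command;

    fn run(args: &[&str]) -> bool {
        Command::new("restic")
            .args(args)
            .status()
            .map(|s| s.success())
            .unwrap_or(false)
    }

    fn main() {
        // Push the local dataset to the remote repository.
        if !run(&["backup", "/mnt/tank/important"]) {
            eprintln!("off-site backup failed");
            std::process::exit(1);
        }
        // Verify the remote repository is actually restorable.
        if !run(&["check"]) {
            eprintln!("remote repository failed verification");
            std::process::exit(2);
        }
        println!("off-site copy updated and verified");
    }
    ```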




  • Exactly.

    There’s a difference between gatekeeping and being transparent about what’s expected. I’m not suggesting people do it the hard way as some kind of hazing ritual, but because there’s a lot of practical value in maintaining your system yourself. Arch is simple, and by their definition of simple, the devs aren’t going to do a ton for you beyond providing good documentation. If your system breaks, that’s on you, and it’s on you to fix it.

    If reading through the docs isn’t your first instinct when something goes wrong, you’ll probably have a better experience with something else. There are plenty of other distros that will let you offload a large amount of that responsibility, and that’s the right choice for most people, because most people don’t want to mess with their system; they want to use it.

    Again, it’s not gatekeeping. I’m happy to help anyone work through the install process. I won’t do it for you, but I’ll answer any questions you have by pointing you to the right place in the docs.




  • Yes, Arch is really stable and has been for about 10 years. In fact, I started using Arch just before it got really stable (around the /usr merge) and stuck with it for a few years after. It’s a fantastic distro! If openSUSE Tumbleweed stopped working for me, I’d probably go back to Arch. I ran it on multiple systems, and my main reason for switching was that I wanted a stable release cycle on servers and a rolling release on desktop, while using the same tools on both.

    It has fantastic documentation, true, but most likely a new user isn’t going to go there; they’ll go to a forum post from a year ago and change something important. The whole point of going through the Arch install process is to force you to get familiar with the documentation. It’s really not that hard, and after the first install (which took a couple hours), the second took like 20 minutes. I learned far more in that initial install than I did in the 3-ish years I’d used other distros before trying Arch.

    CachyOS being easy to set up defeats the whole purpose, since users won’t get familiar with the wiki. By all means, go install CachyOS immediately after the Arch install, but do yourself a favor and go through it once. You’ll understand everything from the boot process to managing system services so much better.


  • I 100% agree. If you want the Arch experience, you should have the full Arch experience IMO, and that includes the installation process. I don’t mean this in a gatekeepy way, I just mean that’s the target audience and that’s what the distro is expecting.

    For a new user, I just cannot recommend Arch because, chances are, that’s not what they actually want. Most new users want to customize stuff, and you can do that with pretty much every distro.

    For new users, I recommend Debian, Mint, or Fedora. They’re release-based, which is what you want when starting out so stuff doesn’t change on you, and they have vibrant communities. After using one for a year or two, you’ll figure out what you don’t like about it and can pick something else.


  • I disagree. If you want to use Arch for the first time, install it the Arch way. It’s going to be hard, and that’s the point. Arch will need manual intervention at some point, and you’ll be expected to fix it.

    If you use something like Manjaro or CachyOS, you’ll look up commands online and maybe it’ll work, but it might not. There’s a decent chance you’ll break something, and you’ll get mad.

    Arch expects you to take responsibility for your system, and going through the official install process shows you can do that. Once you’ve gotten through it, go ahead and use an installer or a fork. You’ll know where to find the documentation when something inevitably breaks, so you’re good to go.

    If you’re unwilling to do the Arch install process but still want a rolling release, consider openSUSE Tumbleweed. It’s the trunk for several projects, some of them commercial, so you’re getting a lot of professional eyeballs on it. There’s a test suite every change needs to pass, and I’ve seen plenty of cases where they hold off on a change because a test fails. And when something does break (and it probably will), you just snapper rollback and wait a few days. The community isn’t as big as other distros’, so I don’t recommend it as a first distro, but they’re also not nearly as impatient as the Arch forums.

    Arch is a great distro, I used it for a few years without any major issues, but I did need to intervene several times. I’ve been on Tumbleweed about as long and I’ve only had to snapper rollback a few times, and that was the extent of the intervention.



  • > everyone panic selling could spread over to people panic selling everything and trying to get their hands on cold hard cash so their entire life savings dont vanish in an instant, so market wide we could see big drops?

    Yeah, that’s basically what happens in a major correction. In fact, stock prices are basically the result of how many people are buying vs selling: more buyers than sellers pushes prices up, more sellers than buyers pushes them down. Stock prices tend to have momentum precisely because of this (people try to jump on the bandwagon on the way up and jump off on the way down). And that’s also why we tend to see a quick recovery afterward, once all the facts come out.

    A 20-30% drop is a pretty big deal. It’s not anomalous, though. There have been 19 major corrections (over 20% loss) in the past 150 years, meaning one happens every 7-10 years on average (150/19 ≈ 7.9 years).

    I don’t think this is like the dot-com bubble or the 2008 financial crisis. But let’s say it is. If I had bought at the peak of the dot-com bubble (March 10, 2000), I would’ve gotten 5.3% annualized growth over the following 25 years (so $1k would be $3,600-ish), assuming I didn’t sell. The long-term impact would be limited.
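
    As a quick worked check on both figures (assuming simple annual compounding of the 5.3% number):

    ```latex
    % Frequency of >20% corrections over the past 150 years
    \[ \frac{150\ \text{years}}{19\ \text{corrections}} \approx 7.9\ \text{years between corrections} \]

    % $1k compounded at 5.3% annualized for 25 years
    \[ \$1{,}000 \times (1.053)^{25} \approx \$3{,}600 \]
    ```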

    The AI bubble popping wouldn’t be the catastrophe many are making it out to be. I think it’ll be closer to the 2020 correction.

    I think Nvidia is overvalued. I don’t think the economy will crash if AI crashes.