• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Eh, this seems a little half-assed, IMO. There’s no adjustment for error magnitude relative to the rest of the video’s content. If they fuck up a basic stat in an informational video, it can easily have a more significant effect on viewers than a major error on a niche topic in a long, multi-topic video.

    This ties into the whole prioritization portion of the post. Many of the things they classify as “low severity” and only warranting a pinned comment could lead viewers to completely different behavior. Example:

    1. Very Low Severity
    • The statement could possibly be misunderstood, but it’s generally true and most people would be fine with how it’s currently presented.
    • e.g. The host says, “One of DisplayPort’s main advantages over HDMI is its higher bandwidth,” but this is only true when comparing certain generations of the standards. HDMI 2.1, for example, has much higher bandwidth than DP 1.1.

    If the whole video is about the differences between HDMI and DP, this statement is the complete opposite of the truth on a basic, primary statistic. The connector’s ONLY purpose is to transfer data, so rate and bandwidth are its defining specs. The erroneous statement is an absolute - something IS something. All LTT needs to do is add “some versions” or “generally”, and THEN it becomes a softer interpretation error. Compare the two readings below (with a quick version comparison after them):

    • DP’s bandwidth is GENERALLY higher than HDMI’s? Let me look up when this statement doesn’t hold true…
    • DP bandwidth IS HIGHER than HDMI? Guess I don’t need to look up that info because they’re presenting it as a universal truth.
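
    To make the version dependence concrete, here’s a quick sketch using the commonly cited raw link rates (approximate maximums; real usable throughput is lower after encoding overhead):

    ```python
    # Approximate maximum raw link rates in Gbit/s (commonly cited
    # figures; usable throughput is lower after encoding overhead).
    HDMI = {"1.4": 10.2, "2.0": 18.0, "2.1": 48.0}
    DP = {"1.1": 10.8, "1.2": 21.6, "1.4": 32.4, "2.0": 80.0}

    # "DP has higher bandwidth than HDMI" flips depending on which
    # generations you pair up - that's the whole point.
    for hv, hbw in HDMI.items():
        for dv, dbw in DP.items():
            winner = "DP" if dbw > hbw else "HDMI"
            print(f"HDMI {hv} ({hbw}) vs DP {dv} ({dbw}): {winner} wins")
    ```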

    Purchasing decision time… That GPU has 3x HDMI and 1x DP ports? Well, I’m not sure about these other stats, but I know that HDMI sucks compared to DP, so I can eliminate all the GPUs with more HDMI ports than DP ports. I guess I’m getting this 5-year-old GPU with 1x HDMI and 3x DP ports instead.

    If that mistake was in a 30 minute news video covering 8 stories and the statement was an aside, okay, maybe a pinned comment.

    In reality, all these errors should be easy to correct, but that’s hampered by YouTube’s tools. LTT should be able to replace a small segment of video with a cutaway to stock or B-roll footage and a voiceover giving the correct information. But you can’t do that on YT.

    Another alternative, if the information is insignificant enough to warrant only a pinned comment, is to simply mute those 2-3 seconds of video. If the misstatement was significant enough that muting it makes the rest of the video tough to follow, then the mistake wasn’t small enough for a pin.
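
    For what it’s worth, the mute itself is trivial offline - a minimal sketch using ffmpeg’s volume filter (the 42-45s window is made up; point it at the actual misstatement):

    ```python
    import subprocess

    # Mute a short span of audio while stream-copying the video track.
    start, end = 42, 45  # hypothetical timestamps of the misstatement
    subprocess.run([
        "ffmpeg", "-i", "video_in.mp4",
        "-c:v", "copy",  # leave the video stream untouched
        # The volume filter's timeline option zeroes the audio only
        # between the two timestamps; everything else passes through.
        "-af", f"volume=enable='between(t,{start},{end})':volume=0",
        "video_out.mp4",
    ], check=True)
    ```

    The bottleneck is on YouTube’s side, like I said above - you’d have to re-upload the whole thing rather than patch it in place.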



  • Generally, if someone’s being a total asshole so severely that they have to be yeeted along with several thousand other unaware bystanders, I expect to see a bunch of examples within the first… 2, maybe 3, links.

    Unless someone can point me to a concise list of examples (actual data), I find it more disturbing that an admin on another server can yeet my account because they make noise on a Discord server. I mean, yes, federating is a feature, but why even offer the ability to enroll users? Maybe for a group of friends or something, but taking on rando users is nothing but a liability for everyone involved.


  • I almost thought I had written your comment and completely forgotten about it. No, I just almost made the exact same comment, and I want that hour of my life back.

    If there was some over the top racist rant, I sure didn’t see it. And the admin pushing for the defederation sounds so bizarre. Bizarre is the best word I could come up with because “petty” makes me think it was like high school politics. This is closer to a grade school sandbox argument.

    The worst I saw was “defedfags” and it was used in a way that was meant to highlight how they never said anything offensive. Like saying, “If you thought what I said before was offensive, let’s see how you respond to something intended to be negative.”

    The crazy thing is that the decision is being made because the admin just liked a post. It’s not even because of the post content - which has nothing controversial and appeared maybe 8 times in my Lemmy/kbin feed yesterday.

    Editing to add that this is the article: https://kbin.social/search?q=wakeup+call


  • Oh, I’ve just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I’ve read, it sounds like the files are smaller at the same quality, require the same or less processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (with less of a performance gap between the two operations), and support progressive loading. I’m kinda waiting for the catch, but haven’t seen any major downsides besides less optimal performance for very low resolution images.

    I don’t know how they ingest the image data, but I would assume they’d be constantly building sets, rather than keeping lots of subsets, if just for the space savings of de-duplication.

    (I kinda ramble below, but you’ll get the idea.)

    Mixing and matching the speed/efficiency and storage improvements could mean a whole bunch of gains. I/O is always an annoyance in any large-set analysis. With JPEG XL, there’s less storage needed (duh), more images in RAM at once, faster transfer to and from disk, fewer cycles wasted waiting on I/O in general, the ability to store more intermediate datasets and more descriptive models, easier archiving of the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Accessing the I/O bus must be one of the larger time sinks, and CPU load becomes a concern just for moving data around.
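
    To put toy numbers on the “more images in RAM” point - the size ratio below is my assumption for illustration, not a benchmark, and it only helps while the bytes stay compressed (decoded pixels cost the same either way):

    ```python
    # Back-of-envelope math for the staging/I/O argument.
    jxl_vs_jpeg_ratio = 0.6    # ASSUMED: JXL file ~60% the size of the JPEG
    ram_budget_mb = 64 * 1024  # say 64 GB set aside for compressed images
    avg_jpeg_mb = 4.0          # hypothetical average source file size

    jpegs = ram_budget_mb / avg_jpeg_mb
    jxls = ram_budget_mb / (avg_jpeg_mb * jxl_vs_jpeg_ratio)
    print(f"JPEG: ~{jpegs:,.0f} staged images")
    print(f"JXL:  ~{jxls:,.0f} staged images ({jxls / jpegs:.2f}x)")
    ```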

    I also wonder if the support for progressive loading might be useful for making more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller datasets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on a model trained from those 8k images. I wonder if it’s possible to use the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling step.
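
    Pillow’s draft mode already does a crude version of this for plain JPEG - the decoder targets a fraction of native resolution up front instead of materializing the full-size pixels - so the idea isn’t far-fetched. Filename below is a placeholder:

    ```python
    from PIL import Image  # Pillow

    # draft() hints the JPEG decoder to decode near this size (it snaps
    # to 1/2, 1/4, or 1/8 of native), so the full 8k buffer never exists.
    with Image.open("photo_8k.jpg") as im:
        im.draft("RGB", (1024, 1024))
        im.load()          # decodes at the reduced scale
        print(im.size)     # far smaller than the native resolution
    ```

    As I understand the format, progressive JPEG XL could go one better: stop reading bytes off disk entirely once you have enough passes, instead of just skipping decode work.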

    Any time I see a big feature jump, like better file size, I assume the trade-off in another feature negates at least half the benefit. It’s pretty rare, from what I’ve seen, to have improvements on all fronts.



  • I’m generally a Windows user, but on the verge of doing a trial run of Fedora Silverblue (just need to find the time). It sounds like a great solution to my… complicated… history with Linux.

    I’ve installed Linux dozens of times going back to the 90s (LinuxPPC, anyone? Yellow Dog?), and I keep going back to Windows because I tweak everything until it breaks. Then I have no idea how I got to that point, but no time to troubleshoot. Being able to easily get back to a stable system that isn’t a fresh install sounds great.
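
    From what I’ve read, the rollback story boils down to rpm-ostree keeping old deployments side by side - something like this (untested by me, scripted here just for illustration):

    ```python
    import subprocess

    # Silverblue keeps previous OS images ("deployments") around.
    subprocess.run(["rpm-ostree", "status"], check=True)    # list deployments
    subprocess.run(["rpm-ostree", "rollback"], check=True)  # boot the previous one next time
    ```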