  • Yes, it’s fairly simple to do: essentially the user needs to download an image of a Linux install disc, flash it onto a USB stick (or a DVD, I guess), and then reboot their PC. They may need to press a key at boot to open the boot menu and select the USB stick (or enter the BIOS to change the boot order). There’s a quick flashing sketch at the end of this comment.

    After that, most distros offer a very easy to follow installer which will install the new OS.

    Most Linux installs can be done alongside Windows (on the same hard drive or on its own drive), but Windows tends to break the boot loader with updates. It’s generally better to only dual boot if you’re good at fixing things - otherwise a full Linux install is better.

    The most important thing is to back up all your important data, and only do this if you genuinely want to leave Windows. I’d make sure your Windows license is digital before doing this too, as that allows using Windows again if you want to go back.

    I’d say anyone can use Linux; it’s user friendly and robust. In terms of installing Linux, I’d only do it if you are sure you know what you’re doing - installing any OS, including Windows, can involve troubleshooting problems.
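
    For anyone following along, here’s a minimal sketch of the flashing step from a Linux terminal. The ISO filename and /dev/sdX are placeholders for your actual download and USB stick, and graphical tools like Fedora Media Writer or balenaEtcher do the same job:

    ```
    # List block devices and identify the USB stick - double-check the name,
    # because dd overwrites whatever device you point it at.
    lsblk

    # Write the ISO to the stick (placeholders: the ISO name and /dev/sdX).
    sudo dd if=linux-install.iso of=/dev/sdX bs=4M status=progress conv=fsync
    ```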


  • I’ve tried Arch - it allows you to make a system that is exactly what you want, with no bloat from installing stuff you never need or use. It also gives you absolute control.

    On other distros like Fedora, you get a pre-configured system set up for a wide range of users. You can trim down the packages somewhat, but you will often have core stuff installed that is more than you’ll need, as it caters to everyone.

    Arch allows you to build it yourself, installing only the things you actually want and configuring them exactly how you want. (There’s a rough sketch of what that looks like at the end of this comment.)

    Also you learn an awful lot about Linux building your system in this way.

    I liked building an Arch system in a virtual machine, but I don’t think I could commit to maintaining an Arch install on my host. I’m happy to trade bloat for a “standard” experience that means I can get generic support. The more unique your system, the more unique your problems can be, I think. But I can see the appeal of Arch - “I made this” is a powerful feeling.
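
    As a rough illustration of that “only what you want” point, this follows the standard Arch install flow, with /mnt as the mounted target partition and the package picks being just examples:

    ```
    # Install only the packages you explicitly list - nothing else comes along.
    pacstrap -K /mnt base linux linux-firmware

    # Then add exactly the extras you want, e.g. an editor and networking.
    pacstrap -K /mnt vim networkmanager
    ```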


  • I think the new device is good news. I can see what you’re saying - the benefit is if Steam Machines expand the PC games market with former console-only players. But otherwise the threshold for PC development is already much lower than for consoles: there are no dev kit fees, there’s a wide choice of engines to target, relatively greater independence, etc.

    The Steam Machine may help somewhat by providing a specific hardware profile to target, but the games are still on Steam’s store, so they still have to run widely on Windows or Linux. That’s always been the complexity of PC development, and the Steam Machine doesn’t change that much. Although admittedly the Steam Verified benchmarks are useful for simplifying users’ understanding of what their kit can actually run, which will benefit indie devs.


  • For me it seems to be that when you go through to download the Windows binary, you get an iframe on the page containing another site. That site has ads and serves up the download. So I’m guessing the ads are on the website that provides VideoLAN with hosting for its binaries?

    They are old-fashioned intrusive ads pretending you need to click them to start your download, when the download has already started.



    • OS --> Linux openSUSE with KDE

    • YouTube --> FreeTube - open source, private YouTube client for Linux, macOS and Windows

    • Downloading music/videos --> yt-dlp (usage sketch at the end of this list)

    • Downloading videos/images --> gallery-dl

    • Email --> Thunderbird (it has really moved forward in the last few years)

    • Notes --> Joplin

    Self-hosting (mine is on a Raspberry Pi):

    • Streaming library - Jellyfin

    • Photo library - Immich

    • Downloads - qBittorrent, Prowlarr, Radarr, Sonarr and LazyLibrarian in a Docker stack with a VPN

    • Smart home - Home Assistant

    • File sync - Syncthing (I don’t have problems with long file names - maybe a Windows issue or Linux FS? I use ext4 on all my devices and don’t use Windows anymore)
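
    For the two downloaders above, typical usage is a one-liner; the URLs here are placeholders:

    ```
    # Download a video (yt-dlp picks the best available format by default).
    yt-dlp "https://example.com/watch?v=VIDEO_ID"

    # Audio only, converted to mp3 (requires ffmpeg to be installed).
    yt-dlp -x --audio-format mp3 "https://example.com/watch?v=VIDEO_ID"

    # Download a whole gallery from a supported site.
    gallery-dl "https://example.com/gallery/GALLERY_ID"
    ```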


  • Looking at your error, it’s because rsync itself is erroring.

    I’d start by testing rsync with an individual text file saved to /dev/dm-0 and see what error is returned (there’s a quick sketch at the end of this comment).

    Timeshift is good, but it is basically just a tool that uses rsync to save a copy of your system folders (or other folders if you wish).

    Rsync needs to be able to read the source and write to the destination, so I’d start with testing that Rsync is able to do that.

    Given you’re using an encrypted partition, it’s possible you’re trying to read/write to the wrong locations. You’ve provided device UUIDs, but you’d probably actually need to be backing up the mounted, decrypted locations. I.e. the root file system / will actually be a mounted location in your Linux set up, probably under /run, with symlinks pointing to it for all the different system folders. Similar for /home/ if you want to back up personal files.

    The device UUID would point to the filesystem containing the encrypted file (managed by LUKS), which will have very limited read/write permissions, rather than directly to the decrypted contents of the / or /home partitions as you’d expect in a normal system. In particular, if /dev/dm-0 (looks to be an NVMe drive) is an encrypted destination, then really you want to be pointing directly to its decrypted, mounted location to write your files into, not the whole device.

    Edit: think of it like this - you don’t want to back up the encrypted container with Timeshift, you want to back up the decrypted contents (your filesystem) into another location in your filesystem (encrypted or decrypted). If the destination is also an encrypted location, you need to back up into its file system, not the device where the encrypted file sits. So use specific filesystem paths, not UUIDs. That would be something like /mnt/folder or /run/folder, not /dev/anything - that’s a hardware location, and not directly mounted in an encrypted filesystem, unlike how it can be in a non-encrypted system.
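
    A minimal sketch of the test I mean, with hypothetical paths - substitute wherever your decrypted backup filesystem is actually mounted:

    ```
    # Make a throwaway test file.
    echo "test" > /tmp/rsync-test.txt

    # Try copying it to the *mounted* destination filesystem, not the raw device.
    # Any permission or "not a directory" error here is the error Timeshift hits.
    rsync -av /tmp/rsync-test.txt /run/timeshift/backup/

    # A dry run (-n) against the real source is also a safe sanity check.
    rsync -avn /home/ /run/timeshift/backup/home-test/
    ```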


  • Any point and click adventure game - there are loads, including old classics and good modern games.

    The Monkey Island remasters are fun and can be played with a mouse. The Broken Sword games are also good.

    The Rusty Lake games are great if you prefer puzzle games to narrative ones. They still have a great, somewhat surreal plot, just not like a point and click narrative game.

    Also, if you haven’t played Dwarf Fortress, now is the time to learn - the siege update came out this week. Mouse or keyboard, or both, but it definitely can be done one handed.

    Vampire Survivors, which others have suggested, is a good shout - one hand on the keyboard is enough and it’s very addictive.


  • 100% CPU use doesn’t make sense. RAM would be the main constraint, not the CPU. Worth looking into - maybe a bug or a broken piece of software.

    Also the DE may be more of an issue than the distro itself. You could install an even more lightweight desktop environment like Openbox. It’s also worth checking whether you’re using X11 or Wayland (a one-liner for that is at the end of this comment). It’s easy to imagine Wayland has not been optimised or extensively tested on a device like yours, and it could easily be a random bug if the DE is pushing your CPU to 100%.

    There are also super lightweight distros like Puppy Linux.
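
    Checking whether you’re on X11 or Wayland is a one-liner (a standard environment variable, so it should work on any distro):

    ```
    # Prints "x11" or "wayland" for the current desktop session.
    echo $XDG_SESSION_TYPE

    # Then watch which process is actually pinning the CPU.
    top
    ```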


  • It had to happen eventually, and it seems a reasonable time to make the move. It’ll be beneficial for all Linux users, and probably a huge relief for Gnome devs to be able to focus purely on Wayland.

    It will just suck a bit for those on rolling release distros who still experience major issues with Wayland, particularly when it’s not the Gnome or Wayland projects that need to make a fix - looking at you, Nvidia.

    I wouldn’t be surprised if other big DEs, such as KDE, start making firmer plans for dropping X11. I’m one of the 30% of KDE users still using X11 - for me it was Nvidia issues, and I do remain anxious about being reliant on drivers from a notoriously bad manufacturer. Having said that, the drivers have improved massively over the past 18-24 months, for me at least, and maybe everyone moving over to Wayland is what’s needed to force Nvidia to act.


  • In terms of KDE dependencies, you’re basically talking about Qt. The number of packages you download shouldn’t be too large, and they’re likely used by other Qt programs, which are common.

    However there is also GSConnect, which is a Gnome extension that uses the KDE Connect protocol.

    I would say that your concerns regarding the KDE Connect dependencies should be balanced against the good Android and iOS support, and the wide use of KDE Connect means it is well maintained, supported and responsive to security updates. These considerations may outweigh the installation of packages that you otherwise won’t be using. It may be better to go mainstream and accept the dependencies than to hunt down a less supported alternative and deal with the associated shortcomings. (If you want to check what you’d actually be pulling in, there’s a dry-run one-liner below.)
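
    Most package managers have a dry-run mode for exactly this; a sketch assuming a Debian/Ubuntu-based system and the usual package name:

    ```
    # Lists everything that would be installed, without installing anything.
    apt-get install --dry-run kdeconnect
    ```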


  • The key is getting out at the right time, and that is weighted massively against small investors. The big investors and institutions control the market and can move quickly, while small investors cannot.

    Tesla is not doing well - look at its falling sales. It’s a risky stock to hold. The AI companies are also highly risky stocks to hold.

    That doesn’t mean don’t hold them - all anyone is really saying is that these are high risk investments, and at some point they are probably going to crash because it’s a bubble.

    That doesn’t necessarily mean “don’t invest”. It does certainly mean being prepared to get out fast, and only using money you can afford to lose when investing in such high risk stocks.


  • It’s about short term vs long term costs, and AWS has priced itself to be cheaper short term but a bit more expensive long term.

    Companies are more focused on the short term - even if something like AWS is more expensive long term, if it saves money in the short term that money can be used for something else.

    Also, many companies don’t have the money upfront to build out their own infrastructure quickly, but can afford longer term, gradual costs. The hope would be that even though it’s more expensive, they reach a profitable scale faster, making the extra expense paid to AWS worth it.

    This is how a lot of outsourcing works. And it’s exacerbated by many companies being very short term and stock price focused. Companies could invest in their own infrastructure for long term gain, but they often favour short term profit boosts and cost reductions to boost their share price or pay out to shareholders.

    Companies frequently do things not in their long term interests for this reason. For example, companies that own their own land and buildings sell them off and rent them back. Short term it gives them a financial boost; long term it’s a permanent cost and a loss of assets.

    In Signal’s case it’s less of a choice; it’s funded by donations and just doesn’t have the money to build out its own data centre network. Donations will support ongoing, gradually scaling costs, but it’s unlikely they’d ever get a huge tranche of cash to be able to build data centres worldwide. They should still be using multiple providers though, and they should also look to build up some infrastructure of their own for resilience and lower long term costs.


  • It does make sense for Signal as this is a free app that does not make money from advertising. It makes money from donations.

    So every single message and every single user is a cost, without any ongoing revenue to pay for it. You’re right about the long run, but you’d need the cash up front to build out that infrastructure in the short term.

    AWS is cheap in the sense that, instead of an initial outlay for hardware, you largely only pay for actual use and can scale up and down easily as a result. The cost per user is probably going to be higher than if you were to completely self host long term, but self hosting would mean finding many millions to build and maintain data centres all around the world. Not attractive for an organisation living hand to mouth.

    However, what does not make sense is being so reliant on AWS alone. Using other providers to add more resilience to the network would make sense.

    Unfortunately this comes back to the real issue - AWS is an example of a big tech company trying to dominate a market with cheap services now for the potential benefit of a long term monopoly and raised prices in the future. They have 30% market share, and already an outage at Amazon is highly disruptive. Even at 30% we’re at the point of end users feeling locked in.


  • Rust or mould, it doesn’t really matter. As others have said, it’s on the outer part of the circle - the bit contacting the outer glass thread. The inner circle is the plug that contacts the contents, and it is clear.

    If it feels scratchy with a fingernail it’s rust; if it’s soft and scrapes off it’s mould. But as I said, it’s not in contact with the contents, so it doesn’t matter.

    Also, the contents of the jar are pickled. That means brine or vinegar, which is highly acidic and is what keeps the food fresh and prevents mould and bacteria. So if the pickles themselves look fine, then they’ll be fine to eat. If the pickling had failed, the contents would be mouldy.

    Rust would make sense, as the contents of the jar are acidic and acids accelerate rust. There could be small pockets of air left at that location when you sealed the jar, and some fluid inevitably gets forced out as it is sealed; air plus acid is perfect for rust. But the jars were otherwise well sealed internally, as there is no rust on the inner part of the circle, suggesting it plugged the jar, contacted the fluid directly, and left no gas behind.

    This likely reflects that the jar lids are not a quite perfect fit for the jar, or possibly were not screwed on to their maximum tightness, leaving air behind at those locations. But they were screwed on well enough to seal the contents.


  • So in terms of hardware, I use a Raspberry Pi 5 to host my server stack, including Jellyfin with 4K content. I have an NVMe module with a 500GB stick, and an external HDD with 4TB of space via USB. The Pi 5 is headless and accessed directly via SSH or RDC.

    The Raspberry Pi 5 has H.265 hardware decoding, and if you’re serving 1 video at a time to any 1 client you shouldn’t have any issues, including up to 4K. It will of course use resources to transcode if the client can’t play that content directly, but the experience should be smooth for 1 user.

    For more clients, it will depend on how much heavy lifting the clients do. In my case I have a mini PC plugged into my TV; I stream content from my Pi 5 to the mini PC, and the mini PC does the heavy lifting in terms of decoding. The Pi 5’s hardware does not; it just transfers the video and the client does the hard work. If all your clients are capable, then such a set up would work with the Pi 5.

    An issue would come if you wanted to stream your content to multiple devices at the same time and the clients don’t directly support H.265 content. In that case, the Pi 5 would have to transcode the content to another format bit by bit as it streams it to the client. It’d cope with 1 user for sure, but I don’t know how many simultaneous clients it could support at 1440p.

    The other consideration is what other tools are being used on the server at the same time. Again, for me, I live alone so I’m generally the only user of my Pi 5 server’s services. Many services are low powered, but I do find things like importing a stack of PDFs into Paperless-ngx surprisingly CPU intensive, and in that case the device could struggle if also expected to transcode content.

    I think from what you describe the Pi 5 could work, but you may also want to look at a higher powered mini PC if your budget allows.

    For reference, I use DietPi as the distro on my server, and I use a mix of DietPi packages (which are very well made for easy install and configuration) and Docker. I am using quite a few Docker stacks now due to the convenience of deploying them (a minimal Jellyfin example is at the end of this comment). DietPi is Debian based, with a focus on providing pre-configured packages to make set up easy, but it is still a full Debian system and anything can be deployed on it.

    Obviously the other consideration is that the Pi 5 is an ARM device, while a mini PC would be x86_64. But so far I’ve not found any tools or software I’ve wanted that aren’t compiled and available for the Pi 5, either via DietPi or Docker; ARM devices are popular in this realm. I have come across a bug in Docker on ARM devices which broke my VPN set up - that was very frustrating, and I had to downgrade Docker a few months ago while awaiting the fix. That may be worth noting given Docker is very important in this realm and most servers globally are still x86.

    If I were in your position and I had $200, I’d buy the maximum CPU and GPU capability I could in 1 device, so I’d actually lean towards a mini PC. If you want to save money then the Pi 5 is reasonable value, but you’d need to include a case, and you may want to consider an NVMe or SSD companion board. Those costs add up, and the value of the mini PC may compare better as an all in one device, particularly if you can get a good one second hand. There are also other SBCs that may offer even better value or more power than a Pi 5.

    Also bear in mind that I have both a mini PC and a Pi 5; they do different things, with the Pi 5 as the server while the mini PC is a versatile device that I play games on, for example. If you will only have 1 server device and pre-existing smart TVs etc., you’ll be more reliant on the server’s capabilities, so again you may want to opt for the most powerful device you can afford at your price point.
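
    For what it’s worth, the core of a Jellyfin deployment on a device like this is only a few lines. This is a minimal sketch rather than my exact stack - the host paths are placeholders, and on a Pi you’d add the hardware decode device mappings per Jellyfin’s docs:

    ```
    # Minimal Jellyfin container (a sketch - host paths are placeholders).
    # 8096 is the web UI port; /config holds settings, /cache the transcode
    # cache, and /media is the library, mounted read-only.
    docker run -d \
      --name jellyfin \
      -p 8096:8096 \
      -v /srv/jellyfin/config:/config \
      -v /srv/jellyfin/cache:/cache \
      -v /mnt/media:/media:ro \
      --restart unless-stopped \
      jellyfin/jellyfin
    ```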


  • So RetroArch is the standard front end for the open source Libretro collection of emulation engines/cores. It’s a decent interface, but not necessarily to everyone’s taste. The Libretro cores are excellent and give a standardised way of managing and emulating a whole range of game systems.

    ES-DE is a frontend for numerous emulators, aimed at helping people organise their games collections and emulators in a visual way. It can act as a good front-end for RetroArch, as well as other emulators. It’s a more customisable and perhaps more user friendly experience, which also works well with controllers.

    RetroArch + ES-DE is a common combination, and for Linux users (including Steam Deck) RetroDECK is one great option. RetroDECK is a single preconfigured Flatpak with ES-DE, RetroArch and other emulators all packaged together and containerised. All you have to do is install the Flatpak (one-liner below), then add the games and ROMs you want.

    EmuDeck is another option which again combines RetroArch + ES-DE amongst other tools. It’s available for Linux, including SteamOS, but also Windows and Android. It’s an installation script that lets you select the emulators and tools you want; it installs them all from the available stores or from the projects’ websites, and then configures them on your system. Then you add your games and ROMs as wanted.
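
    For the RetroDECK route mentioned above, installation really is a one-liner. This assumes the usual Flathub application ID, so double-check it on flathub.org:

    ```
    # Install RetroDECK from Flathub, then launch it and point it at your ROMs.
    flatpak install flathub net.retrodeck.retrodeck
    flatpak run net.retrodeck.retrodeck
    ```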