It’s about fzf. I use skim, btw ;p
Not offline, but self-hostable: there’s LinguaCafe, a book-reading web app with tons of niceties. I have yet to try it myself.
I’m using github.com/mag37/dockcheck for this, with its “-d N” argument. There’s a trade-off between stability and security that you need to weigh for yourself, and it will also depend on which services you’re hosting. For example, Nextcloud and Immich would be disastrous under such a regime.
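For reference, a minimal sketch of how a scheduled run could look (the cron schedule, install path, and 7-day threshold are my example assumptions, not dockcheck defaults; only the “-d N” flag comes from the tool itself):

```shell
# Crontab fragment: run dockcheck every Monday at 04:00,
# considering only containers whose images are at least 7 days old.
# Path and schedule are assumptions; adjust to your setup.
0 4 * * 1  /opt/dockcheck/dockcheck.sh -d 7
```

The age threshold is the stability/security dial mentioned above: a larger N means fewer surprise updates, at the cost of lagging behind security fixes.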
I used it for an afternoon or so a few years back and didn’t find it useful. It would generate a text file with my commit messages, much like redirecting git log. Please take this as an oversimplification; maybe there’s more to it and I just couldn’t get the gist…
I’m using the MX Linux “AHS” edition; it ships KDE from their “AHS” repos to support the latest hardware and graphics cards. You could also check the non-AHS edition; there might be a meta-package for KDE Plasma, and that’s it…
Thanks. I looked it up with searx and apparently read something wrong.
Not what you asked for, but I’d go with… ChatGPT, Claude, Mistral, etc. all have free plans that are generous enough nowadays. There’s also Ollama for running models locally. A SearXNG plugin for the latter would be neat!
We should definitely have a wiki (though people should use “search” too; I wonder whether a wiki would really help). This topic comes up every month. I’ve posted this before; here it is again: https://github.com/anderspitman/awesome-tunneling
Perhaps a chronological view is a bonus if the idea lives on long enough. Links between stories, or tags, could be useful at some point too… https://www.usememos.com/
If you’re looking to run an LLM chatbot (e.g. with Ollama), you might prefer NVIDIA; I have been using my card without problems. If your only use cases are video games and movies, then AMD is probably fine. Something similar applies to transcoding (serving movies to many devices): check which brand offers the best support, since some NVIDIA cards have hacks to enable features that are otherwise disabled for market segmentation. In any case, depending on your use case, you might want to compare specific models rather than just brands. On top of that, prices fluctuate a lot: you may find a great card at a great price, but with some compromise on the functionality you want. I’ve been there; ideally you want something that covers all your needs, including future ones…
Have you tried protonvpn-cli? It has no GUI, of course.
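If you haven’t tried it, the basic flow looks roughly like this (subcommand names from memory; check `protonvpn-cli --help` on your version, as they have changed between releases):

```shell
protonvpn-cli login <username>   # authenticate once
protonvpn-cli connect --fastest  # pick the fastest server
protonvpn-cli status             # show current connection
protonvpn-cli disconnect
```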
It seems like you last used Linux a long time ago, or used a “libre” distro without proprietary drivers and tried to run hardware that was less than a year old.
How about using some isolation? nix-env, distrobox, or flathub (official builds only?). I have listed them in my own (personal) order of preference.
From what I can see, it’s like distrobox (using Podman) for Fedora immutable spins. Cool!
Please, go ahead! I like what you did previously, asking about recent improvements people applied to their systems. Just now, I have realized that moderating is also about inspiring significant contributions within the community. Challenge accepted!
Ah, docker-mailserver and delta.chat could also be great for your case!!
E2E encryption is complicated; if you self-host for a group, TLS plus encrypting data at rest (storage) may be enough. Work out a threat model first. That said, I would recommend snikket.org, which is a bundle of extensions on top of XMPP, the open-source IM protocol that was the base of almost every app out there. Matrix and Rocket.Chat are both alright too. It also depends on your resources; Synapse requires a lot of RAM (or so I heard).
Syncthing, because it’s p2p/local-first, meaning it’s robust to interruptions.
Yes. I believe all self-hosted apps are like that. As an example, I have a Docker container running SearXNG and use it locally on my PC as the default search engine. Just keep in mind that Docker Compose port mapping (e.g. “3000:80”) binds to all available interfaces unless you specify one, like “127.0.0.1:3000:80”.
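A minimal Compose sketch of the difference (the service name, image, and port numbers are illustrative, not my actual setup):

```yaml
services:
  searxng:
    image: searxng/searxng
    ports:
      # - "3000:80"           # binds to 0.0.0.0: reachable from the whole LAN
      - "127.0.0.1:3000:80"   # loopback only: reachable just from this machine
```

With the loopback binding, the app is available at http://127.0.0.1:3000 on the host but not from other devices, which is usually what you want for a personal search instance.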