I’m thinking of expanding my homelab to support running some paid SaaS projects out of my house, and so I need to start thinking about uptime guarantees.
I want to set up a cluster where every service lives on at least two machines, so that no single machine dying can take a service down. The problem is the reverse proxy: the router still has to point port 443 at a single fixed IP address running Caddy, and that machine will always be a single point of failure. How would I go about running two or more reverse proxy servers with failover?
I’m guessing the answer has something to do with the router, and possibly getting a more advanced router or running an actual OS on the router that can handle failover. But at that point the router is a single point of failure! And yes, that’s unavoidable… but I’m reasonably confident that the unmodified commodity router I’ve used for years is unlikely to spontaneously die, whereas I’ve had very bad luck with cheap fanless and single-board computers, so anything I buy to use as an advanced router is just a new SPOF and I might as well have used it for the reverse proxy.
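The closest thing I’ve turned up on my own is keepalived/VRRP: both proxy boxes run Caddy, a virtual IP floats between them, and the router just forwards 443 to that virtual IP, so no special router support would be needed. I haven’t actually tried it, but my understanding is the config on the primary box looks roughly like this (interface name, IPs, and password are placeholders):

```
# /etc/keepalived/keepalived.conf on the primary proxy box
# (placeholders: eth0, 192.168.1.250, changeme)

# Health check: consider this box healthy only while Caddy is running
vrrp_script chk_caddy {
    script "/usr/bin/pgrep caddy"
    interval 2
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER            # the other box uses state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150            # backup box uses a lower value, e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.250/24    # router forwards 443 to this floating IP
    }
    track_script {
        chk_caddy
    }
}
```

As I understand it, the backup box runs the same config with state BACKUP and a lower priority; if the master stops advertising (or its Caddy process dies), the backup claims the virtual IP within a few seconds. Is that the right approach, or is there something better?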
I get it, and I’ve seen this response other places I’ve asked about this too. But a license agreement can just offer refunds for downtime, it doesn’t have to promise any specific amount of availability. For small, cheap, experimental subscription apps, that should be enough; it’s not like I’m planning on selling software to businesses or hosting anything that users would store critically important data in. The difference in cost between home servers and cloud hosting is MASSIVE. It’s the difference between being able to make a small profit on small monthly subscriptions, versus losing hundreds or thousands per month until subscriber numbers go up.
(also fwiw this entire plan is dependent on getting fiber internet, which should be available in my area soon; without fiber it would be impractical to run something like this from home)
This will blow up in your face. You know enough to be dangerous but not enough to know how hard uptime actually is.
AWS or Azure really isn’t that expensive if you are just running a VM with some containers. You don’t need to overthink it. Create a VM and spin up some docker containers.
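Something like this is all it takes on a single VM (the image names and the app service here are placeholders, not a specific recommendation):

```yaml
# docker-compose.yml on one cloud VM
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data           # persists TLS certs across restarts
  app:
    image: yourorg/yourapp:latest  # placeholder for the actual SaaS app
    restart: unless-stopped        # Caddy proxies to this by service name

volumes:
  caddy_data:
```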
You aren’t going to get high reliability unless you spend big time. Instead, could you just offer uptime during business hours? Maybe give yourself a window to do planned changes.
That’s not the point. It’s unprofessional. Someone is going to smash and grab OP’s idea and actually have the skills to host it properly. Probably at a fraction of the cost, because OP doesn’t understand that hosting SaaS products out of his house isn’t professional or effective.
Also, cloud is cheaper than self hosting at small scale. This wouldn’t cost much to run in AWS if built properly. The people who struggle with AWS costs are not professionals and have no business hosting anything.
This is so true. You can’t expect your home server to ever be comparable to enterprise setups. Companies that keep stuff on prem are still paying for redundant hardware and software, which takes money and skill to maintain.
I’ve done the on-prem design. I’ve migrated people entirely to the cloud. I’ve done a bit of everything in between.
Without a shred of doubt, the cloud is going to be more cost effective than self hosting for 99% of all use cases. They’re priced that way intentionally. You cannot compete with Cloudflare/AWS/GCP/Vultr/Akamai/Digital Ocean/etc.
My homelab isn’t about scaling or production workloads, and it definitely isn’t accessible to anyone but me. I’d argue using it in any other way defeats the purpose and shows a lack of understanding.
The cloud is cheaper for hosting things like websites that need HA. However, if you are doing big compute or storing lots of data, it will not be cheaper.