Take Ollama, for instance: either the whole model fits in VRAM and compute runs on the GPU, or it sits in system RAM and compute falls back to the CPU. Running models on CPU is painfully slow; you won't want to do it for large models.
LM Studio and others let you run part of the model on the GPU and part on the CPU, which splits the memory requirement, but it's still pretty slow.
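If you want to play with that split yourself, the knob is typically the number of transformer layers offloaded to the GPU. Here's a minimal sketch using llama-cpp-python (bindings for llama.cpp, the engine these tools build on); the model path and layer count are placeholders you'd tune to your own hardware:

```python
# Sketch of partial GPU offload with llama-cpp-python.
# Assumes you've downloaded a GGUF model file; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=20,  # offload the first 20 layers to VRAM; the rest stay on CPU
    n_ctx=2048,       # context window size
)

out = llm("Explain VRAM offloading in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Raising n_gpu_layers until you run out of VRAM is the usual approach; the more layers on the GPU, the faster it goes.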
Even the smaller 7B-parameter models run pretty slowly on CPU, and the huge models are orders of magnitude slower.
So technically more system RAM will let you run some larger models, but you'll quickly figure out you just don't want to do it.
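The back-of-envelope math shows why RAM alone doesn't save you: weight memory is roughly parameter count times bits per weight, divided by 8, plus some runtime overhead. A rough sketch (the 10% overhead factor is my own ballpark, not a measured figure):

```python
# Back-of-envelope memory estimate for a quantized model.
# These are rules of thumb, not exact figures.
def approx_model_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB: params * bits / 8, plus ~10% overhead."""
    return params_billions * bits_per_weight / 8 * 1.1

for params in (7, 13, 70):
    print(f"{params}B @ 4-bit ~ {approx_model_gb(params, 4):.1f} GB")
# ~3.9 GB  -- a 7B model fits on an 8 GB GPU
# ~7.2 GB  -- 13B is tight on 8 GB, fine on 12 GB
# ~38.5 GB -- 70B spills into system RAM on any consumer card
```

The 70B case is exactly the trap: 64 GB of system RAM will technically hold it, but every token then waits on the CPU.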
Yale Z-Wave locks work well and last a long time between battery replacements, and they can run off rechargeables. You can add them to Home Assistant, where they work with the Siri and Alexa integrations.
Had some Schlage locks that ran through batteries way too fast.
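For what it's worth, once a lock is paired with Home Assistant you can watch for that battery-drain problem programmatically. A small sketch against Home Assistant's REST API; the URL, token, and entity ID are placeholders for your own setup:

```python
# Poll a Z-Wave lock's battery level via the Home Assistant REST API.
import requests

HA_URL = "http://homeassistant.local:8123"       # hypothetical HA instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"           # create one under your HA user profile
ENTITY = "sensor.front_door_lock_battery_level"  # hypothetical battery sensor entity

resp = requests.get(
    f"{HA_URL}/api/states/{ENTITY}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(f"Battery: {resp.json()['state']}%")
```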