There are freely downloadable models with the censoring stripped out. They’re not as capable as ChatGPT, but they’re not terrible, and they’re improving quickly.
You can run some models (the right combination of smaller and/or quantized) on consumer laptop or desktop GPUs, and even more will run, if slowly, on CPU with plenty of RAM. But, yeah, beyond a certain point in model capability/performance, you’re going to be owning or renting datacenter GPUs.
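For a concrete sense of what "quantized on consumer hardware" looks like, here’s a minimal sketch using llama-cpp-python to load a quantized GGUF model and split it between GPU and CPU. The model path and layer count are placeholders, not a specific recommendation; tune them to whatever model and hardware you actually have:

```python
# Minimal sketch: run a quantized GGUF model locally with llama-cpp-python.
# Assumes you've already downloaded a quantized model file (path is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model-q4_k_m.gguf",  # placeholder: any quantized GGUF file
    n_ctx=4096,        # context window; larger costs more RAM/VRAM
    n_gpu_layers=20,   # layers offloaded to GPU; 0 = CPU-only, -1 = offload everything
)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```

Quantization (e.g. 4-bit) is what makes the weights fit in consumer VRAM or RAM in the first place, and `n_gpu_layers` is the knob that lets a model too big for your GPU alone still run by pushing the rest onto the CPU, at the cost of speed.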