I do the same. Fedora on my laptop because I want a balance of stability and having the newest features. Servers run Debian, because I don’t have time to fix and update things.
Logcheck. It took ages to make sure harmless log entries are ignored, but now I get an email as soon as anything non-routine happens on my servers. I get emails with logs from every update, every login, and so on. This has given me the most confidence that nothing unexpected is happening on my servers. Of course, you still need to make sure the firewall is configured well and that you use SSH keys, but logcheck is how I know I’m doing enough.
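For context on what that tuning looks like: logcheck reads one extended regex per line from files under /etc/logcheck/ignore.d.server/, and any log line matching a rule is suppressed from the emails. A sketch (the filename and the rule itself are just illustrative):

```
# /etc/logcheck/ignore.d.server/local-cron (illustrative)
# Drop routine cron session open/close messages for root
^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ CRON\[[[:digit:]]+\]: pam_unix\(cron:session\): session (opened|closed) for user root.*$
```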
How do you upload a snapshot?
Basically, as you said. Mount the data somewhere and back up its contents.
I back up snapshots rather than the live data, because I don’t want to stop the running containers that read from and write to that data. I’d rather avoid the situation where a container is writing while the backup runs. The backup happens shortly after the daily snapshot is made, so the difference between the current and snapshot data is small.
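For what it’s worth, with ZFS there isn’t even an explicit mount step: every dataset exposes its snapshots read-only under a hidden .zfs directory. A rough sketch (pool, dataset, and snapshot names are made up):

```sh
# Snapshot the dataset the containers write to (names illustrative)
zfs snapshot tank/appdata@daily

# The snapshot is browsable read-only under the dataset's hidden .zfs directory,
# even while containers keep writing to the live filesystem
ls /tank/appdata/.zfs/snapshot/daily/

# Point the backup tool at the snapshot path rather than the live data
restic backup /tank/appdata/.zfs/snapshot/daily/
```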
As others have said, with an incremental, filesystem-level mechanism, the backup process won’t be too taxing for the CPU. I have ZFS set up, which makes this easy: I make hourly snapshots using sanoid, which also get sent to another mirrored pair of connected drives using syncoid. Then, once a day, I upload encrypted daily snapshots to a bucket in the cloud using restic. It sounds complicated, but sanoid/syncoid and restic do all the heavy lifting. All I did was automate their schedules using systemd timers and write some scripts to back up the right directories.
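In case it helps anyone set up the same thing, the systemd side is just a oneshot service plus a timer; the unit names, schedule, and script path below are placeholders:

```ini
# /etc/systemd/system/restic-upload.service (illustrative)
[Unit]
Description=Upload encrypted daily snapshot with restic

[Service]
Type=oneshot
# Placeholder for the script that runs restic against the snapshot paths
ExecStart=/usr/local/bin/restic-upload.sh

# /etc/systemd/system/restic-upload.timer (illustrative)
[Unit]
Description=Run the restic upload once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now restic-upload.timer`; `Persistent=true` makes systemd catch up on a missed run after a reboot.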
It’s worth noting that you don’t even need to still have the Kindle device physically with you. I had to throw mine out (it was the original, first-ever model), but it’s still registered and its token is valid for Calibre’s DeDRM.
Looks perfect! Exactly what I was looking for. Thanks!
Very interesting project! However, I can’t shake the feeling that while you pitch it as a platform for sharing DRM-free games, it will get used for sharing games against the licenses and wishes of publishers. I don’t really care about the publishers, but don’t you think there’s a real risk that once your app gets enough attention, it will draw their ire and force you to shut down? Perhaps not directly, but e.g. by getting you removed from the Windows Store, etc.
For caching, are you sure you’re generating enough traffic to benefit from it? Besides, CDN caching’s strength only really comes into play when users are geographically distributed, which isn’t the case for most self-hosters.
For DDoS, check whether your VPS host offers DDoS protection; some include it for free. I’ve been monitoring my server traffic lately, and since I ditched Cloudflare, I haven’t needed DDoS protection.
You can still use Cloudflare DNS without redirecting traffic via their CDN. I do that.
The point about not revealing the IP address seems to be a personal one. I think it does matter if that IP address is your home’s, but not so much if it’s a VPS in some data center.
In the end, everything is a trade-off and everybody has a personal take on which trade-offs they want to make. When I was in a similar situation, I ditched CDN proxying via Cloudflare, though I still kept them for DNS.
My configuration and deployment is managed entirely via an Ansible playbook repository. In case of absolute disaster, I just have to redeploy the playbook. I do run all my stuff on top of mirrored drives so a single failure isn’t disastrous if I replace the drive quickly enough.
For when that’s not enough, the data itself is backed up hourly (via ZFS snapshots) to a spare pair of drives and nightly to S3 buckets in the cloud (via restic). Everything automated with systemd timers and some scripts. The configuration for these backups is part of the playbooks of course. I test the backups every 6 months by trying to reproduce all the services in a test VM. This has identified issues with my restoration procedure (mostly due to potential UID mismatches).
And yes, I have once been forced to reinstall from scratch, and I managed to do it rather quickly through a combination of playbooks and well-tested backups.
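For the curious, the semi-annual test itself doesn’t need more than something like this; the repository URL and restore target are placeholders:

```sh
# Sanity-check the repository (add --read-data to verify every pack, which is slower)
restic -r s3:s3.amazonaws.com/my-backup-bucket check

# Restore the latest snapshot into a scratch directory inside the test VM
restic -r s3:s3.amazonaws.com/my-backup-bucket restore latest --target /tmp/restore-test
```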
What benefit do you get from running a Cloudflare proxy if you’re pointing it at a VPS? I used to run with a Cloudflare proxy when my reverse proxy was hosted at home. Since then, I’ve moved it to a VPS and no longer use the Cloudflare proxy, because the only address I expose is the VPS’s, which is fine. Arguably Cloudflare provides you with DDoS protection, but that has so far never been a problem for me.
WireGuard easily supports dual-stack configuration on a single interface, but the VPN server must also have IPv6 enabled. I use AirVPN and get both IPv4 and IPv6 over a single WireGuard tunnel. In addition to the ::/0 route, you also need a static IPv6 address for the WireGuard interface; this address must be provided to you by ProtonVPN.
If that’s not possible, the only solution is to disable IPv6 entirely.
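To make the dual-stack point concrete, the client config simply lists both address families on the one interface. A sketch, with all keys, addresses, and the endpoint as placeholders:

```ini
# /etc/wireguard/wg0.conf (illustrative; keys and addresses are placeholders)
[Interface]
PrivateKey = <client-private-key>
# Static IPv4 and IPv6 addresses assigned by the provider
Address = 10.2.0.2/32, fd00:dead:beef::2/128

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820
# Route all IPv4 and all IPv6 traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```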
Most open-source VPN protocols, AFAIK, do not obfuscate what they are, because they’re not designed to work in the presence of a hostile network operator; they only encrypt the user data. That is, their headers identify them as such-and-such VPN protocol, while only the payload is encrypted.
You can open up Wireshark and see for yourself. Wireshark can very easily recognize and even filter WireGuard packets regardless of port number. I’ve used this to debug my firewall setups.
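If you want to reproduce this, Wireshark ships a WireGuard dissector, so the CLI version is a one-liner (the interface name is a placeholder):

```sh
# Capture live traffic and show only packets the dissector identifies as WireGuard,
# whatever port they arrive on
tshark -i eth0 -Y wireguard
```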
In the past when I needed a VPN in such a situation, I had to resort to a paid option where the VPN provider had their own protocol which did try to obfuscate the nature of the protocol.
Thanks for your reply! Out of curiosity, what made you go with Prometheus over zabbix and check_mk in the end? Those two seem to be heavily recommended.
Maintaining legacy options always adds overhead: they’re extra code to maintain and things to work around when implementing new features. I suspect they’ve concluded that not enough people use it anymore to justify that overhead.
Why not have the reverse proxy also handle renewal of the SMTP relay’s certificate and just rsync it over to the relay? For a while, I had one of my proxies do all the renewals while the other rsynced the certificates from it.
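Certbot supports this pattern directly through deploy hooks, which run only after a successful renewal. A sketch with made-up hostnames and paths, assuming the relay runs Postfix:

```sh
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/push-to-relay.sh (illustrative)
# -L dereferences the symlinks in live/ so real files land on the relay
rsync -aL /etc/letsencrypt/live/mail.example.net/ relay.example.net:/etc/smtp-certs/
# Reload the MTA so it picks up the new certificate (assumes Postfix)
ssh relay.example.net systemctl reload postfix
```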
I deploy as much as I possibly can via Ansible; the Ansible code then serves as the documentation. I also keep the underlying OS the same on all machines (they all run Debian) to avoid differing OS conventions. The few things I cannot express in Ansible, such as network topology, I document with a diagram in draw.io, but that’s it.
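To give a flavour of what “the playbook is the documentation” means, each service ends up as a small set of tasks like this (role, file, and handler names invented for the example):

```yaml
# roles/nginx/tasks/main.yml (illustrative)
- name: Install nginx
  ansible.builtin.apt:
    name: nginx
    state: present

- name: Deploy site configuration
  ansible.builtin.template:
    src: site.conf.j2
    dest: /etc/nginx/sites-available/site.conf
  notify: Reload nginx  # assumes a matching handler in the role
```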
Also, why not automate the certificate renewal with certbot? I have two reverse proxies and they renew their certificates themselves.
Plasma is amazing. It has been my DE of choice for years now. So happy I’m donating to the project.
That’s because podman-compose is not a goal of the project, IIRC, so it will never be feature-complete. They encourage using systemd or other tools to manage the pods; compose-style workflows just don’t seem to be the use case the project targets.
Edit: so if docker-compose is important to you, then yeah, stick to Docker. I moved to using systemd instead; Podman can generate the systemd unit files for you.
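For anyone going the same route: the generator writes ready-to-use unit files. The container name below is a placeholder (newer Podman versions push Quadlet for this instead):

```sh
# Emit a unit that recreates the container from scratch on each start (--new)
podman generate systemd --new --files --name mycontainer

# Install and run it as a user service
mv container-mycontainer.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service
```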
It doesn’t seem like you engaged with the arguments presented in the article. It isn’t about anyone being offended by left- or right-wing politics; it’s that women engineers and scientists were uncomfortable with it for a variety of reasons. In a field that struggles to attract and keep female talent, that’s a pretty big deal. The model herself spoke out and asked to be “retired from tech”.