Digest: r/selfhosted

ID Type Limit Status Last Update Next Update
digest-selfhosted digest 8 Enabled 5 days ago tomorrow

Posts (8)

Digest: r/selfhosted: May 01 - May 08, 2026

Published: 5 days ago | Author: System

She may come to regret asking.

image

Buckle up sis, that's just the top 10% of the iceberg.

⬆️ 997 points | 💬 65 comments


Patch your servers, peeps, new Linux kernel vulnerability just dropped

CopyFail just dropped: a new Linux kernel vulnerability that gives attackers root privileges. https://arstechnica.com/security/2026/04/as-the-most-severe-linux-threat-in-years-surfaces-the-world-scrambles/

Debian has an updated kernel, Proxmox too. Looks like Raspberry Pi hasn't released an updated version yet.
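Checking whether a host is already on a patched kernel boils down to comparing version tuples. A minimal sketch — the "6.1.0" threshold in the usage note below is a placeholder, not the actual fixed kernel, which varies by distro:

```python
import re

def kernel_tuple(release: str) -> tuple:
    """Parse a kernel release string like '6.1.0-28-amd64' into (6, 1, 0)."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError(f"unrecognized kernel release: {release!r}")
    return tuple(int(g or 0) for g in m.groups())

def kernel_at_least(current: str, required: str) -> bool:
    """True if the running kernel is at or above the required version."""
    return kernel_tuple(current) >= kernel_tuple(required)
```

On a live box you'd feed in `platform.release()` (or the output of `uname -r`) as `current`, and the patched kernel version from your distro's advisory as `required`.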

⬆️ 366 points | 💬 162 comments


A homepage dashboard I'm finally happy with.

image

⬆️ 399 points | 💬 42 comments


Vaultwarden 1.36.0 patches vulnerabilities

https://github.com/dani-garcia/vaultwarden/releases/tag/1.36.0

Security fixes

This release contains security fixes for the following advisories. We strongly advise updating as soon as possible.

SSO Login CSRF - GHSA-pfp2-jhgq-6hg5, GHSA-w6h6-8r66-hcv7
User/Organization Enumeration - GHSA-hxqh-ff5p-wfr3
SSO existing-user binding - GHSA-j4j8-gpvj-7fqr
GHSA-6x5c-84vm-5j56
SSRF via Icon Endpoint - GHSA-72vh-x5jq-m82g
Some crates updated and other minor security enhancements

These are private for now, pending CVE assignment.


⬆️ 254 points | 💬 14 comments


3-2-1 rule, how are you all doing it without breaking the bank?

So my NAS is slowly getting big now, with around 8TB of data.

I run it in RAID 1, but for the worst-case scenario I'd also like to have an off-site backup. But obviously 8TB+ in the cloud is going to be expensive, no?

How are you guys storing your offline backup? And where do you guys store it?

⬆️ 264 points | 💬 196 comments


PSA for anyone not using LXCs on Proxmox

The Point: Holy shit LXCs are so cool and felt like black magic getting "free" RAM back. If you're newer, like me, and have just been using VMs instead of LXCs, you should look at changing that.

I started my server back in November knowing absolutely nothing about using Linux, using CLI, or Docker. At the same time, I also went in raw, jumping straight into Proxmox on three nodes. As a result, I ended up using a lot of the Proxmox VE Helper Scripts for initial setup and have since gone back and learned how to do a lot of things myself. One of the hugely inefficient decisions I made at the time was to use a VM for Docker instead of an LXC.

For context, two of my nodes are running an i3-5005U and 8GB of soldered DDR3 RAM. One of those machines was exclusively running a VM for Docker containers, largely centered around downloads. On average, I was hitting ~30-50% CPU on the PVE host and ~7GB RAM usage.

Switching to an LXC has brought that down to 10-25% CPU and ~2-2.5GB RAM usage. A machine that felt like it was at its limit suddenly gained immense amounts of headroom.

Just wanted to put this out there for anyone procrastinating switching some VMs to LXCs. In my case, it was worth the relatively low amount of effort to free up such a significant amount of resources.

⬆️ 251 points | 💬 83 comments


Appreciation post: Tailscale and Headscale

These two are the most incredible technologies on the modern Internet. The Web is finally free and open again, just as Tim Berners-Lee intended it so many decades ago at CERN. People are finally taking the Web back from corporations, and it is amazing to see. Tailscale is going to be the biggest tech company in the world by the next decade, and the GTA will overtake the Bay Area as the world's tech capital.

⬆️ 232 points | 💬 75 comments


Digest: r/selfhosted: Apr 24 - May 01, 2026

Published: 1 week ago | Author: System

Glance Dashboard V.2 | GA

https://www.reddit.com/gallery/1swsb7o

After a lot of trial & error (and a few docker restart moments 😅), I finally got my dashboard where I want it:

  • Full monitoring (Docker, services, network)
  • Tailscale + WireGuard integration
  • Custom API widgets (live stats & device tracking)
  • Home Assistant + automation layer
  • Custom themes & UI tweaks

All running on a Raspberry Pi 5 with a clean and optimized Docker stack.

Still a work in progress (because let’s be honest… a homelab is never “finished”), but it’s already my daily control center.

What would you add next? Any ideas for the next upgrade?

⬆️ 333 points | 💬 37 comments


Hound - A Media Server Alternative to Plex/Jellyfin + Stremio

image

What is Hound?

Hound is a self-hosted, open-source media server, like Plex/Jellyfin, but with the extra ability to stream content through P2P (torrent) or HTTP/Debrid without downloading first. With Hound, you have the flexibility of fully controlling your media like Jellyfin, but you can also stream instantly, à la streaming services. It's the best of both worlds.

I posted about Hound in this sub years ago, when it was originally built as a simple movie/tvshow tracker. Since then Hound has evolved into a full media server. Link.

Features

  • Free-range, organic code, written by a person
  • Stream your own content from your drives, or stream content directly from P2P (torrent) and HTTP/Debrid sources through Stremio addons
  • Download content to your drives directly from the Hound Web portal
  • Very simple to deploy, <10 mins before you start watching content
  • Hound was originally built as a media tracker, so it has robust features such as collections, reviews, comments, watch history/activity. All your watches and rewatches are automatically tracked
  • UI/UX is a core focus, designed with your mom using this in mind
  • No telemetry

Demo

Note that the web portal isn't optimized for mobile yet:

Access the demo here.

username: selfhosted
password: password

This is just the web portal; for actually watching content you'll want to use the apps.

Platforms

Android and Android TV apps are available, you'll need to sideload the APKs. iOS and tvOS require a bit more time for testing and to distribute through TestFlight. They share the same code (built on React Native TVOS) so most of the effort is done.

Installation

Docker compose is the recommended way to install Hound:

services:
  hound-postgres:
    container_name: hound-postgres
    image: postgres:18
    environment:
      POSTGRES_DB: hound_db
      POSTGRES_USER: hound
      POSTGRES_PASSWORD: super-strong-password
    volumes:
      - ./Hound Data/postgres_data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hound -d hound_db"]
      interval: 5s
      timeout: 5s
      retries: 5

  hound-server:
    container_name: hound-server
    image: houndmediaserver/hound:latest
    depends_on:
      hound-postgres:
        condition: service_healthy
    ports:
      - "2323:2323"
    environment:
      - POSTGRES_DB=hound_db
      - POSTGRES_USER=hound
      - POSTGRES_PASSWORD=super-strong-password
      - HOUND_SECRET=super-strong-secret
    volumes:
      - ./Hound Data:/app/Hound Data
      # (Optional) attach your media library
      # IMPORTANT: Please read the docs before doing this
      # - /path/to/movies:/app/External Library/Movies
      # - /path/to/shows:/app/External Library/TV Shows

  • Change POSTGRES_PASSWORD on both hound-postgres and hound-server services
  • Change HOUND_SECRET

Then run docker compose up -d

Access the web portal at port 2323:

http://<ip-address>:2323
username: admin
password: password

Make sure you change your password immediately.

Next, you'll want to set up a provider to start watching content; refer to the guides below:

Why Hound?

When I set up Jellyfin for my friends and family, I found that they kept switching back to Netflix/Prime when it was more convenient. Today, the Plex/Jellyfin ecosystem is quite mature. But for some (especially older) people, using a separate app, requesting content first, and waiting a couple minutes (or even longer) can be unintuitive/inconvenient. It's much nicer to be able to scroll and discover content, and watch immediately in seconds.

From an admin perspective, drives are getting increasingly expensive, and larger libraries drive electricity costs up even more.

My vision for Hound was to have all the advantages of self-hosting media, with the flexibility of streaming. You can still curate a library with whatever content you like, but for content not yet downloaded in your library, Hound switches automatically to P2P/Debrid streaming, so it's a seamless experience for users.

Hound is in Beta + Pricing

Hound is in Beta, so please expect bugs and run backups often. Although Hound is completely self-hosted and open source (AGPLv3), there will be a paid tier when Hound leaves beta:

  • Hound is completely free, all features unlocked for one user
  • A paid license will be required to unlock unlimited users
  • No subscription, one-time purchase at a reasonable price
  • License activation is completely offline

Unfortunately, unlike the amazing maintainers at Jellyfin, I can't keep Hound free. I thought long and hard about pricing that respects self-hosting and open source philosophies. I settled on this model so anyone can try Hound and all its features for free, and have an informed choice on whether or not to purchase.

Since Hound is completely open-source, I can't stop you from forking and removing the license checks. Instead of doing this, if you contribute to Hound's development actively, I'll give you keys upon release.

You can't actually purchase yet since we're in Beta, but I wanted you to know in advance.

Please try the demo and leave feedback! If you like the project, please consider adding Hound to your stack, and even contributing!

⬆️ 228 points | 💬 96 comments


MinIO repository was archived on Apr 25, 2026

image

Just learned about S3-style object storage and was looking into self-hosted options for my homelab. Came across MinIO and got pretty excited because it seemed like exactly the kind of thing I’d want to learn and maybe use.

Then I noticed the repo is archived, which was a bit discouraging.

I know that doesn’t necessarily mean the software is dead, but it made me pause before building around it.

For those using MinIO, would you still adopt it today for a homelab? Or would you look at alternatives instead?

Curious what people here are doing.

⬆️ 310 points | 💬 62 comments


Responsibility and Ownership: You Can’t Vibe‑Code Your Way Around It

The title took me a while to land on, but the thoughts behind it have been sitting in my head for months. I've been into homelabbing since early 2020 with my first build, then a second, then a third, then whatever my bank account allowed after that. It's been a lot of fun and tears. But lately browsing this community has had an edge to it, a lot of AI negativity that I mostly understand, and that's what I want to write about.

I'm a programmer by trade; I was in the army before, but I was released and went back to school. The usage of AI at work has increased and I don't see that trend stopping quite yet. AI is useful as a companion for tedious tasks like documentation, reviewing SQL, tedious front-end markup, one-shot scripts, etc. But using it to one-shot a whole application is risky, and if the result is published, downright irresponsible. This is where I think most of the friction is happening, at least for me.

When I see the AI projects posted here, with my experience I think I can separate the wholly vibe-coded ones from those where AI was used to assist. The latter I don't mind; despite what some Luddites say, that's what the industry is like now. When you code something for your own use, the blast radius is limited: the thing could run horribly and it won't matter, since you are the only one who suffers the consequences. But if you publish this code, you need to take ownership of it, and ownership brings responsibilities that you need to shoulder. Even as a programmer I don't take this lightly; it is not something people should dismiss with the command git push origin main.

It's one of the reasons I don't publish my stuff, or at the very least don't advertise it. Not because it's vibe coded, it isn't; it's because I would still need to take responsibility for it, and that's time, effort, and commitment that shouldn't be underestimated (many seem to). Maintenance is not a trivial affair: thinking about current and future users, how you approach breaking changes, how you architect things to avoid breaking changes as much as possible. Continuity of the project also matters. If you take your project and your user base seriously, you should keep this in mind: "What if I can't continue the project?" Archiving the repo and disappearing is not the right way to do things.

So, before publishing and parading your project, you just need to ask yourself a simple question: "Can I take ownership and responsibility for this code?" The answer will depend on your definitions of these concepts, but if you think about it for more than 5 minutes, you might just realise your project should stay private.

PS: When I talk about responsibility and ownership here, I mean more the moral/ethical implications; you are obviously responsible for what runs on your machine. Excuse some awkward syntax or phrases, non-native English speaker.

⬆️ 230 points | 💬 87 comments


My self-hosted website now runs on my Pi Zero 2W

image

So I have been working on my personal portfolio website for some time now, but I had since forgotten about it and had no motivation to expand it. I had also been looking for the perfect use case for my Pi Zero 2W after moving my Pi-hole server from it onto my new Pi 5.

I then saw this post: https://www.reddit.com/r/selfhosted/comments/1sqvujn/selfhosted_public_website_running_on_a_10_esp32/

And it honestly got me excited to update my site and move everything over to the Pi. It is certainly much easier to run my site going from a measly 512KB of RAM to just about 512MB.

The site is finally in a state where I feel comfortable sharing it, and I hope you guys like the aesthetic. There is a guestbook to sign as well :)

Site:
https://spellbound.sh

⬆️ 220 points | 💬 19 comments


I came to realize that selfhosted forums are an essential part towards digital sovereignty

Hey, here's the HortusFox dev again.

I got inspired by Dan Brown's decision to abandon Discord for a hosted Zulip instance. And then it hit me...

Back in the day, software projects had a website, documentation and forum. Some had, in addition, an IRC channel somewhere. This just worked. It was an amazing way to foster community and keep control over your data.

So, today I was very unhappy regarding enshittification again. I mean, we used to have soooo many platforms and sites back in the day. Now everything takes place on a handful of platforms. Internet monopolization by corporations. I know, this is no recent news. We all know that.

I believe forums may be a key aspect of regaining digital sovereignty. That's why I've decided to set up a forum infrastructure for HortusFox. While tinkering around, I eventually decided to go with Flarum, simply because it's easy to install, uses the well-established Laravel framework, and I like its style out of the box without any additional extensions installed.

The selfhosted community is one of the most aware communities when it comes to data protection and digital sovereignty. I love that! That's why I once again decided to post here. ❤️

As for me, I am now going through the process of migrating from Discord to Flarum. I mean, Discord feels great and offers many features, but it's ultimately centralized, its communities are closed off in terms of SEO, and its recent decisions around age verification are concerning. The latter is also the reason I finally abandoned publishing Play Store apps three years ago and went fully PWA. The Microsoft Store does the same now (removed the sign-up fee in favor of ID verification).

Maybe I'm getting a bit carried away, but imagine if even the Reddit communities such as r/opensource or r/selfhosted abandoned Reddit in favor of forum-based communities run by volunteers? Reddit is not our friend, and its various decisions to wipe out third-party apps and push echo chambers aren't really something I consider "the heart of the internet". By the way, did you notice Reddit is now testing forcing people to use the mobile app when they browse via a mobile browser? Pretty sure they will eventually roll out this "feature".

What do you think? Both developers and selfhosters, would you like the idea that we turn back to forums again?

PS: HortusFox now also officially backs the open-source petition to have the German government acknowledge open-source work as volunteering by law. A big thanks to Boris Hinzer for launching the campaign.

⬆️ 203 points | 💬 43 comments


PSA: if you’re running iSponsorblockTV you’ll need to pair your devices again

Hi there, I’m iSponsorblockTV’s maintainer.

If you’re running iSponsorblockTV, you’ll need to pair your devices again, since YouTube has changed the screenId format and is in the process of revoking all older codes.

For those of you that don’t know, iSponsorblockTV allows you to use SponsorBlock on all YouTube TV devices (TVs, sticks and consoles). It can also click the skip button for you and mute native YouTube ads.

Sadly there’s nothing that can be done on my part other than pairing devices again.

EDIT: the new screen ID will be 64 hex digits long, compared to the old 26 characters.
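For anyone scripting around this, the format change described above is easy to detect. A rough sketch based only on the lengths given in the post (not on iSponsorblockTV's actual code):

```python
import re

# Per the post: new YouTube screen IDs are 64 hex digits, old ones were 26 characters.
NEW_SCREEN_ID = re.compile(r"^[0-9a-f]{64}$", re.IGNORECASE)

def screen_id_format(screen_id: str) -> str:
    """Classify a stored screenId as 'new', 'old', or 'unknown'."""
    if NEW_SCREEN_ID.match(screen_id):
        return "new"
    if len(screen_id) == 26:
        return "old"
    return "unknown"
```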

⬆️ 197 points | 💬 18 comments


My setup

image

This is my setup. The image was made by AI, but it overall looks like this. There is no connection between the Proxmox host and the media, but Proxmox uses my TrueNAS storage (16TB). I removed everything. Nginx isn't connected anymore. Everything is LAN. Started homelabbing in Feb with no background.

Watched a lot of videos and read too many posts on here. I run apps I vibe code for personal use.

⬆️ 200 points | 💬 42 comments


Digest: r/selfhosted: Apr 17 - Apr 24, 2026

Published: 2 weeks ago | Author: System

Self-hosted public website running on a $10 ESP32 on my wall

image

My homelab does have the usual rack of stuff (Dell Poweredge R730s and ECU servers), but this one ESP32 sits separately on the wall and serves a public website entirely by itself. No nginx or apache, no Pi, no container... just a $10 microcontroller holding an outbound WebSocket to a Cloudflare Worker that fronts the traffic.

The original launch of this back in 2022 ran for ~500 days before the original board burned out in 2023. The site sat as a read-only archive until now. I relaunched it after rebuilding it from the ground up with a lot of redundancy in mind, such as a Worker relay, daily off-site backups to R2, and more; check out the project's README.

Site: https://helloesp.com

Code: https://github.com/Tech1k/helloesp

---

Update: So slight miscalculation on how popular this was going to get, this was a good stress test of the ESP to say the least. The hug of death hit way harder than I anticipated lol

I believe the ESP32 has fully crashed or it's exhausting heap in a loop. It's not even showing up on my router now. The Cloudflare Worker is still serving the offline page in the meantime which is expected. Probably not the best idea to have made this post while I was at work and away from it. I will reboot and investigate this when I'm home and make adequate changes to get it back online and stable!

Update to the update: it has risen from the cold grasp of offline darkness and reconnected as the WiFi watchdog kicked in and rebooted it automatically. Requests are getting served again and I managed to regain access to it on LAN. Cloudflare is back to showing timeouts for some while others get through (expected behavior). I may lower the SSE cap and raise the min heap threshold. It's back to just getting overloaded at the moment. I will investigate further and see what I can make changes on later to help keep it afloat and serve more requests on 520KB of ram lol

⬆️ 467 points | 💬 35 comments


Bitwarden CLI has been compromised. Check your stuff.

https://socket.dev/blog/bitwarden-cli-compromised

Same as the title. The Bitwarden CLI has been compromised and it would be good to check your stuff. I know how popular Bitwarden is around here.

⬆️ 723 points | 💬 152 comments


Migrated a client off shared hosting to a VPS last week, the difference was embarrassing

so i've been telling this client for 2 years their site was slow because of shared hosting
they finally listened after a competitor started ranking above them on google

moved them to a KVM VPS, same wordpress stack, nothing else changed
page load went from 3.2 seconds to 0.9 seconds. that's it. that's the whole story

the amount of money they lost over 2 years because they didn't want to spend an extra 15€ a month is genuinely painful to think about

if your site is on shared hosting and you're wondering why it feels slow, it's that. it's always that

⬆️ 338 points | 💬 32 comments


Would you go back to using forums?

Something that’s really bothered me in the last few years is how much we’ve allowed information to be gatekept by Discord, especially in the selfhosting scene.
10 years ago, if you ran into a problem installing software, you could just go to the dev’s forums and look for someone who had already solved the problem.
Nowadays you have to join a Discord server, use their shitty search bar, not find what you want, and ask a burnt-out dev who has already given the same answer a million times.

From this observation, I’m wondering: would you use an open-source forum solution you can deploy in seconds, ready to use out of the box?

I already built an MVP of something like that mainly as an addition to my portfolio, but I’m now wondering if I should bother packaging it into a “one-click” deployment to be used by other people.

The concept is a minimalist & modern app to be used for small communities, events, or even friends & family. It is completely usable out of the box, yet still really customizable, with a nice search bar to actually find the stuff you want.

I’m not selling anything, I genuinely want to know if the data gatekeeping is a concern for you too, and if you would be interested in a solution like this for your own needs.

(also there is no vibecoding here, it’s a legit project for me to learn and develop my dev career)

⬆️ 351 points | 💬 204 comments


Beyond the Basics: What are your non-negotiable Linux server hardening steps before exposing a service to the web?

Most of us start by slapping a reverse proxy (like Nginx Proxy Manager or Traefik) and maybe Tailscale or WireGuard on our setups. But for those of you exposing specific services directly to the web, how far do you take your server hardening?

I usually stick to a strict baseline (Fail2Ban/Crowdsec, UFW, disabling root SSH, key-only auth, and isolating apps in Docker containers), but I’m curious about the more advanced layers. Are any of you actively running SOC-level monitoring, Wazuh, or strict SELinux/AppArmor profiles on your homelabs?

What is the one security measure you think the average self-hoster overlooks until it's too late?
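Two of the baseline items above (no root SSH, key-only auth) are easy to sanity-check mechanically. A naive sketch that scans an sshd_config-style text; real configs also have Match blocks and Include directives, so treat this as illustrative only:

```python
def sshd_hardening_findings(config_text: str) -> list:
    """Flag two baseline issues from the post: root login and password auth.

    Very naive sshd_config parsing: comments are stripped and the last
    occurrence of a directive wins. Not a substitute for `sshd -T`.
    """
    wanted = {"permitrootlogin": "no", "passwordauthentication": "no"}
    seen = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() in wanted:
            seen[parts[0].lower()] = parts[1].strip().lower()
    findings = []
    for key, expected in wanted.items():
        if seen.get(key) != expected:
            findings.append(f"{key} should be '{expected}', got {seen.get(key)!r}")
    return findings
```

An empty result means both directives are set the way the baseline expects.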

⬆️ 364 points | 💬 187 comments


LubeLogger, Self-Hosted Vehicle Maintenance and Fuel Mileage Tracker, has some Important Quality of Life Improvements You Should Know About

Hi all, it's been a few months and we've made some incremental updates to LubeLogger over that time.

In case you've never heard of LubeLogger, it's a self-hosted vehicle maintenance and fuel mileage tracker; you can log your service records and fill-ups here and it will tell you exactly how much you've spent on your vehicles.

Website

Documentation

Git Repository

First, as stated in our previous post here with the big UI update, we were going to start converting the grids in mobile views to cards, which makes it a lot easier to see all the data without horizontal scrolling on small vertical screens, and that has finally been delivered. If you prefer the older grid view on mobile, there is an option to revert on the Settings page.

https://preview.redd.it/13txlwifkkwg1.png?width=800&format=png&auto=webp&s=74c3eae6a1750460529764ff9fa047c0ceeab0b7

Second, there are now real-time notifications built into the app. If you follow us on the r/lubelogger subreddit, you might have heard of a daemon service that needed to be deployed separately; that's no longer the case, as we have integrated the daemon's features into the LubeLogger app itself. Real-time notifications let you be notified immediately when a reminder's urgency changes to one you're tracking (i.e., a reminder went from Not Urgent to Urgent). They can be integrated with nearly every notification service out there, as long as it accepts an HTTP POST request (there are samples for NTFY, Gotify, and Discord in the Documentation). If you don't wish to use an external notification service, it can also be configured to use the pre-existing SMTP settings.
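As an illustration of "any service that takes an HTTP POST": here is what building such a request for an ntfy-style endpoint looks like in plain Python. The topic and message values are made up, and LubeLogger's own payload templates may differ:

```python
from urllib.request import Request

def build_notification_request(base_url: str, topic: str,
                               title: str, message: str) -> Request:
    """Build (but don't send) an ntfy-style HTTP POST notification."""
    return Request(
        url=f"{base_url.rstrip('/')}/{topic}",
        data=message.encode("utf-8"),  # the body is the message text
        headers={"Title": title,
                 "Content-Type": "text/plain; charset=utf-8"},
        method="POST",
    )
```

Sending it is then a single `urllib.request.urlopen(req)` call.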

Video Walkthrough

Documentation

As part of this, there are also Automated Events that you can now configure, some examples of what you can do with Automated Events:

  • Send an email to vehicle collaborators at a fixed time every day containing a list of all reminders in specific urgencies (even if their urgency hasn't changed)
  • Create a backup and send it in an email to the root user at a fixed time every day
  • Clean up temp folders or unlinked documents and vehicle thumbnails at a fixed time every day

Here's what the automated backup email looks like:

https://preview.redd.it/q4mgykzzmkwg1.png?width=1363&format=png&auto=webp&s=1175e815a0ff23837cf3ed7192087fcb83c6c39c

Third, there is now a smoother way to onboard OIDC users with SSO-specific registration options

Documentation

Misc. Improvements:

CSVs are now validated before any imports are performed, and it will tell you what went wrong / was formatted incorrectly:

https://preview.redd.it/k0okuk9unkwg1.png?width=525&format=png&auto=webp&s=ef159f8174acd22b83a9f1814127d2d16c0a5ae3
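In the same spirit as the validator described above, a toy pre-import check might look like this. The column names here are hypothetical, not LubeLogger's real import schema:

```python
import csv
import io

# Hypothetical required columns -- LubeLogger's actual schema may differ.
REQUIRED = ["date", "odometer", "description", "cost"]

def validate_csv(text: str) -> list:
    """Return a list of human-readable problems; an empty list means importable."""
    reader = csv.DictReader(io.StringIO(text))
    missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
    if missing:
        return [f"missing column(s): {', '.join(missing)}"]
    problems = []
    for lineno, row in enumerate(reader, start=2):  # header is line 1
        if not row["date"].strip():
            problems.append(f"line {lineno}: empty date")
        try:
            float(row["cost"])
        except (ValueError, TypeError):
            problems.append(f"line {lineno}: cost {row['cost']!r} is not a number")
    return problems
```

Reporting every problem at once, instead of failing on the first bad row, is what makes this kind of validation pleasant to use.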

You can now add multiple recurring reminders to Plan Records, and you can modify which reminders are tied to these plan records all the way up until the plan is marked as done.

https://preview.redd.it/04ptjed3okwg1.png?width=421&format=png&auto=webp&s=6e521ee9c1226a22f44ee2426b25c59ffea8b378

On that note, there are now QR codes that you can generate that either take you to a specific record or add a new record:

Video Walkthrough

If you want real-time events coming from LubeLogger but you don't want a webhook integration, you can now use WebSockets, which work on a pub/sub model.

Documentation

Anyways, that's it from us for this update, have a great Summer and we'll see you in Fall.

⬆️ 339 points | 💬 47 comments


My lab domain got added to a DNS blocklist and broke my whole setup.

I set up the Hagezi ultimate adblock list in Pi-hole a few months ago and didn't think much of it after that. Today I am chilling and trying to avoid working too much on a Friday afternoon when I get an alert from Uptime Kuma that my nginx-proxy-manager stopped responding.

I check the Docker container first; everything is green and the logs look fine. Weird, but let's restart it just to be sure. No change. Hmmm, well, I can access the demo page at the direct IP, so maybe it's not this; let's check the DNS resolution.

> nslookup proxy.homelab.com
Server:         10.0.1.66
Address:        10.0.1.66#53

Name:   proxy.homelab.com
Address: 0.0.0.0
Name:   proxy.homelab.com
Address: ::

Odd, that should be resolving to the 10.0.1.66 server, not 0.0.0.0; I wonder what changed. I dig around in the Pi-hole logs for a bit and discover that my domain was actually added to the official blocklist. I am not really sure how, since my public footprint is minimal, gets virtually zero traffic except for some bots to the root domain, and definitely doesn't serve ads. Either way, I was able to look up the commands to whitelist my domain in Pi-hole and bam, everything was back to normal.
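The tell-tale here is every answer coming back as a null or loopback sentinel. A small sketch of the same check in script form; the sentinel list reflects common blocklist conventions, not anything Pi-hole-specific:

```python
# Addresses that DNS blocklists typically answer with.
BLOCK_SENTINELS = {"0.0.0.0", "::", "127.0.0.1", "::1"}

def looks_blocklisted(answers) -> bool:
    """True if every resolved address is a typical blocklist sentinel."""
    answers = list(answers)
    return bool(answers) and all(a in BLOCK_SENTINELS for a in answers)
```

Fed with the answers from the nslookup above, `looks_blocklisted(["0.0.0.0", "::"])` would have flagged the domain.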

Just some friday fun.

⬆️ 333 points | 💬 54 comments


Self-hosted personal finance automation: n8n + Actual Budget + SimpleFIN + Claude on my homelab.

Sharing something I've been running for a few months that's become one of the most useful things on my homelab.

The stack:

  • Actual Budget (self-hosted, Docker)
  • actual-auto-sync bridge for SimpleFIN bank sync
  • n8n (self-hosted) as the automation backbone
  • Claude Haiku via Anthropic API for AI categorization (~$0.01/100 transactions)
  • Telegram for notifications
  • Notion for rule logging (optional)

What it does:

Six n8n workflows that run on schedules and replace what I used to do manually every week:

  • Auto-categorizer: Fetches uncategorized transactions every 4 hours, sends to Claude with my full category list as context, applies the category if confidence ≥ 85%, creates a permanent payee rule so that merchant never hits the API again. Flags low-confidence items via Telegram.
  • Monthly envelope funder: Fires on the 1st, funds every budget category from a template I configured once. Fixed amounts first, remainder goes to debt payoff.
  • Sunday briefing: Claude reads my month-to-date budget and sends a plain-English summary — what's over, what's under, one focus for the week.
  • Friday paycheck check: Detects paycheck deposits, sends budget snapshot.
  • Rule digest: Monthly analysis of spending patterns using Claude, logs suggestions for new categorization rules.
  • Discovery: One-time run that prints all your Actual Budget account/category IDs. Saves significant setup time.
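The confidence gate in the first workflow can be sketched as a few lines of logic. Names and the `(category, confidence)` shape are illustrative; the real workflow lives in n8n nodes:

```python
CONFIDENCE_THRESHOLD = 0.85  # matches the 85% cutoff described above

def route_transaction(payee: str, suggestion: tuple, rules: dict):
    """Apply the suggested category if confident; otherwise flag for review.

    `rules` maps payee -> category, mimicking the permanent payee rules the
    post describes, so a known merchant never needs another API call.
    """
    if payee in rules:
        return ("apply", rules[payee])
    category, confidence = suggestion
    if confidence >= CONFIDENCE_THRESHOLD:
        rules[payee] = category  # create the permanent payee rule
        return ("apply", category)
    return ("flag", category)    # would go to Telegram for manual review
```

The permanent-rule cache is what keeps the API cost near the ~$0.01/100 transactions figure: each merchant only ever hits the model once.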

Architecture notes:

  • All credentials are in n8n's native credential store (Anthropic, Notion, Telegram API types) — nothing hardcoded
  • Bridge key uses Custom Auth credential type
  • Telegram nodes use n8n's native Telegram integration
  • Config node at the top of each workflow — one place to edit, everything else references it

The stack runs entirely on self-hosted n8n. No recurring SaaS costs beyond SimpleFIN (~$15/year) and Anthropic API calls (~$0.01/100 transactions). Everything else runs on your own infrastructure.

https://github.com/hail2victors/n8n-Actual-Automation

⬆️ 307 points | 💬 106 comments


Digest: r/selfhosted: Apr 10 - Apr 17, 2026

Published: 3 weeks ago | Author: System

trust me, bro

image

⬆️ 1,167 points | 💬 29 comments


Must be nice

image

⬆️ 417 points | 💬 75 comments


Borg UI just hit 1,000+ stars and 2.0 is here - Web interface for BorgBackup

https://www.reddit.com/gallery/1sie9qy

When I started Borg UI, it was a personal problem. I needed a reliable way to back up my Immich photo library. I knew how critical backups were, and I wanted something I could actually trust. Four months, 1,100 stars, and 150k Docker pulls later, here we are.

Thank you. Genuinely. Every star, issue, and kind word kept this going. ❤️

What's been happening under the hood

Over the past few months I've closed 250+ issues, pushed combined test coverage to 64% (backend 58%, frontend 81%), and built out smoke tests, integration tests, and unit tests across the stack.
I have 10+ years of software development experience and code quality matters deeply to me. AI helped me move faster on glue code and boilerplate, but every critical path has been manually tested and reviewed. This tool runs on production data. I treat it that way.

Introducing Borg UI 2.0

  • BorgBackup 2 beta support - experiment with the next generation early
  • Fully responsive UI - mobile-friendly across all screens
  • RBAC - role-based access control for teams and enterprise setups
  • Refreshed dashboard - rich view of repo health, schedules, backup activity, and storage
  • Localisation - English, German, Spanish, and Italian
  • OTA announcements - stay informed about updates in-app
  • Hetzner Storage Box support - first-class integration for Hetzner users
  • Theme switching - light and dark mode out of the box
  • Cleaner codebase - DRY principles, single source of truth, hard separation between v1 and v2 logic, well-tested throughout

Website: borgui.com
Docs: docs.borgui.com
Github: https://github.com/karanhudia/borg-ui
Old Post: https://www.reddit.com/r/selfhosted/comments/1p5fg68/borg_ui_web_interface_for_borgbackup_for_your/

If Borg UI has been useful to you, a star ⭐ on GitHub goes a long way. And if you have feedback, ideas, or just want to say hi, I'm here.

Thanks for being part of this.

CONTEXT (FROM OLD POST): I had been using BorgBackup via the command line for a while to create backups of my Immich library (a self-hosted photo management tool). Continuously monitoring and maintaining it, whether creating a backup, scheduling, or restoring, felt very tedious, especially over SSH. I have Docker containers for everything else, so I thought: why don't I put together a web UI that makes it easier to manage?

It runs as a Docker container (no config needed).
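For anyone curious what a launch like that might look like, here is a hypothetical single-container invocation. The image name, port, and volume paths below are illustrative placeholders, not taken from the project; check the README for the real command.

```shell
# Hypothetical launch of Borg UI as a single container.
# Image name, port, and volume paths are placeholders; consult
# the project README for the actual invocation.
docker run -d \
  --name borg-ui \
  -p 8080:8080 \
  -v /srv/borg-repos:/repos \
  -v /srv/borg-ui-data:/data \
  karanhudia/borg-ui:latest
```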

⬆️ 433 points | 💬 104 comments


Built a homelab from old, forgotten hardware during recovery. This is where it ended up (4–5 months later)

https://www.reddit.com/gallery/1sitp0y

Hi. So... a little while back I lost my job and then shortly after had a pretty serious injury that required surgery and months of recovery.

During that time, a family member asked if I could help go through decades worth of old hardware they had lying around and figure out what to keep, what to donate, and what to get rid of. It gave me something to focus on.

What started as simple inventory work slowly turned into something else.

I began trying to revive old machines: figuring out what each one was still capable of, what its "best use" might be, and how far I could realistically push it. Along the way I found a bunch of Raspberry Pis doing nothing, old laptops collecting dust, old hard drives that were either dead, decaying, or somehow in perfect working condition despite 50k+ power-on hours (I made sure to stress test them with `badblocks`), and all sorts of forgotten gear that still had some life left in it.
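As a concrete illustration of the stress test mentioned above, a destructive read-write `badblocks` pass might look like this. WARNING: the `-w` mode erases everything on the disk, and `/dev/sdX` is a placeholder for the drive under test.

```shell
# Destructive read-write surface test of an old drive.
# -w: write-mode test (DESTROYS all data on the disk)
# -s: show progress, -v: verbose
# -b 4096: test in 4 KiB blocks to match modern drive sectors
badblocks -wsv -b 4096 /dev/sdX
```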

I realized I actually really enjoy working within constraints - taking modest, mismatched hardware and trying to squeeze something meaningful out of it.

So... this is the result of about 4–5 months of that process.

It’s not the most powerful homelab by any stretch. I'm sure many of you are running far more capable setups, and the MacBook Air I'm posting this from can run circles around the hardware in it, but that wasn’t really the goal. This was more about experimentation, iteration, and seeing what I could build with what I had.

And with that, here’s the current state of my humble homelab. I'd love to hear your thoughts, roasts, advice or questions if you have any.

This setup is by no means in its final state. For instance, I've been working on moving some of my infrastructure over to being managed by Kubernetes, such as my project "x", which will one day live in the cloud. This homelab is already far more cohesive than anything I’ve built before, mostly because in the past I always prioritized work-related needs over my own setup. While my body has healed, I am still looking for work in this challenging tech job market and therefore have the time to continue iterating and refining.

PS: There are some things I didn't cover, because it was getting too detailed. For example, on every machine that has a hard drive, I also have a docker stack that monitors SMART, does drive health analysis and routine scheduling of drive tests and more - a tool I plan to release as OSS, because I wanted something a little different (more advanced) than Scrutiny (Python/Flask + JS front-end... maybe rewritten in Go some day). Stuff like that, but I might post some more detail in the comments if anyone is interested.

Also, for context on the “physical infrastructure”: everything is running in a basement workshop inside a converted storage closet.

To deal with heat, I ended up improvising an extraction system using an old workshop fan mounted inside a cut-up plant pot, wired to a switch. It’s very much a MacGyver solution, but surprisingly effective at keeping air moving and temperatures under control (all hard drives stay at a comfortable 40-45°C / 104-113°F under normal load and remain in the safe zone under heavy load).

Anyways, thanks! Have a great day!

⬆️ 442 points | 💬 48 comments


An end to my home labbing journey

image

Sorry for such a depressing title and post. I just wanted a space to air out my frustrations and my sadness.

First, before I get to the depressing part, I want to talk about my journey. I got interested in self-hosting during my undergraduate studies. I graduated in 2024 and started this journey; initially I did not want to spend any money on it, so I used a really old laptop as the NAS for my services and kept it accessible only through a private network.

Last month I decided to build a proper setup: I bought thermal paste and a new CMOS battery, cleaned up my laptop, and also bought a domain and set up a Cloudflare tunnel (I don't have a static IP).

Things were going well for a month, but then issues started to occur. The system heats up to 71°C (before the fresh paste it hit 90°C); I traced the problem to the exhaust fan. Then came a failing hard disk, RAM problems, and the system generally being extremely slow due to aging hardware.

With current RAM prices, and storage generally being extremely costly, it's a massive investment that my current salary simply cannot afford.

Again, sorry for such a depressing post. I want to thank this community for all the help and resources it provided me to even start this journey; I learnt a lot, guys. Looks like my journey ends here.

Thank you.

⬆️ 369 points | 💬 88 comments


Tailscale improves free tier, 3 free users is now 6

Free tier users bumped from 3 to 6. Smart move, because the difference between 3 and 5 was why I started on Netbird for my household.

Official Announcement: https://tailscale.com/blog/pricing-v4

⬆️ 376 points | 💬 65 comments


What self hosting mistake would you warn beginners about?

I’m still pretty new to self hosting and I thought this could be a useful question for people like me too. What mistake taught you the most once you got into self hosting?

Edit: Thanks a lot to everyone here, I really appreciate all your advice!

⬆️ 346 points | 💬 356 comments


so borg-webui was just a bait and switch?

https://www.reddit.com/gallery/1smsed9

So I've been using karanhudia/borg-ui for a few months now, very happy about it.

I recently upgraded to the newly announced v2.0, and all I get is spam about upgrading to a Pro version, plus a notice that I seemingly now have a limited trial left.

What the heck? This app is built entirely on open-source technology, and now the author is deciding to charge for it?

Has anyone considered forking? Or is there a truly FOSS community alternative?

I'm tired of using borgmatic, I need a decent solution to schedule borg backups in my NAS. I can't possibly be the only one in this situation. Any thoughts?

edit: alternatives found in this comment

edit2: author answered here

⬆️ 372 points | 💬 172 comments


Digest: r/selfhosted: Apr 03 - Apr 10, 2026

Published: 1 month ago | Author: System

Cloudflare is the most successful "Man-in-the-Middle" in history

I was thinking about the NSA scandals from years ago, the wiretapping, the underwater cables, the backdoors in datacenters. It was a massive international drama.

But then you look at Cloudflare. By design, they are a massive, legal Man-in-the-Middle. They decrypt, inspect, and re-encrypt the traffic of millions of websites. We’ve reached a point where "privacy" means "hidden from everyone EXCEPT Cloudflare."

It’s the ultimate irony: developers are so obsessed with "security" that they put their entire stack behind a single US-based entity that holds the private keys to half the internet. We basically did the NSA's job for them, and we did it voluntarily because the dashboard is pretty and the CDN is free.

Am I the only one who finds this centralization terrifying, or have we just accepted that true end-to-end privacy is dead in the name of DDoS protection?

⬆️ 1,149 points | 💬 249 comments


we don't do "works without your own server" here

image

⬆️ 893 points | 💬 60 comments


Three weeks ago I was still subbed to Apple Music, Netflix, HBO, Libro.fm, etc. A lot happened in those weeks lol!

image

Hello all! Three weeks ago I asked a friend of mine to help me set up a Plex media server, I purchased a mini PC on the cheap (not pictured), an enclosure (not pictured), some hard drives, and while we were grabbing the supplies I saw this adorable little Pironman and grabbed it + a Pi5 as well. Setting up the Plex server with the arr stack was so fun and easy that I looked into what else I could host, wound up switching all of my music, e-books, audiobooks, podcasts, etc over to my new server. I have my Kobo e-reader working with Grimmory (huge shout out to those devs).

In the process of implementing the 3, 2, 1 method for backup and eventually will switch my cloud storage over too!

These selfhosted projects have been such a joy to do, and I am so grateful to the community who has created such amazing software (I’ve made sure to tip the devs when possible). Also, I’ve loved doing these so much that I’ve begun writing my own project, inspired by Homarr, as a sort of home management dashboard (tons of these exist, but none have the features I’m looking for, so I’m writing it and will release it in the future).

Anyway! This is my cute little setup, I had to get a mini monitor for the adorable lil Pironman. I have a mini keyboard too but can’t remember where I put it lol.

⬆️ 636 points | 💬 78 comments


I built Stirling-PDF but for images

https://www.reddit.com/gallery/1sbgjxk

Open Source. One Docker container. Browser-based. Everything local.

Your files never leave your machine.

30+ tools. Resize, crop, rotate, compress, convert, strip metadata, watermarks, reusable pipelines, full REST API, background removal, object eraser, OCR, face/license plate blur, up-scaling and more.

I'm building this to be genuinely useful, not another AI-wrapped gimmick or subscription trap. No cloud lock-in, no "sign up to continue," no features paywalled behind a pro tier. Just a tool that does what it says.

I'm actively looking for feedback from people who would actually use this. What tools would you want? What's missing? What's annoying? What would make you switch from whatever you're using now?

GitHub: https://github.com/stirling-image/stirling-image
Documentation: https://stirling-image.github.io/stirling-image/

⬆️ 396 points | 💬 81 comments


I thought my VPS was hardened, but it was compromised and I can't figure out how. Please help!

I have a VPS that I use to reverse proxy incoming web requests to my self-hosted services at home over wireguard. I got an alert recently that CPU usage was spiking, so I logged in to see a newly-created user running masscan.

The VPS runs 3 publicly-exposed services: nginx, ssh, and wireguard.

It was hardened as follows:

  • ssh password auth off, root login disabled, pubkey auth only
  • ssh on non-standard port
  • root login is locked in /etc/shadow
  • fail2ban is enabled on ssh
  • packages updated to latest (debian 13) with automatic security package updates
  • ufw is enabled, only allowing the 3 services mentioned above

I checked, and I can't find any relevant CVEs for nginx, ssh, or wireguard.

The logs show the following.

At 07:38, I see an authentication failure, followed by systemd unexpectedly rebooting:

Mar 30 07:38:20  login[695]: pam_unix(login:auth): check pass; user unknown
Mar 30 07:38:20  login[695]: pam_unix(login:auth): authentication failure; logname= uid=0 euid=0 tty=/dev/tty1 ruser= rhost=
Mar 30 07:38:22  systemd[1]: Received SIGINT.
Mar 30 07:38:22  systemd[1]: Activating special unit reboot.target...

Shortly after the reboot (07:40), I can see a login session for "userb":

Mar 30 07:40:22 login[696]: pam_unix(login:session): session opened for user userb(uid=1001) by userb(uid=0)
Mar 30 07:40:22 systemd[1]: Created slice user-1001.slice - User Slice of UID 1001.
Mar 30 07:40:22 systemd[1]: Starting user-runtime-dir@1001.service - User Runtime Directory /run/user/1001...
Mar 30 07:40:22 systemd-logind[602]: New session 1 of user userb.
Mar 30 07:40:22 systemd[1]: Finished user-runtime-dir@1001.service - User Runtime Directory /run/user/1001.
Mar 30 07:40:22 systemd[1]: Starting user@1001.service - User Manager for UID 1001...
Mar 30 07:40:22 (systemd)[1085]: pam_unix(systemd-user:session): session opened for user userb(uid=1001) by userb(uid=0)
Mar 30 07:40:22 systemd-logind[602]: New session 2 of user userb.

Notably, there's no accompanying ssh login entry!! The user is in the sudo group, and starts running commands via sudo at 07:41. They install curl, update sshd_config to allow password login, reload sshd, then ssh in. Weirdly, the home directory isn't created until 07:43, which is when they ssh in.

The shell is changed to bash, then their bash history shows the following, where they bypass ufw, set up screen, and run masscan.

sudo touch vnc.txt && sudo chmod 777 vnc.txt
sudo iptables -I INPUT -j ACCEPT
sudo apt-get install screen libpcap-dev iptables masscan -y
sudo iptables -A INPUT -p tcp --dport 61000 -j DROP
screen
sudo touch res.txt && sudo chmod 777 res.txt
sudo masscan 0.0.0.0/0 -p22 --banners --source-port 61000 --rate 50000 --exclude 255.255.255.255 -oL res.txt
sudo masscan 0.0.0.0/0 -p22 --banners --source-port 61000 --rate 500000 --exclude 255.255.255.255 -oL res.txt

For now, I've killed the user, fixed all the hardening, and disconnected wireguard, leaving it as a honeypot of sorts. I've put the full logs here: https://pastebin.com/2M3esRg2

Am I missing something? How did someone log in without going through ssh? Is there some unknown vuln here? I was suspicious of the login, so I checked with my VPS provider, and they said they're not seeing anything unusual on their backend or in VNC access to the VM console, though I'm not sure how hard they checked...
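One way to dig further, assuming systemd-journald is keeping the logs: filter session entries by syslog identifier, since console (tty) logins are tagged `login` while remote ones are tagged `sshd`. The date in the example below just matches the timestamps quoted in the post.

```shell
# Sessions opened via the local console vs. via ssh.
# -t filters by syslog identifier: tty logins appear under "login",
# remote logins under "sshd".
journalctl -t login -t sshd --since "2026-03-30 07:00" | grep "session opened"

# wtmp also records the originating tty or host for each login.
last -F
```

If a session shows up under `login` on tty1 with no matching `sshd` entry, it came through the machine's console (on a VPS, typically the provider's VNC/serial console), not over the network.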

Thanks!

⬆️ 546 points | 💬 100 comments


Nomad Mk3: Offline, Open-source, low-power self-hosted media server

image

Howdy!

I’m back with Nomad Mk3, a pocket-sized, fully self-hosted media server that runs on an ESP32-S3. The goal is simple: a super cheap, ultra low-power way to host your own media without needing the internet, cloud services, or a full server setup.

Once configured, Nomad creates its own Wi-Fi network and serves movies, shows, music, books, images, and files directly to any device with a browser. Multiple users can connect at the same time and stream independently, all completely offline.

Mk3 focuses on making everything smoother and more reliable. This includes a new native video player, an improved music page with queue building, and much more reliable indexing and backend handling.

The main idea behind this project is to go below the typical self-hosted stack. No Raspberry Pi, no Docker, no maintenance. Just flash it, load your media onto an SD card, and it works. The initial setup is more manual, but the system allows for a more flexible and portable way to host your media.

The entire project is open source, both the firmware and the web interface. I strongly recommend the DIY route since I’ve tried to make setup as straightforward as possible. If you can plug in a USB cable and follow instructions, you can build one in under an hour.

GitHub:
https://github.com/Jstudner/jcorp-nomad

Build guide (Instructables):
https://www.instructables.com/Jcorp-Nomad-Mini-WIFI-Media-Server/

If you really do not want to build one, I also offer prebuilt units here:
https://nomad.jcorptech.net

If you’re into self-hosting and like the idea of small, offline-first systems, I’d love to hear what you think or what you’d want to see next!

Thx for reading!

-Jackson

⬆️ 523 points | 💬 69 comments


My selfhosted pack

image

After months of tinkering, this is the setup I actually stuck with. Media on Jellyfin, photos on Immich, files on Nextcloud, passwords on Vaultwarden, ads blocked with AdGuard Home, and everything routed through NSL.SH. Happy to answer questions about any part of the stack.

⬆️ 516 points | 💬 114 comments


Digest: r/selfhosted: Mar 27 - Apr 03, 2026

Published: 1 month ago | Author: System

I'm a server

image

⬆️ 869 points | 💬 95 comments


I built a fully local, open-source thermal printer appliance - no cloud, no subscriptions, no accounts

https://www.reddit.com/gallery/1s9jz4f

I built a thermal printer appliance that runs entirely on your local network. No cloud, no accounts, no subscriptions. Turn a dial, press a button, and it prints weather, news, RSS feeds, email, or whatever you need on 58mm receipt paper.

Self-hosted details:

  • Runs on a Raspberry Pi Zero W on your local network
  • Settings UI is password-protected and only accessible locally from your phone or computer - no app, no cloud dashboard
  • API keys are stored on the device.
  • Many modules run completely offline: sudoku, mazes, quotes, journal prompts, text notes, system monitor
  • You bring your own API keys for services like NewsAPI
  • 16 modules across content (weather, news, RSS, email, calendar, astronomy), games (sudoku, mazes, choose-your-own-adventure), and utilities (QR codes, webhooks, system monitor)
  • Assign any combination to 8 channels on a rotary dial

The enclosure is hand-built from walnut and brass - I spent six years as a furniture maker, so the hardware side matters to me as much as the software.

The whole thing is open source: https://github.com/travmiller/paper-console

If you have a Pi and a 58mm thermal printer you can run the software yourself. Happy to answer questions.

More info and build photos: https://travismiller.design/paper-console/

⬆️ 850 points | 💬 60 comments


At least write the advertisement post yourself

Using AI as a help for coding is one thing, okay I do that too for private projects, but its extremely disrespectful to even generate the advertisement post with AI. If you don’t take your time to TELL ME what your tool even does and need an AI agent for it, I will not take my time to read through the generated text and click on your github. There are so many blatantly AI generated text posts here full of the same nonsense phrases. Someone who audited their tool and knows what it does doesn‘t need AI to write the text for him. Hate me all you want for that.

⬆️ 491 points | 💬 127 comments


PSA: Update to Jellyfin 10.11.7 immediately (Critical Security Fixes)

The Jellyfin team just dropped v10.11.7 and the patch notes contain a pretty heavy warning. It’s listed as a minor release, but the devs have explicitly stated:

"WARNING: This release contains several extremely important security fixes. These vulnerabilities will be disclosed in 14 days as per our security policy. Users of all versions prior to 10.11.7 are advised to upgrade immediately."

⬆️ 415 points | 💬 152 comments


Plezy - open-source Plex client with HDR, offline downloads, watch together and more

image

Hello,

I built an open-source Plex client called Plezy and figured this community would appreciate it.

It uses mpv for playback, so codec support is excellent - HEVC, AV1, VP9, DTS, TrueHD, ASS/SSA subtitles, you name it. No Plex Pass required for remote streaming.

Highlights:

- HDR & Dolby Vision support (Android, iOS, macOS, Windows)

- Offline downloads - save media for offline viewing

- Watch Together - synced playback with friends via WebSocket relay

- Live TV - EPG guide, channel tuning

- Cross-platform - Windows, macOS, Linux, iOS, Android, Android TV

- Gamepad & keyboard navigation - works for couch setups

- Wide codec support - mpv-based, plays practically everything

It's built with Flutter, fully open-source on GitHub. Desktop and sideloaded mobile builds are free. App Store / Play Store versions are a one-time purchase (no subscriptions, no IAP).

Linux users: .deb, .rpm, .pkg.tar.zst, portable tar.gz, NixOS package, and Homebrew on macOS are all available.

GitHub: https://github.com/edde746/plezy

Happy to answer any questions. AI used in the development.

⬆️ 437 points | 💬 271 comments


You can now run Google's Gemma 4 model on your local device! (6GB RAM)

Hello everyone! Google just released their new open-source model family: Gemma 4. This means you can now run a ChatGPT like model at home.

There are four models, and they all have thinking and multimodal capabilities. There are two small ones, E2B and E4B, and two large ones, 26B-A4B and 31B. The 31B model is the smartest, but 26B-A4B is much faster due to its MoE architecture. E2B and E4B are great for phones and laptops.

To run the models locally (laptop, Mac, desktop, etc.), we at Unsloth converted these models so they can fit on your device. You can now run and train the Gemma 4 models via Unsloth Studio: https://github.com/unslothai/unsloth

Recommended setups:

  • E2B / E4B: 10+ tokens/s in near-full precision with ~6GB RAM / unified mem. 4-bit variants can run on 4-5GB RAM.
  • 26B-A4B: 30+ tokens/s in near-full precision with ~30GB RAM / unified mem. 4-bit works on 16GB RAM.
  • 31B: 15+ tokens/s in near-full precision with ~35GB RAM.

No GPU is required, especially for the smaller models, but having one will increase inference speeds (~80 tokens/s). With an RTX 5090 you can get 140 tokens/s of throughput, which is way faster than ChatGPT.
Even if you don't meet the requirements, you can still run the models (e.g. on a 3GB-RAM CPU), but inference will be much slower. Link to Gemma 4 GGUFs to run.

Example of Gemma 4 26B-A4B running

You can run or train Gemma 4 via Unsloth Studio:

We've now made installation take only 1-2 minutes:

macOS, Linux, WSL:

curl -fsSL https://unsloth.ai/install.sh | sh

Windows:

irm https://unsloth.ai/install.ps1 | iex
  • The Unsloth Studio Desktop app is coming very soon (this month).
  • Tool-calling is now 50-80% more accurate and inference is 10-20% faster

We recommend reading our step-by-step guide which covers everything: https://unsloth.ai/docs/models/gemma-4

Thanks so much once again for reading and let me know if you have any questions.

⬆️ 414 points | 💬 107 comments


How to make your own VPN to avoid the UK government's Orwellian future

I know it is very difficult to stop people using a VPN, but if the individual VPN companies fold I want to make sure I have a safe backup.

Can anyone give me a step-by-step guide to making my own VPN for privacy and for accessing sites that the UK considers bad (which will probably include half the internet by next year), plus a shopping list of items if needed?

I am not a tech genius, nor do I want to do anything heinous on the internet, so a fairly simple VPN will do me just fine. any help towards this would be very much appreciated!
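Since this is a how-to question, here is a rough sketch of the usual answer: rent a small VPS in a jurisdiction you trust and run WireGuard on it. The addresses, port, and file names below are placeholders, and there are guided install scripts that automate all of this if the manual route feels intimidating.

```shell
# Minimal self-hosted WireGuard server sketch (Debian/Ubuntu VPS, run as root).
# Addresses, port, and paths are placeholders for your own setup.
apt install -y wireguard

# Generate a server keypair.
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub

# Example /etc/wireguard/wg0.conf (fill in the keys by hand):
#   [Interface]
#   Address    = 10.8.0.1/24
#   ListenPort = 51820
#   PrivateKey = <contents of server.key>
#
#   [Peer]                # one block per client device
#   PublicKey  = <client's public key>
#   AllowedIPs = 10.8.0.2/32

# Bring the tunnel up and enable it at boot.
wg-quick up wg0
systemctl enable wg-quick@wg0
```

Each client config then points at the VPS's public IP on that port and routes 0.0.0.0/0 through the tunnel. You also need IP forwarding and a NAT rule on the server so client traffic can reach the internet; the official WireGuard quick-start covers that part.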

⬆️ 408 points | 💬 183 comments


Local multiplayer games remotely 🎮

https://github.com/dmksnnk/star

My girlfriend wanted to play Stardew Valley multiplayer with her sister, who lives in another country. Well, heck, I'm a programmer, so I could hack something together quickly and learn something new along the way. QUIC sounded cool. It all seemed easy until I realized this would involve NAT traversal. Half a year and 3 different versions later: I have a basic working version that can establish a P2P connection between users using NAT hole-punching and, if that fails, forwards UDP traffic via a relay.

Built with Go, quic-go, and HTML templates.
Hope this can be useful to someone else :)

⬆️ 399 points | 💬 81 comments


Digest: r/selfhosted: Mar 20 - Mar 27, 2026

Published: 1 month ago | Author: System

that HDD churn

image

⬆️ 408 points | 💬 25 comments


Finally understood why self-hosting felt hard

image

Took me way too long to realize the hard part was never Immich itself

⬆️ 576 points | 💬 155 comments


I give you: Huggies-Server

https://www.reddit.com/gallery/1s23o6n

Budget case with great airflow

⬆️ 505 points | 💬 40 comments


M$ will use your data to train AI unless you opt out

image

Microsoft has just sent this e-mail, which says your data will be used to train their AI unless you explicitly opt out.

They supposedly explain how to do it, but conveniently "forget" to include the actual link, forcing you to navigate a maze of pages to find it. It is a cheap move and totally intentional.

To save you all the hassle, here is the direct route to opt out: go to https://github.com/settings/copilot/features and search for "Allow GitHub to use my data for AI model training".

⬆️ 345 points | 💬 66 comments


This is the reason you shouldn't host your own email... Microsoft says 🖕to 200k user ISP.

https://www.ispreview.co.uk/index.php/2026/03/microsoft-domain-blacklist-causes-email-problems-for-uk-isp-zen-internet.html

Microsoft seemingly doesn't care that they've blacklisted the IPs of a fairly large and well-respected UK ISP. If that ISP can't get help, what chance does an individual have?

Email does feel like a cartel in many respects. I look forward to the flurry of stories about how you've hosted your own email since the 90s without issue. But the truth comes from those who have had issues and how painful they were to resolve.

⬆️ 637 points | 💬 271 comments


Adorama shipped 2x 14 TB drives without any paper or bubble wrap

image

Sharing so others may avoid this hassle.

I was excited to set up a new NAS for my homelab, but the hard drives were shipped without any padding. I'm just shocked someone could be this careless.

Will update once/if they resolve this.

Edit:
Yes, I understand the retail boxes have padding. For the corner to be smashed like that, the box would need to be hit pretty hard. Also, the inside of the shipping box is scraped up from the drives bouncing around.

Since they are charging retail++ for these drives, I think it's fair to want them neither shaken nor stirred.

⬆️ 387 points | 💬 73 comments


My Lifesaver: Use smart plug with server

Hi all,

I'd just like to share a finding of mine which may be helpful for some of you:

I am currently traveling and was very nervous when I realized that all my Proxmox VMs were down for unknown reasons. No access to Home Assistant, no Frigate (cameras), no Paperless ngx nor any other local app, which I usually access via VPN (self-hosted wg-easy). Of course, the VPN did not work either. This was quite frustrating.

Then I realized that (1) my home server is plugged into a Meross smart plug, mainly to track its power consumption, and (2) I had set up a second VPN (WireGuard) directly in my router. Although I usually control the plug with HA, I was luckily able to connect over that WireGuard VPN and remotely switch the plug off and on with the help of the Meross app. And voilà: all VMs were up again.

So, the moral of the story: Using a smart plug for your server that can be controlled outside of the Home Assistant setup can avoid some pain!

**EDIT:**

Since you asked: Claude thankfully helped me identify the problem. My Proxmox server (Dell OptiPlex 3090) went offline due to an Intel e1000e NIC driver hang; the onboard network card froze and couldn't recover on its own. Fixed it by reducing the TX ring buffer from 4096 to 256 (`ethtool -G nic0 tx 256`) and adding a small watchdog script that automatically resets the NIC if it hangs again.
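The OP doesn't share the watchdog itself, but a minimal sketch of one might look like this, run every minute from cron or a systemd timer. The interface name and gateway IP are assumptions to fill in for your own network.

```shell
#!/bin/sh
# Minimal NIC watchdog sketch: if the gateway stops answering pings,
# bounce the interface to recover from a hung driver (e.g. e1000e).
# IFACE and GATEWAY are placeholders for your own setup.
IFACE=nic0
GATEWAY=192.168.1.1

# -c 3: three probes, -W 2: 2s per-probe timeout, -I: bind to the NIC
if ! ping -c 3 -W 2 -I "$IFACE" "$GATEWAY" >/dev/null 2>&1; then
    ip link set "$IFACE" down
    sleep 2
    ip link set "$IFACE" up
    logger "nic-watchdog: reset $IFACE after ping failure"
fi
```

Pairing this with the smart plug gives two layers of recovery: the script handles a hung NIC, and the plug handles everything the script can't.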

⬆️ 354 points | 💬 108 comments


NOMAD | self-hosted trip planner with real-time collaboration, interactive maps, budgets, packing lists, and more

image

I've been working on NOMAD, a self-hosted trip planner that lets you organize trips either solo or together with friends and family in real time.

You can try the demo at https://demo-nomad.pakulat.org (resets hourly) or check out the repo: https://github.com/mauriceboe/NOMAD

I built it because every time my friends and I planned a trip, we ended up with a mess of Google Docs, WhatsApp groups, and shared spreadsheets. I wanted one place where we could plan everything together without relying on cloud services that harvest our data.

What it does:

  • Plan trips with drag & drop day planning, place search (Google Places or OpenStreetMap), and route optimization
  • Real-time collaboration via WebSocket: changes show up instantly for everyone
  • Collab page with group chat, shared notes, polls, and activity sign-ups so you can see who's joining what
  • Budget tracking with per-person splitting, categories, and multi-currency support
  • Packing lists with categories, progress tracking, and smart suggestions
  • Reservations for flights, hotels, restaurants with status tracking and file attachments
  • Weather forecasts for your destinations
  • PDF export of your complete trip plan
  • Interactive Leaflet map with marker clustering and route visualization
  • OIDC/SSO support (Google, Apple, Keycloak, Authentik, etc.)
  • Vacation day planner with public holidays for 100+ countries
  • Visited countries atlas with travel stats

All the collaboration features are optional; it works perfectly fine as a solo planner too. The addon system lets you enable/disable features like packing lists, budgets, and documents, so you can keep it as lean or as full-featured as you want.

⬆️ 427 points | 💬 71 comments


Digest: r/selfhosted: Mar 13 - Mar 20, 2026

Published: 1 month ago | Author: System

These cameras were supposed to be e-waste. No RTSP, no docs, no protocol anyone's heard of. I reverse-engineered 100 000 URL patterns to make them work.

https://www.reddit.com/gallery/1ruhgeq

Had some old Chinese NVRs from 2016. Spent 2 years on and off trying to connect them to Frigate. Every protocol, every URL format, every Google result. Nothing. All ports closed except 80.

Sniffed the traffic from their Android app. They speak something called BUBBLE - a protocol so obscure it doesn't exist on Google.

Got so fed up with this that I built a tool that does those 2 years of searching in 30 seconds. Built specifically for the kind of crap that's nearly impossible to connect to Frigate manually.

You enter the camera IP and model. It grabs ALL known URLs for that device - and there can be a LOT of them - tests every single one and gives you only the working streams. Then you paste your existing frigate.yml - even with 500 cameras - and it adds camera #501 with main and sub streams through go2rtc without breaking anything.

67K camera models, 3.6K brands.

GitHub: https://github.com/eduard256/Strix

docker run -d --name strix --restart unless-stopped eduard256/strix

Edit: Yes, AI tools were actively used during development, like pretty much everywhere in 2026. Screenshots show mock data showing all stream types the tool supports - including RTSP. It would be stupid to skip the biggest chunk of the market. If you're interested in the actual camera from my story there's a demo gif in the GitHub repo showing the discovery process on one of the NVRs I mentioned.

⬆️ 955 points | 💬 123 comments


[Rant] So sick of every other post being blatantly written by AI

This is not about vibe-coded apps. It's about the literal posts. It looks like every other post on here is written by some AI chatbot. Of course, they have been for a while, but is it just me or has it been getting even worse?

I just can't understand it. Why on earth would you generate a *Reddit post* with AI?

Recently I've been thinking about looking for private communities, but I keep realizing I wouldn't want to join one in the first place. There's tremendous value in having new people be able to participate whenever they want and having a space to ask questions. That's something that needs to be preserved and protected. Especially from the likes of ChatGPT.

This sucks. I don't know how to make it better, and I'm afraid that no-one really does.

Edit: To the people who think there are too many posts complaining about AI: try sorting this sub by New. Those of us who do that filter out the most egregious slop; that's why you're not seeing it.

⬆️ 617 points | 💬 169 comments


My neighbor offered me this as a thank-you because I supported him a lot while he was struggling with depression. What can I do with it? It's an M720Q.

image

⬆️ 826 points | 💬 140 comments


TapMap: see where your computer connects on a world map (open source)

image

I built a small open source tool that shows where your computer connects on a world map.

It reads local socket connections, resolves IP addresses using MaxMind GeoLite2, and visualizes them with Plotly.

Runs locally. No telemetry.

Windows build available.

GitHub: https://github.com/olalie/tapmap
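The "reads local socket connections" step can be sketched with the stdlib alone. On Linux, `/proc/net/tcp` lists connections with addresses encoded as little-endian hex. This is an assumed illustration of the data source, not TapMap's actual code (the real tool then resolves each IP with MaxMind GeoLite2 and plots with Plotly):

```python
# Parse established TCP connections from /proc/net/tcp (Linux only).
# Addresses appear as 'HEXIP:HEXPORT' with the IPv4 bytes little-endian,
# e.g. '0100007F:1F90' is 127.0.0.1:8080.
import socket
import struct


def parse_hex_addr(hex_addr: str) -> tuple[str, int]:
    """'0100007F:1F90' -> ('127.0.0.1', 8080)."""
    ip_hex, port_hex = hex_addr.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)


def established_remotes(path: str = "/proc/net/tcp") -> list[tuple[str, int]]:
    """Remote (ip, port) pairs for every ESTABLISHED connection."""
    remotes = []
    with open(path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            # fields[2] = remote address, fields[3] = state ('01' = ESTABLISHED)
            if fields[3] == "01":
                remotes.append(parse_hex_addr(fields[2]))
    return remotes
```

Each remote IP would then be looked up in the GeoLite2 database to get coordinates for the map.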

⬆️ 665 points | 💬 48 comments


We built an open-source headless browser that is 9x faster and uses 16x less memory than Chrome over the network

Hey r/selfhosted,

We've been building Lightpanda for the past 3 years

It's a headless browser written from scratch in Zig, designed purely for automation and AI agents. No graphical rendering, just the DOM, JavaScript (v8), and a CDP server.

We recently benchmarked against 933 real web pages over the network (not localhost) on an AWS EC2 m5.large. At 25 parallel tasks:

  • Memory, 16x less: 215MB (Lightpanda) vs 2GB (Chrome)
  • Speed, 9x faster: 5 seconds vs 46 seconds

Even at 100 parallel tasks, Lightpanda used 696MB where Chrome hit 4.2GB. Chrome's performance actually degraded at that level while Lightpanda stayed stable.

Full benchmark with methodology: https://lightpanda.io/blog/posts/from-local-to-real-world-benchmarks

It's compatible with Puppeteer and Playwright through CDP, so if you're already running headless Chrome for scraping or automation, you can swap it in with a one-line config change:

docker run -d --name lightpanda -p 9222:9222 lightpanda/browser:nightly

Then point your script at ws://127.0.0.1:9222 instead of launching Chrome.
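In Playwright (Python), for example, the swap is just connecting to that CDP endpoint instead of launching a browser. A sketch, assuming Playwright's documented `connect_over_cdp` call; the Playwright lines are commented out because they need the container above running:

```python
# One-line swap sketch: build the CDP endpoint the automation script
# connects to instead of launching its own Chrome.
def cdp_endpoint(host: str = "127.0.0.1", port: int = 9222) -> str:
    """WebSocket URL of the running Lightpanda CDP server."""
    return f"ws://{host}:{port}"


# With Playwright installed and the container up, the change would be:
#   from playwright.sync_api import sync_playwright
#   with sync_playwright() as p:
#       browser = p.chromium.connect_over_cdp(cdp_endpoint())
#       page = browser.new_page()
```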

It's in active dev and not every site works perfectly yet. But for self-hosted automation workflows, the resource savings are significant. We're AGPL-3.0 licensed.

GitHub: https://github.com/lightpanda-io/browser

Happy to answer any questions about the architecture or how it compares to other headless options.

⬆️ 894 points | 💬 76 comments


Booklore is gone.

I was checking their Discord for some announcement and it vanished.

GitHub repo is gone too: https://github.com/booklore-app/booklore

Gotta love AI-made apps… they disappear faster than they launch.

⬆️ 852 points | 💬 469 comments


Open source doesn’t mean safe

As a self-hosted project creator (homarr) I’ve observed the space grow in the past few years and now it feels like every day there is a new shiny selfhosted container you could add to your stack.

The rise of AI coding tools has enabled anyone to make something work for themselves and share it with the community.

Whilst this is fundamentally great, I’ve also seen a bunch of PSAs on the sub warning about low-quality projects with insane vulnerabilities.

Now, I am scared that this community could become an attack vector.

A whole GitHub project, Discord server, and Reddit announcement could be made with (or entirely by) an AI agent.

Now, imagine this new project has a Docker integration and asks you to mount your Docker socket. Suddenly your whole server could be compromised by malicious code (escaping the container by mounting host system files)
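To make the socket risk concrete, here's a stdlib-only illustration (hypothetical code; the `UnixHTTPConnection` helper is mine, not a real library class, though the routes named in the comments are real Docker Engine API endpoints). Anything that can open the socket can drive the full Docker API:

```python
# Why a mounted docker.sock is root-equivalent: it exposes the Docker Engine
# HTTP API over a unix socket, and the API can start privileged containers.
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """Plain HTTP over a unix socket, using only the stdlib."""

    def __init__(self, sock_path: str):
        super().__init__("localhost")  # host is ignored; we dial the socket
        self.sock_path = sock_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)


# A container with the socket mounted could simply do:
#   conn = UnixHTTPConnection("/var/run/docker.sock")
#   conn.request("GET", "/containers/json")  # enumerate every container
# ...or POST /containers/create with Binds=["/:/host"] and effectively
# own the host filesystem.
```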

Some replies would be "read the code, it's open source", but if the Docker image differs from the repo's source you'd never know unless you manually check the hash (or open up the image itself)

A takeaway from this would be to set up usage limits and disable auto-refill on every 3rd-party API you use, and to isolate what you don't trust.

TLDR:

Running an un-trusted docker container on your server is not experimentation — it’s remote code execution with extra steps (manual AI slop /s)

ps: reference this post whenever someone finds out they’re part of a botnet they joined through a malicious vibe-coded project

⬆️ 743 points | 💬 113 comments


My humble home lab / self-hosted setup

image

In September of last year I started my homelab/self-hosted journey. I bought the following around that time (except the Pi + case, purchased just last month):

Beelink mini PC (N150+16GB RAM) - $175

2x WD Elements 14 TB external HDD - $170/ea

LG external Bluray drive - $130

Raspberry Pi Zero 2W - $15

Case for Raspberry Pi printed at my library - $0.59

The mini PC runs Ubuntu primarily for Jellyfin but also Pihole and Tunarr (for creating custom TV channels). My Raspberry Pi is my backup DNS for Pihole. The Bluray drive is for ripping our DVD/Bluray/UHD collection (mostly picked up cheap at second hand stores). My Windows PC handles the ripping and any encoding via Handbrake. I save a backup of all my videos on one of the external HDDs and the other HDD is permanently attached directly via USB to my mini PC and serves as my Jellyfin storage drive. I use WinSCP to send the ripped videos from my Windows PC to my Jellyfin server.

There are some things I can definitely improve e.g. replacing the external USB drive someday with a server grade drive. I also may switch to AdGuard from Pihole per a recommendation from a friend but haven't gotten that far yet.

I've learned a ton about using CLI as well as troubleshooting in all senses of the word. I recently figured out how to get audio dramas/podcasts working properly in Jellyfin which has been a huge hurdle for me and seemingly hasn't really worked for other folks, so I'm looking forward to sharing that in the Jellyfin subreddit soon. But anyway, this has just been a fun hobby and given me ample opportunities to scratch my brain a bit.

There's nothing really glamorous about my setup but I now have a really functional, easy to use, and easy to maintain home media server that doubles as a broad ad blocker. My family and I have gotten a ton of value out of having our movies digitized and also cut all streaming services as we've taken the opportunity to pick up a bunch of cheap second hand discs. I also pull some videos from YouTube to host locally; the benefit at this point is that my kids are basically 100% shielded from advertisements yet we still have access to virtually everything we all enjoy at home or on the go (thanks, Tailscale). We also take advantage of our local library for books, Blurays, and audiobooks to supplement my self hosting.

I've seen some really elaborate and very cool self-hosted setups on this subreddit, but I felt like sharing mine as an example of a simple setup that just does a few things that improve my family's quality of life without much extra effort.

⬆️ 854 points | 💬 80 comments