Digest: r/selfhosted: Mar 13 - Mar 20, 2026
Published: 4 days ago | Author: System
These cameras were supposed to be e-waste. No RTSP, no docs, no protocol anyone's heard of. I reverse-engineered 100,000 URL patterns to make them work.
https://www.reddit.com/gallery/1ruhgeq
Had some old Chinese NVRs from 2016. Spent 2 years on and off trying to connect them to Frigate. Every protocol, every URL format, every Google result. Nothing. All ports closed except 80.
Sniffed the traffic from their Android app. They speak something called BUBBLE - a protocol so obscure it doesn't exist on Google.
Got so fed up with this that I built a tool that does those 2 years of searching in 30 seconds. Built specifically for the kind of crap that's nearly impossible to connect to Frigate manually.
You enter the camera IP and model. It grabs ALL known URLs for that device - and there can be a LOT of them - tests every single one and gives you only the working streams. Then you paste your existing frigate.yml - even with 500 cameras - and it adds camera #501 with main and sub streams through go2rtc without breaking anything.
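The discovery loop described above (expand every known URL pattern for a model, then probe each one) can be sketched roughly as follows. The template strings and probe logic here are illustrative assumptions, not Strix's actual database or code:

```python
import socket

# Hypothetical URL patterns for one camera model; the real tool draws
# these from its database of 67K models.
TEMPLATES = [
    "rtsp://{ip}:554/live/ch0",
    "rtsp://{ip}:554/h264/ch1/main/av_stream",
    "http://{ip}/video.cgi",
]

def expand(ip: str, templates=TEMPLATES) -> list[str]:
    """Fill the camera's IP into every known URL pattern."""
    return [t.format(ip=ip) for t in templates]

def port_open(ip: str, port: int, timeout: float = 1.0) -> bool:
    """Cheap reachability check before attempting a full RTSP/HTTP handshake."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Each candidate that answers would then get a real handshake (e.g. an RTSP `OPTIONS`/`DESCRIBE`) before being reported as a working stream.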
67K camera models, 3.6K brands.
GitHub: https://github.com/eduard256/Strix
docker run -d --name strix --restart unless-stopped eduard256/strix
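The frigate.yml merge described above targets Frigate's bundled go2rtc restream layout. A minimal entry for one added camera with main and sub streams might look like this (stream names and URLs are hypothetical, not tool output):

```yaml
go2rtc:
  streams:
    cam501_main:
      - rtsp://10.0.0.5:554/live/ch0   # discovered main stream
    cam501_sub:
      - rtsp://10.0.0.5:554/live/ch1   # discovered sub stream
cameras:
  cam501:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/cam501_main
          roles: [record]
        - path: rtsp://127.0.0.1:8554/cam501_sub
          roles: [detect]
```

Existing `go2rtc.streams` and `cameras` keys stay untouched; only the new camera's entries are appended.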
Edit: Yes, AI tools were actively used during development, like pretty much everywhere in 2026. Screenshots show mock data showing all stream types the tool supports - including RTSP. It would be stupid to skip the biggest chunk of the market. If you're interested in the actual camera from my story there's a demo gif in the GitHub repo showing the discovery process on one of the NVRs I mentioned.
⬆️ 955 points | 💬 123 comments
[Rant] So sick of every other post being blatantly written by AI
This is not about vibe-coded apps. It's about the literal posts. It looks like every other post on here is written by some AI chatbot. Of course, they have been for a while, but is it just me or has it been getting even worse?
I just can't understand it. Why on earth would you generate a /Reddit post/ with AI?
Recently I've been thinking about looking for private communities, but I keep realizing I wouldn't want to join one in the first place. There's tremendous value in having new people be able to participate whenever they want and having a space to ask questions. That's something that needs to be preserved and protected. Especially from the likes of ChatGPT.
This sucks. I don't know how to make it better, and I'm afraid no-one really does.
Edit: To the people who think there are too many posts complaining about AI: try sorting this sub by New. Those of us who do are filtering out the most egregious slop; that's why you're not seeing it.
⬆️ 617 points | 💬 169 comments
My neighbor offered me this as a thank-you because I supported him a lot while he was struggling with depression. What can I do with it? It's an M720Q.

⬆️ 826 points | 💬 140 comments
TapMap: see where your computer connects on a world map (open source)

I built a small open source tool that shows where your computer connects on a world map.
It reads local socket connections, resolves IP addresses using MaxMind GeoLite2, and visualizes them with Plotly.
Runs locally. No telemetry.
Windows build available.
GitHub:
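The pipeline in the description (enumerate sockets, geolocate, plot) starts with filtering out addresses not worth mapping. A minimal sketch of that first stage, stdlib-only; in the real tool the remote ends would come from something like `psutil.net_connections()`, with lookups against GeoLite2 and a Plotly scattergeo trace downstream:

```python
import ipaddress

def plottable_ips(remote_addrs: list[str]) -> list[str]:
    """Drop loopback, LAN, and link-local addresses; dedupe the rest.

    Only globally routable IPs are worth geolocating and putting on
    the world map.
    """
    keep = set()
    for addr in remote_addrs:
        ip = ipaddress.ip_address(addr)
        if ip.is_global:
            keep.add(addr)
    return sorted(keep)

# Each surviving IP would then be looked up in the GeoLite2 city
# database and fed to Plotly as a lat/lon point.
```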
⬆️ 665 points | 💬 48 comments
We built an open-source headless browser that is 9x faster and uses 16x less memory than Chrome over the network
Hey r/selfhosted,
We've been building Lightpanda for the past 3 years.
It's a headless browser written from scratch in Zig, designed purely for automation and AI agents. No graphical rendering, just the DOM, JavaScript (V8), and a CDP server.
We recently benchmarked against 933 real web pages over the network (not localhost) on an AWS EC2 m5.large. At 25 parallel tasks:
- Memory, 16x less: 215MB (Lightpanda) vs 2GB (Chrome)
- Speed, 9x faster: 5 seconds vs 46 seconds
Even at 100 parallel tasks, Lightpanda used 696MB where Chrome hit 4.2GB. Chrome's performance actually degraded at that level while Lightpanda stayed stable.
Full benchmark with methodology: https://lightpanda.io/blog/posts/from-local-to-real-world-benchmarks
It's compatible with Puppeteer and Playwright through CDP, so if you're already running headless Chrome for scraping or automation, you can swap it in with a one-line config change:
docker run -d --name lightpanda -p 9222:9222 lightpanda/browser:nightly
Then point your script at ws://127.0.0.1:9222 instead of launching Chrome.
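For Playwright's Python bindings, that swap might look like the sketch below: attach over CDP instead of launching a local Chrome. This assumes the container from the docker command above is running; the scrape function is illustrative and not executed here:

```python
def cdp_endpoint(host: str = "127.0.0.1", port: int = 9222) -> str:
    """Build the CDP WebSocket URL a Puppeteer/Playwright client attaches to."""
    return f"ws://{host}:{port}"

def scrape_title(url: str) -> str:
    """Fetch a page title via the already-running browser.

    Requires `pip install playwright` and the Lightpanda container
    listening on :9222.
    """
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        # The one-line change: connect_over_cdp() replaces chromium.launch()
        browser = p.chromium.connect_over_cdp(cdp_endpoint())
        page = browser.new_page()
        page.goto(url)
        title = page.title()
        browser.close()
        return title
```

Everything downstream of the connect call is unchanged from a stock headless-Chrome script.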
It's in active dev and not every site works perfectly yet. But for self-hosted automation workflows, the resource savings are significant. We're AGPL-3.0 licensed.
GitHub: https://github.com/lightpanda-io/browser
Happy to answer any questions about the architecture or how it compares to other headless options.
⬆️ 894 points | 💬 76 comments
Booklore is gone.
I was checking their Discord for some announcement and it vanished.
GitHub repo is gone too: https://github.com/booklore-app/booklore
Remember, love AI-made apps… they disappear faster than they launch.
⬆️ 852 points | 💬 469 comments
Open source doesn’t mean safe
As a self-hosted project creator (homarr) I’ve observed the space grow in the past few years and now it feels like every day there is a new shiny selfhosted container you could add to your stack.
The rise of AI coding tools has enabled anyone to make something work for themselves and share it with the community.
Whilst this is fundamentally great, I’ve also seen a bunch of PSAs on the sub warning about low-quality projects with insane vulnerabilities.
Now, I am scared that this community could become an attack vector.
A whole GitHub project, discord server, Reddit announcement could be made with/by an AI agent.
Now, imagine this new project has a docker integration and asks you to mount your docker socket. Suddenly your whole server could be compromised by malicious code (the socket lets a container escape by mounting host files).
Some replies would be "read the code, it's open source", but if the docker image differs from the repo's source you'd never know unless you manually check the image hash (or open up the image itself).
A takeaway from this would be to setup usage limits and disable auto-refill on every 3rd party API you use, isolate what you don’t trust.
TLDR:
Running an untrusted docker container on your server is not experimentation; it's remote code execution with extra steps (manual AI slop /s)
ps: reference this post whenever someone finds out they’re part of a botnet they joined through a malicious vibe-coded project
⬆️ 743 points | 💬 113 comments
My humble home lab / self-hosted setup

In September of last year I started my homelab/self-hosted journey. I bought the following around that time (except the Pi + case, purchased just last month):
Beelink mini PC (N150+16GB RAM) - $175
2x WD Elements 14 TB external HDD - $170/ea
LG external Bluray drive - $130
Raspberry Pi Zero 2W - $15
Case for Raspberry Pi printed at my library - $0.59
The mini PC runs Ubuntu primarily for Jellyfin but also Pihole and Tunarr (for creating custom TV channels). My Raspberry Pi is my backup DNS for Pihole. The Bluray drive is for ripping our DVD/Bluray/UHD collection (mostly picked up cheap at second hand stores). My Windows PC handles the ripping and any encoding via Handbrake. I save a backup of all my videos on one of the external HDDs; the other HDD is permanently attached via USB to my mini PC and serves as my Jellyfin storage drive. I use WinSCP to send the ripped videos from my Windows PC to my Jellyfin server.
There are some things I can definitely improve e.g. replacing the external USB drive someday with a server grade drive. I also may switch to AdGuard from Pihole per a recommendation from a friend but haven't gotten that far yet.
I've learned a ton about using CLI as well as troubleshooting in all senses of the word. I recently figured out how to get audio dramas/podcasts working properly in Jellyfin which has been a huge hurdle for me and seemingly hasn't really worked for other folks, so I'm looking forward to sharing that in the Jellyfin subreddit soon. But anyway, this has just been a fun hobby and given me ample opportunities to scratch my brain a bit.
There's nothing really glamorous about my setup but I now have a really functional, easy to use, and easy to maintain home media server that doubles as a broad ad blocker. My family and I have gotten a ton of value out of having our movies digitized and also cut all streaming services as we've taken the opportunity to pick up a bunch of cheap second hand discs. I also pull some videos from YouTube to host locally; the benefit at this point is that my kids are basically 100% shielded from advertisements yet we still have access to virtually everything we all enjoy at home or on the go (thanks, Tailscale). We also take advantage of our local library for books, Blurays, and audiobooks to supplement my self hosting.
I've seen some really elaborate and very cool self-hosted setups on this subreddit, but I felt like sharing mine as an example of a simple setup that just does a few things that improve my family's quality of life without much extra effort.
⬆️ 854 points | 💬 80 comments