Digest: r/selfhosted

ID: digest-selfhosted | Type: digest | Limit: 8 | Status: Enabled

Last Update: 4 days ago | Next Update: 2 days from now


Posts (8)

Digest: r/selfhosted: Mar 13 - Mar 20, 2026

Published: 4 days ago | Author: System

These cameras were supposed to be e-waste. No RTSP, no docs, no protocol anyone's heard of. I reverse-engineered 100 000 URL patterns to make them work.

https://www.reddit.com/gallery/1ruhgeq

Had some old Chinese NVRs from 2016. Spent 2 years on and off trying to connect them to Frigate. Every protocol, every URL format, every Google result. Nothing. All ports closed except 80.

Sniffed the traffic from their Android app. They speak something called BUBBLE - a protocol so obscure it doesn't exist on Google.

Got so fed up with this that I built a tool that does those 2 years of searching in 30 seconds. Built specifically for the kind of crap that's nearly impossible to connect to Frigate manually.

You enter the camera IP and model. It grabs ALL known URLs for that device - and there can be a LOT of them - tests every single one and gives you only the working streams. Then you paste your existing frigate.yml - even with 500 cameras - and it adds camera #501 with main and sub streams through go2rtc without breaking anything.

67K camera models, 3.6K brands.

GitHub: https://github.com/eduard256/Strix

docker run -d --name strix --restart unless-stopped eduard256/strix
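
The discovery flow described above (expand known URL templates for a model, probe each one, keep only the live streams) can be sketched roughly like this. The templates and probe function here are illustrative stand-ins, not Strix's actual database or code:

```python
from typing import Callable, Iterable

# A tiny illustrative subset; a real database would hold thousands of
# patterns per brand. These templates are examples, not taken from Strix.
TEMPLATES = [
    "rtsp://{ip}:554/live/ch{ch}",
    "rtsp://{ip}:554/h264/ch{ch}/main/av_stream",
    "http://{ip}/video.cgi?channel={ch}",
]

def candidate_urls(ip: str, channels: Iterable[int] = (0, 1)) -> list[str]:
    """Expand every template for every channel."""
    return [t.format(ip=ip, ch=ch) for t in TEMPLATES for ch in channels]

def working_streams(ip: str, probe: Callable[[str], bool]) -> list[str]:
    """Keep only URLs the probe reports as live. A real probe would open the
    URL and check for a valid stream; it is injected here so the selection
    logic stays testable without real cameras."""
    return [url for url in candidate_urls(ip) if probe(url)]
```

In the real tool the probe step would run in parallel and the surviving URLs would be merged into frigate.yml as go2rtc streams.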

Edit: Yes, AI tools were actively used during development, like pretty much everywhere in 2026. Screenshots show mock data showing all stream types the tool supports - including RTSP. It would be stupid to skip the biggest chunk of the market. If you're interested in the actual camera from my story there's a demo gif in the GitHub repo showing the discovery process on one of the NVRs I mentioned.

⬆️ 955 points | 💬 123 comments


[Rant] So sick of every other post being blatantly written by AI

This is not about vibe-coded apps. It's about the literal posts. It looks like every other post on here is written by some AI chatbot. Of course, they have been for a while, but is it just me or has it been getting even worse?

I just can't understand it. Why on earth would you generate a /Reddit post/ with AI?

Recently I've been thinking about looking for private communities, but I keep realizing I wouldn't want to join one in the first place. There's tremendous value in having new people be able to participate whenever they want and having a space to ask questions. That's something that needs to be preserved and protected. Especially from the likes of ChatGPT.

This sucks. I don't know how to make it better, and I'm afraid that no-one really does.

Edit: To the people who think there are too many posts complaining about AI: try sorting this sub by New. Those of us who do sort by New filter out the most egregious slop; that's why you're not seeing it.

⬆️ 617 points | 💬 169 comments


My neighbor offered me this as a thank-you because I supported him a lot while he was struggling with depression. What can I do with it? It's an M720Q.

image

⬆️ 826 points | 💬 140 comments


TapMap: see where your computer connects on a world map (open source)

image

I built a small open source tool that shows where your computer connects on a world map.

It reads local socket connections, resolves IP addresses using MaxMind GeoLite2, and visualizes them with Plotly.

Runs locally. No telemetry.

Windows build available.

GitHub:

https://github.com/olalie/tapmap
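
The pipeline described above (enumerate sockets, geolocate the remote IPs, plot) reduces to a small grouping step. This sketch stubs out the OS and GeoIP parts, which in the real tool would be psutil-style socket enumeration and a GeoLite2 reader; the names below are illustrative, not TapMap's actual API:

```python
from collections import Counter

def geolocate_connections(remote_ips, lookup):
    """Count connections per location.

    lookup(ip) should return (lat, lon, country) or None for private or
    unresolvable addresses; a GeoLite2 reader would play this role.
    """
    points = Counter()
    for ip in remote_ips:
        loc = lookup(ip)
        if loc is not None:  # skip LAN/unknown addresses
            points[loc] += 1
    return points  # (lat, lon, country) -> connection count, ready to plot
```

Each resulting (lat, lon) pair and its count would then become a marker on the Plotly world map.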

⬆️ 665 points | 💬 48 comments


We built an open-source headless browser that is 9x faster and uses 16x less memory than Chrome over the network

Hey r/selfhosted,

We've been building Lightpanda for the past 3 years

It's a headless browser written from scratch in Zig, designed purely for automation and AI agents. No graphical rendering, just the DOM, JavaScript (V8), and a CDP server.

We recently benchmarked against 933 real web pages over the network (not localhost) on an AWS EC2 m5.large. At 25 parallel tasks:

  • Memory, 16x less: 215MB (Lightpanda) vs 2GB (Chrome)
  • Speed, 9x faster: 5 seconds vs 46 seconds

Even at 100 parallel tasks, Lightpanda used 696MB where Chrome hit 4.2GB. Chrome's performance actually degraded at that level while Lightpanda stayed stable.

Full benchmark with methodology: https://lightpanda.io/blog/posts/from-local-to-real-world-benchmarks

It's compatible with Puppeteer and Playwright through CDP, so if you're already running headless Chrome for scraping or automation, you can swap it in with a one-line config change:

docker run -d --name lightpanda -p 9222:9222 lightpanda/browser:nightly

Then point your script at ws://127.0.0.1:9222 instead of launching Chrome.
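
For example, the Playwright version of that one-line change might look like the following. The endpoint comes from the post; the rest is an illustrative sketch, not official Lightpanda sample code, and it assumes Playwright is installed:

```python
CDP_ENDPOINT = "ws://127.0.0.1:9222"  # the port published by the docker run above

def fetch_title(url: str, cdp_endpoint: str = CDP_ENDPOINT) -> str:
    # Imported lazily: everything else in this sketch is plain Python.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        # The one-line swap: connect_over_cdp() instead of chromium.launch().
        browser = p.chromium.connect_over_cdp(cdp_endpoint)
        try:
            page = browser.new_page()
            page.goto(url)
            return page.title()
        finally:
            browser.close()
```

Everything downstream of the connect call is unchanged Playwright code, which is what makes the swap a one-line change.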

It's in active dev and not every site works perfectly yet. But for self-hosted automation workflows, the resource savings are significant. We're AGPL-3.0 licensed.

GitHub: https://github.com/lightpanda-io/browser

Happy to answer any questions about the architecture or how it compares to other headless options.

⬆️ 894 points | 💬 76 comments


Booklore is gone.

I was checking their Discord for some announcement and it vanished.

GitHub repo is gone too: https://github.com/booklore-app/booklore

Remember, love AI-made apps… they disappear faster than they launch.

⬆️ 852 points | 💬 469 comments


Open source doesn’t mean safe

As a self-hosted project creator (homarr) I’ve observed the space grow in the past few years and now it feels like every day there is a new shiny selfhosted container you could add to your stack.

The rise of AI coding tools has enabled anyone to make something work for themselves and share it with the community.

Whilst this is fundamentally great, I’ve also seen a bunch of PSAs on the sub warning about low-quality projects with insane vulnerabilities.

Now, I am scared that this community could become an attack vector.

A whole GitHub project, discord server, Reddit announcement could be made with/by an AI agent.

Now, imagine this new project has a docker integration and asks you to mount your docker socket. Suddenly your whole server could be compromised by running malicious code (escaping the container by mounting host system files).

Some replies would be "read the code, it's open source", but if the docker image differs from the repo's source you'd never know unless you manually check the image digest (or manually open the image up).

A takeaway from this would be to set up usage limits and disable auto-refill on every 3rd-party API you use, and to isolate what you don't trust.
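
One concrete way to act on that isolation advice, sketched as a hypothetical compose file (the service name, image, and digest are placeholders, not a real project):

```yaml
services:
  newapp:
    # Pin by digest so a mutable tag can't silently ship different code
    # than the repo you reviewed (compare against `docker inspect` RepoDigests).
    image: example/newapp@sha256:<digest-you-verified>
    read_only: true              # no writable root filesystem
    cap_drop: [ALL]              # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true
    networks: [quarantine]       # keep it away from the rest of your stack
    # Deliberately absent: any /var/run/docker.sock mount.
networks:
  quarantine:
    internal: true               # no outbound internet either
```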

TLDR:

Running an untrusted docker container on your server is not experimentation — it's remote code execution with extra steps (manual AI slop /s)

ps: reference this post whenever someone finds out they’re part of a botnet they joined through a malicious vibe-coded project

⬆️ 743 points | 💬 113 comments


My humble home lab / self-hosted setup

image

In September of last year I started my homelab/self-hosted journey. I bought the following around that time (except the Pi + case, purchased just last month):

Beelink mini PC (N150+16GB RAM) - $175

2x WD Elements 14 TB external HDD - $170/ea

LG external Bluray drive - $130

Raspberry Pi Zero 2W - $15

Case for Raspberry Pi printed at my library - $0.59

The mini PC runs Ubuntu, primarily for Jellyfin but also Pihole and Tunarr (for creating custom TV channels). My Raspberry Pi is my backup DNS for Pihole. The Bluray drive is for ripping our DVD/Bluray/UHD collection (mostly picked up cheap at second hand stores). My Windows PC handles the ripping and any encoding via Handbrake. I save a backup of all my videos on one of the external HDDs; the other HDD is permanently attached via USB to my mini PC and serves as my Jellyfin storage drive. I use WinSCP to send the ripped videos from my Windows PC to my Jellyfin server.

There are some things I can definitely improve, e.g. replacing the external USB drive someday with a server-grade drive. I also may switch to AdGuard from Pihole per a recommendation from a friend but haven't gotten that far yet.

I've learned a ton about using CLI as well as troubleshooting in all senses of the word. I recently figured out how to get audio dramas/podcasts working properly in Jellyfin which has been a huge hurdle for me and seemingly hasn't really worked for other folks, so I'm looking forward to sharing that in the Jellyfin subreddit soon. But anyway, this has just been a fun hobby and given me ample opportunities to scratch my brain a bit.

There's nothing really glamorous about my setup but I now have a really functional, easy to use, and easy to maintain home media server that doubles as a broad ad blocker. My family and I have gotten a ton of value out of having our movies digitized and also cut all streaming services as we've taken the opportunity to pick up a bunch of cheap second hand discs. I also pull some videos from YouTube to host locally; the benefit at this point is that my kids are basically 100% shielded from advertisements yet we still have access to virtually everything we all enjoy at home or on the go (thanks, Tailscale). We also take advantage of our local library for books, Blurays, and audiobooks to supplement my self hosting.

I've seen some really elaborate and very cool self-hosted setups on this subreddit, but I felt like sharing mine as an example of a simple setup that just does a few things that improve my family's quality of life without much extra effort.

⬆️ 854 points | 💬 80 comments


Digest: r/selfhosted: Mar 13 - Mar 13, 2026

Published: 1 week ago | Author: System

No posts in this digest period.

Digest: r/selfhosted: Mar 06 - Mar 13, 2026

Published: 1 week ago | Author: System

Apparently we can't call out apps as AI slop anymore...

image

Seems like a bad direction to take the selfhosted community. Looks like the mod team is fine with this sub being bombarded with insecure AI drivel. I get that it was posted on Friday, but I think if you use AI to "build an app" you should be required to disclose to what extent AI was used, which the OP didn't do. As a community we need to have higher standards for what we allow to be posted: vibe-coded projects can introduce very extensive security vulnerabilities, as we all learned with Huntarr, and when things are vibe-coded the maintainer doesn't have the capability to fix the issue.

⬆️ 1,115 points | 💬 496 comments


Fully remove every, "I created a", "Selfhosted app!" claude slop.

im hating the idea, not the person ;), also look down for a temp solution

Title speaks for itself, almost every single post in the last few weeks is just someone promoting their vibecoded bs app that is either something simple like file transferring (there is already some well trusted ones that are faster better etc.), or something really complicated that ai cant do without security flaws... (Huntarr).

idc how this post looks, how it sounds, if vibecoders get offended, i just want the mods to actually remove this and not just try to "prevent" it with the rules they changed..

upvote if u think so 2 so it gets to the top, in my opinion commenting on someones post saying its slop wont do anything, wont help anyone.

shout out to u/masterio for this:

It's a shame the Vibe Code and Built with AI labels were removed as it made it incredibly easy to filter out these posts with ublock.

! Enough Vibe Coded bullshit
sh.reddit.com,www.reddit.com##shreddit-post:has-text(/.*Vibe Coded \(Fridays!\).*/)
sh.reddit.com,www.reddit.com##shreddit-post:has-text(/.*Built With AI \(Fridays!\).*/)

Another good way of filtering out the AI generated posts is filtering out on the characters that hardly anyone actually uses in casual online postings.

! AI Slop (No you don't really "use" EM dashes in informal discussion online) 
! See:
! https://www.pieceofk.fr/the-rise-of-the-em-dash-in-ecology-abstracts/
! https://www.reddit.com/r/dataisbeautiful/comments/1kfg9b8/oc_em_dash_usage_is_surging_in_tech_startup/
sh.reddit.com,www.reddit.com##shreddit-post:has-text(/—/i)
sh.reddit.com,www.reddit.com##shreddit-comment:has-text(/—/i)

⬆️ 1,202 points | 💬 302 comments


Goodbye Google — I self-host everything now on 4 tiny PCs in a 3D printed rack

After months of planning and building, I finally have a fully self-hosted setup that replaced almost everything I was paying for or trusting to big tech. Put together a video walking through the whole build if anyone's interested.

https://preview.redd.it/87mqpt1utfng1.jpg?width=4000&format=pjpg&auto=webp&s=943578451b7da34e1ef993b177895a85de9bde67

What I replaced:

  • Google Photos → Immich (with Google Coral TPU for face/object recognition)
  • Google Drive / OneDrive → Nextcloud (file sync across all devices)
  • Ring / Nest cameras → Frigate NVR (Coral AI detection + Home Assistant integration)
  • Various streaming → Plex (with full *arr stack)
  • Commercial router → pfSense (firewall, DNS, DHCP, WireGuard VPN, ntopng monitoring)
  • LastPass → Vaultwarden
  • DNS ad blocking → Pfblocker

https://preview.redd.it/wrdq62uztfng1.png?width=2605&format=png&auto=webp&s=6646b0aa7ef45cbbfe99dd104d91ce6bfa581fef

Hardware:

  • 3x Lenovo M720q + 1x M920q (Proxmox cluster + pfSense)
  • Terramaster D5-310 DAS with 42TB raw storage
  • Google Coral USB TPU
  • All mounted in a 3D printed KWS Rack V2 (12U, 10-inch)
  • Total: $3,737 CAD

https://preview.redd.it/eq5eijv8ufng1.png?width=3011&format=png&auto=webp&s=05211a47456506a577ab4ceac9cdbf42b0026d1e

The honest take:
Setup time is real. This isn't a weekend project — it took weeks of configuring, breaking, and fixing. But now everything runs 24/7, I own my data, and the monthly cost is basically just electricity (~$10-15/month).

The biggest win? Immich. Having Google Photos-level search (face recognition, location, object detection) on hardware I own, with zero cloud dependency — that alone justified the build.

Video (full build walkthrough): https://www.youtube.com/watch?v=5cET4sfqdlE&t=2s

I'm a plumber by trade who fell into self-hosting, so if I can set this up, anyone can. Happy to answer questions.

⬆️ 607 points | 💬 147 comments


im tired of this sub

I cant keep up with this sub. i used to love just being able to browse and find some really awesome projects that have really changed my life. its not an exaggeration at all: as an IT person, this place has opened my eyes and has let me discover peace in todays fast paced world where everything is about subscriptions and our private data. selfhosting allowed me to slow down and take a breath. i have built servers, deployed countless ideas, and for a moment i finally felt like im free of every corporate bullshit out there.

after all this, the reason im writing is the amount of posts that are influenced by ai. dont get me wrong, i can think of it like any other handy tool, but thats only my view and current trends seemingly dont align with it, because there are so many new projects popping up i cant even keep up. It seems like every day some random user reinvents the wheel with their low quality vibecoded project and spams the whole sub with it, and thats not good. Its not the fault of ai sadly, its the human behind it; you can elevate your efficiency with ai and still be trusted in my opinion, its about how much you actually care. If i see someone post a fully ai generated marketing letter and then i see that the projects whole git history is basically claude vibing… that someone probably doesnt really care and just wants attention or fame. If you are that person, let me tell you: if you want those meaningless github stars, then create something that you feel you can put lots of effort into, dont just vibecode something in a day since we can do that too, thats not really adding any value.

tl;dr: if your project is using ai then at least put an ai disclaimer in your posts…

⬆️ 824 points | 💬 266 comments


Found an old NAS in a box in the basement!

https://www.reddit.com/gallery/1roa5te

Forgot I bought this about a decade ago. Currently don’t have a NAS so I think I’ll get it set up to see how it works. 6TB of WD red drives is like finding gold in these times!

⬆️ 484 points | 💬 42 comments


TrueNAS build system going closed source

Readme updated today:

This repository is no longer actively maintained.

The TrueNAS build system previously hosted here has been moved to an internal infrastructure. This transition was necessary to meet new security requirements, including support for Secure Boot and related platform integrity features that require tighter control over the build and signing pipeline.

No further updates, pull requests, or issues will be accepted. Existing content is preserved here for historical reference only.

https://github.com/truenas/scale-build

Wondering if this is just the first step towards pulling a MinIO in the future.

⬆️ 504 points | 💬 255 comments


This has to be the craziest changelog I've ever seen

image

https://github.com/Sportarr/Sportarr/releases/tag/v4.0.985.1060

⬆️ 575 points | 💬 138 comments


Why does a simple, free, self hosted file storage platform not exist?

I've tried everything from Nextcloud, ownCloud, OpenCloud, and Pydio Cells. But I still can't seem to find exactly what I'm looking for, and I'm wondering why it doesn't already exist. File storage is (in my opinion) one of the most helpful use cases for a self-hosting setup, but I don't understand why there isn't a self hosted cloud storage platform that:

  • is cross-platform
  • has relatively low resource usage
  • uses a flat file structure, not S3-style blobs
  • handles thumbnailing for more file types than just images
  • has virtual filesystems OR selective sync for common operating systems
  • has decent sharing or multi-user tools
  • has good upload and download speeds

Essentially, I don't understand why a fully self-hostable and user-friendly Google Drive alternative doesn't exist. I'm a developer and I understand that it would obviously be a large undertaking to build, but it's a type of software that's very common for self-hosters and I don't see why a better option doesn't exist than the established players. NextCloud is too heavy/is trying to do too much, ownCloud is too corporate and a pain to maintain (plus the interface is crap), Pydio is good but the client apps (aside from the web app) are horrendous, Seafile is limited to blobs and is slightly proprietary, FileRun is paid, etc. Just seems to me like a major gap in the space. Anyone have any insight on why something like this doesn't exist?

⬆️ 458 points | 💬 422 comments


Digest: r/selfhosted: Feb 27 - Mar 06, 2026

Published: 2 weeks ago | Author: System

This will be interesting to self-host.

image

When I bought my first GoPro (Hero 8) I also bought a 256 GB micro SD card and GoPro's cloud storage subscription for $5/month. I rode my bicycle around town and to work every day, went to family outings at the lake, had conversations with friends I just don't talk to anymore (one is dead), and captured experiences I just don't have anymore. I just press record, either mount my GoPro somewhere or strap it to my head, and forget about it. Eventually I got the media mod that exposed the charging port, bought a 30,000 mAh battery, and ran a long USB-C cable from the battery in my backpack to the camera on my head/helmet, so I was able to record for literally hours.

All that changed when I found out that GoPro uses AWS for its cloud storage. Now I'm figuring out how to get this kind of storage as fast as possible, and I need to do this preferably before GoPro collapses as a company.

⬆️ 681 points | 💬 198 comments


Today is digital Independence day!

image

Social media is one of the most valuable data points collected about us, so it's time to fundamentally reject surveillance capitalism and switch to self-hostable, open source and decentralized social media.

That's exactly what the fediverse is. The linked image gives an overview of some of the networks out there that are similar to the platforms you are already used to. If you want to learn more about how the fediverse works, look here.

Digital Independence Day is all about taking small steps and trying to switch away one service at a time. You don't have to fully commit to a service; just try it out and see if you like it. The fediverse as a whole is constantly growing, and especially the stuff you find on piefed / lemmy these days is often really interesting. You will find some niche communities if you look around a bit. If you want to learn more about Digital Independence Day, look at di.day.

Edit: If you are interested in some niche, fun and chill piefed / lemmy communities, here are some examples you could look at: https://lemmy.ca/c/shittyfoodporn, https://europe.pub/c/HorseMemes, https://lemmy.world/c/superbowl, https://lemmy.ca/c/trippinthroughtime, https://lemmy.world/c/animalswithjobs, https://lemmy.world/c/comicstrips

⬆️ 663 points | 💬 137 comments


Grafana dashboard to tell me how expensive my hobby is

image

Over the last couple of months I have used Claude Code extensively to build out a robust Kubernetes cluster on the Proxmox servers in my rack. I had it set up a nice logging and observability stack with Loki, Prometheus, and Grafana.

Recently I had the bright idea to have Claude create a dashboard showing the power usage stats for the 3 Dell servers and my 2 UPS units. I gave it a screenshot of my power bill and it calculated the per-kilowatt-hour cost to give me daily and month-to-date cost estimates.
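
The arithmetic behind panels like that is simple; here is a back-of-the-envelope version (the wattage and rate values in the example are placeholders, not the OP's numbers):

```python
def daily_cost(avg_watts: float, rate_per_kwh: float) -> float:
    """Cost of running a device for 24h at a steady average draw."""
    kwh_per_day = avg_watts * 24 / 1000  # watts -> kWh over one day
    return kwh_per_day * rate_per_kwh

def month_to_date_cost(daily_watt_averages, rate_per_kwh: float) -> float:
    """Sum the daily costs observed so far this month."""
    return sum(daily_cost(w, rate_per_kwh) for w in daily_watt_averages)
```

For example, a rack averaging 300 W at $0.15/kWh burns 7.2 kWh, or about $1.08, per day.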

And yes, I have a new set of batteries on the way for the SU1500.

⬆️ 624 points | 💬 80 comments


why the hell do you all just give away this awesome shit for free?

first off, thank you. legitimately. i work in finance. i have zero technical expertise in this area, but y'all have made this so fucking simple that even a dumbass like me can selfhost a server with a bunch of rad life-improving tools. and this community has been really great, both to follow and for help/support.

but why the hell do you all just give these things away for free? i ask this as a genuine question. i don't really understand how this works.

-is it career development? does writing/maintaining/contributing to open source projects help pad resumes?

-i know a lot of projects have a small group of dedicated maintainers, but there are a lot of projects where thousands of people have made contributions. is contributing actually easy for someone with your skill set? i understand building something from the ground up is a significant investment. and i understand that everyone has competencies and proficiencies in their respective fields. but all of this is greek to me. how difficult is it for those of you who are technically skilled in this area to make bug fixes or other contributions?

-separately, what motivates you to do that for free? or are there a lot of people who are employed by companies that rely on open source projects that pay their devs and engineers to maintain upstream products as well?

-how much of this is companies getting people to try their product at home and then advocate for it in the office when they see its benefits?

i live near the trailhead of an awesome group of hiking/mtb trails. i will go out occasionally with a group once or twice a year to do some trail maintenance. is it anything like that?

all of this to say, i have no idea why you all do this, but i am sincerely grateful. i've tried to buy a coffee for almost every major project i use, but that feels like small gratitude for what i've got in return. this is such a fun hobby, one i never would've guessed would even be possible for someone with my background and limited capability, but it's captured me like nothing else really. so thank you to everyone!

⬆️ 713 points | 💬 343 comments


Happiness from 1st Outage!

So I've been hosting stuff for about 4 months now. I've been very generous with allowing folks access to my Audiobookshelf specifically (I've probably made about 20 users). I hadn't heard much from anyone and figured the enthusiasm for my new hobby was kind of fake.

Fast forward to today: I start an OS update and it goes squirrelly on me (TrueNAS 25.04.1 -> 25.04.2.6). Portainer breaks. All my docker containers break. I start trying to rebuild things and pick through logs.

My server is down less than 10 minutes and I get my first text. Then a few minutes later I get my second text, then a third... turns out this outage disrupted a total of 5 of my friends who were listening to some books!

I felt overjoyed that other people are actually using what I'm hosting! It was a moment of validation with all that I'm doing. It felt awesome.

Everything is back up and running now and I have happy users, but it was just very validating because I thought I was the only person using any of my self-hosted services and it turns out I wasn't! Anyone else have a happy little accident like this?

⬆️ 562 points | 💬 43 comments


Warning: SimpleLogin (Proton) is locking paid accounts for using alternative email infrastructure

posting this here because my post on the simplelogin sub wasn't approved by their mods (shocker).

wtf is going on with proton/simplelogin? I’m a paying pro user. my custom domain's destination inbox is hosted on forwardemail.net (i pay for their encrypted IMAP storage).

support recently refused to let me update my mailbox, claiming forward email is a "temporary/burner" relay service that causes mail loops. I explained to them that i OWN the domain, and it's my permanent inbox, not a relay. just because my provider offers aliases doesn't make my personal domain a burner. by that logic, they should ban gmail too.

instead of actually reading my ticket, their "anti-abuse" team just DISABLED my account completely. locked me out of my own data.

now my yearly subscription billing just bounced, and I literally cannot log in to update my payment or export my aliases so i can migrate to addy.io. password resets do nothing. my account is basically held hostage.

this is ridiculous. they are punishing power users for using alternative/open-source email infrastructure just because it's not a mainstream giant like gmail or protonmail. their job is literally to route email, and they're banning my inbox provider.

if you use a niche provider or self-host your destination inbox behind simplelogin, make sure you have regular backups of your aliases. they will just lock you out if their lazy automated filters decide they don't like your MX records. fuck this vendor lock-in bs.

⬆️ 519 points | 💬 81 comments


I open-sourced a directory of 450+ self-hostable alternatives to popular SaaS with Docker Compose configs

Hey r/selfhosted,

I've been building The AltStack - an open-source directory of 450+ tools across 28 categories that you can self-host. Every tool is vetted for quality and activity.

What makes it different from other lists:

  • 56 tools have ready-to-use Docker Compose configs - literally copy, paste, docker compose up
  • Side-by-side comparisons (e.g. Supabase vs Appwrite vs PocketBase)
  • Savings calculator showing how much you burn on SaaS per year
  • Best-of rankings per category based on GitHub stars and community health

Categories include: BaaS, Analytics, Project Management, CRM, Communication, DevOps, Monitoring, AI Models, and 20 more.

The entire dataset is open source under Apache 2.0: https://github.com/altstackHQ/altstack-data

Live site: https://thealtstack.com

Would love feedback from the community. What tools or categories are we missing?

⬆️ 584 points | 💬 96 comments


BookLore 2.0 is out! Audiobooks, multi-format books, overhauled readers, and a lot more..

BookLore 2.0 is here.

For the unfamiliar, BookLore is a self-hosted digital library for ebooks, comics, and now audiobooks, with multi-user support, smart shelves, metadata fetching, Kobo/KOReader sync, OPDS, and built-in readers.

Also, we just hit 10K stars on GitHub, which is wild. Huge thanks to everyone who's contributed code, bug reports, and feedback over the past year. If you want to support the project, you can sponsor on Open Collective or Ko-fi.

Here's what's new:

Multi-Format & Audiobook Support

  • A single book entry can now hold EPUB, PDF, CBZ, and audiobook formats all under one roof. No more duplicate entries for different formats.
  • Dedicated audiobook player with streaming playback, session tracking, narrator metadata, and sidebar filtering by narrator.

Reader Upgrades

  • PDF reader: annotations (highlights and notes), dark/light mode, range streaming for large files
  • Ebook reader: fullscreen, keyboard shortcuts, search, go-to-percentage navigation, text copy
  • Comic reader: fullscreen, slideshow mode, RTL reading direction, long strip mode, keyboard navigation
  • Fully bidirectional and user-scoped, reading progress syncs both ways

And a lot more:

  • 10 new statistics charts (reading pace, distributions, heatmaps, completion tracking, etc.)
  • Dedicated series browser with search, filtering, and sorting
  • Duplicate book detection and merging
  • Shelves and magic shelves sync as Kobo tags
  • Annotation notebook collecting all your highlights in one place
  • Audit logging for admin actions
  • Author bios and images via Audnexus
  • Sidecar metadata file support (.metadata.json)
  • Content restrictions (age and content ratings) for magic shelves
  • 15+ languages supported through Weblate
  • Upgraded to Java 25, Spring Boot 4, removed nginx (Angular served directly from Spring Boot)
  • Bookdrop folder polling, login rate limiting, upload progress tracking, multi-field sorting, and a long list of bug fixes

A note on the v2 launch

Apologies to those who ran into issues during the initial v2.0.0 rollout. Some setups broke due to file permission problems, especially for Portainer, Unraid, and Proxmox users. In v1.x the Docker container ran as root because the embedded nginx required it. In v2 I removed nginx to pave the way for a rootless container, but I overlooked some edge cases in that transition. With 200+ commits and just me doing all the manual regression testing, I missed some critical areas. That's on me.

Things are in a much better place now with v2.0.4. If you're upgrading and still hit permission issues, make sure your data, bookdrop, and book folder permissions match the user/group set in your Docker Compose file. You can also check GitHub Issues and Discord where others have posted fixes. I'd also really appreciate help with testing before future major releases, so if you're interested, hop over to Discord.

Try it out: demo.booklore.org (user: booklore, pass: 9HC20PGGfitvWaZ1)

If you're enjoying BookLore, a star on GitHub helps more people find it.

FAQ

Q: You use a lot of emojis and ship features fast. Is this vibe coded? No, I just like emojis 😄. BookLore is AI-assisted, not vibe coded. I've been building software for years, going back to early Android (anyone remember Cupcake, Donut, Eclair?) and late Symbian. The project started before AI tooling went mainstream, so all the architecture you see was built by hand. You're in good hands.

Q: Any plans for mobile apps? Yes! I've been prototyping iOS and Android apps for a few months and they're getting close. No ETA yet, but actively in the works.

Q: BookLore uses a lot of RAM, any way to reduce it? That's a tradeoff of running on the JVM, but I think the benefits outweigh it. Java has also been making strides in memory efficiency lately. In v2 I added a flag that should bring RAM down by roughly 10-15%.

Q: Any plans to move to PostgreSQL? Not at this time. BookLore is deeply tied to MariaDB and migrating would be extremely disruptive given the user base we have now.

Q: What are your future plans for BookLore? Short term, stabilizing the v2 release. After that, maybe some social features? I'd love to hear what you all want. Come hang out on Discord or drop suggestions in GitHub Issues.

https://preview.redd.it/hh6lb7f4m5mg1.png?width=3474&format=png&auto=webp&s=7f7e4d2a28c39baa60689f055a6ced64dc9c3265

https://preview.redd.it/6ki8je45m5mg1.png?width=784&format=png&auto=webp&s=00c37f7b5bc8a72c1763ad8a86c34f9ef9ae9f28

https://preview.redd.it/1qpsri2dn5mg1.png?width=1520&format=png&auto=webp&s=c63c982cdabf1d4b600a3e839447af5e46414761

⬆️ 459 points | 💬 105 comments


Digest: r/selfhosted: Feb 20 - Feb 27, 2026

Published: 3 weeks ago | Author: System

Huntarr - Your passwords and your entire arr stack's API keys are exposed to anyone on your network, or worse, the internet.

Today, after raising security concerns in a post on r/huntarr regarding the lack of development standards in what looks like a 100% vibe-coded project, I was banned. This made my spidey senses tingle, so I decided to do a security review of the codebase. What I found was... not good. TLDR: If you have Huntarr exposed on your stack, anyone can pull your API keys for Sonarr, Radarr, Prowlarr, and every other connected app without logging in, gaining full control over your media stack.

The process

I did a security review of Huntarr.io (v9.4.2) and found critical auth bypass vulnerabilities. I'm posting this here because Huntarr sits on top of (and is now trying to replace them as well!) Sonarr, Radarr, Prowlarr, and other *arr apps that have years of security hardening behind them. If you install Huntarr, you're adding an app with zero authentication on its most sensitive endpoints, and that punches a hole through whatever network security you've set up for the rest of your stack.

The worst one: POST /api/settings/general requires no login, no session, no API key. Nothing. Anyone who can reach your Huntarr instance can rewrite your entire configuration and the response comes back with every setting for every integrated application in cleartext. Not just Huntarr's own proxy credentials - the response includes API keys and instance URLs for Sonarr, Radarr, Prowlarr, Lidarr, Readarr, Whisparr, and every other connected app. One curl command and an attacker has direct API access to your entire media stack:

curl -X POST http://your-huntarr:9705/api/settings/general \
  -H "Content-Type: application/json" \
  -d '{"proxy_enabled": true}'

Full config dump with passwords and API keys for every connected application. If your instance is internet-facing - and it often is, Huntarr incorporates features like Requestarr designed for external access - anyone on the internet can pull your credentials without logging in.

Other findings (21 total across critical/high/medium):

  • Unauthenticated 2FA enrollment on the owner account (Critical, proven in CI): POST /api/user/2fa/setup with no session returned the actual TOTP secret and QR code for the owner account. An attacker generates a code, calls /api/user/2fa/verify, enrolls their own authenticator. Full account takeover, no password needed.
  • Unauthenticated setup clear enables full account takeover (Critical, proven in CI): POST /api/setup/clear requires no auth. Returns 200 "Setup progress cleared." An attacker re-arms the setup flow, creates a new owner account, replaces the legitimate owner entirely.
  • Unauthenticated recovery key generation (Critical, proven in CI): POST /auth/recovery-key/generate with {"setup_mode": true} reaches business logic with no auth check (returns 400, not 401/403). The endpoint is unauthenticated.
  • Full cross-app credential exposure (Critical, proven in CI): Writing a single setting returns configuration for 10+ integrated apps. One call, your entire stack's API keys.
  • Unauthenticated Plex account unlink - anyone can disconnect your Plex from Huntarr
  • Auth bypass on Plex account linking via client-controlled setup_mode flag - the server skips session checks if you send {"setup_mode": true}
  • Zip Slip arbitrary file write (High): zipfile.extractall() on user-uploaded ZIPs without filename sanitization. The container runs as root.
  • Path traversal in backup restore/delete (High): backup_id from user input goes straight into filesystem paths. shutil.rmtree() makes it a directory deletion primitive.
  • local_access_bypass trusts X-Forwarded-For headers, which are trivially spoofable - combine with the unauth settings write and you get full access to protected endpoints
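To make the Zip Slip finding concrete, here is a minimal, self-contained sketch (not Huntarr's code; the helper name and paths are illustrative) of the defense-in-depth check that was missing: validate each entry name before writing, so a traversal name like `../evil.txt` can never escape the extraction directory.

```python
import io
import os
import zipfile

def safe_extract(zf: zipfile.ZipFile, dest: str) -> None:
    """Extract an archive, rejecting any entry that resolves outside dest."""
    dest_real = os.path.realpath(dest)
    for member in zf.infolist():
        target = os.path.realpath(os.path.join(dest_real, member.filename))
        # An entry named "../evil.txt" resolves outside dest -> Zip Slip.
        if not target.startswith(dest_real + os.sep):
            raise ValueError(f"blocked path traversal: {member.filename!r}")
        zf.extract(member, dest_real)

# Build a malicious archive in memory with a single traversal entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("../evil.txt", "pwned")

with zipfile.ZipFile(buf) as zf:
    try:
        safe_extract(zf, "/tmp/extract-demo")
        blocked = False
    except ValueError:
        blocked = True

print(blocked)  # True - the traversal entry was rejected, nothing was written
```

The same "resolve, then compare against the destination root" pattern also covers the backup_id path traversal finding: any user-supplied name must be resolved and checked before it reaches the filesystem or shutil.rmtree().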

How I found this: basic code review and standard automated tools (bandit, pip-audit) - the kind of checks any maintainer should be running. The auth bypass isn't a subtle bug: auth.py contains an explicit whitelist that skips auth for /api/settings/general. Authentication on the most sensitive endpoint simply isn't there.
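This bug class is easy to show in the abstract. A hypothetical sketch (the names are illustrative, not Huntarr's actual auth.py) of how a public-path whitelist becomes an auth bypass the moment a sensitive route lands in it:

```python
# Hypothetical reconstruction of the bug class described above: a
# request-level auth hook skips any path found in a whitelist, so
# whitelisting a credential-bearing endpoint silently disables auth on it.

PUBLIC_PATHS = {
    "/login",
    "/static/app.css",
    "/api/settings/general",  # the bug: a credential-bearing endpoint
}

def requires_auth(path: str) -> bool:
    """Return True when a request to `path` must present a valid session."""
    return path not in PUBLIC_PATHS

print(requires_auth("/api/settings/general"))  # False - sails through unauthenticated
print(requires_auth("/api/user/profile"))      # True - auth enforced
```

The fix is structural, not a one-liner: whitelists like this should only ever contain genuinely public assets, and any endpoint that reads or writes configuration needs an explicit auth check that a reviewer can point at.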

About the maintainer and the codebase:

The maintainer says they have "a series of steering documents I generated that does cybersecurity checks and provides additional hardening" and "Note I also work in cybersecurity." They say they've put in "120+ hours in the last 4 weeks" using "steering documents to advise along the way from cybersecurity, to hardening, and standards". If that's true, it's not showing in the code.

If you work in cybersecurity, you should know not to whitelist your most sensitive endpoint as unauthenticated. You should know that returning TOTP secrets to unauthenticated callers is account takeover. You should know zipfile.extractall() on untrusted input is textbook Zip Slip. This is introductory stuff. The "cybersecurity steering documents" aren't catching what a basic security scan flags in seconds.

Look at the commit history: dozens of commits with messages like "Update", "update", "Patch", "change", "Bug Patch" - hundreds of changed files in commits separated by a few minutes. No PR process, no code review, no second pair of eyes - just raw trunk-based development where 50 features get pushed in a day with zero review. Normal OSS projects are slower for a reason: multiple people look at changes before they go in. Huntarr has none of that.

When called out on this, the maintainer said budget constraints: "With a limited budget, you can only go so far unless you want to spend $1000+. I allot $40 a month in the heaviest of tasks." That's just not true - you can use AI-assisted development 8 hours a day for $20/month. The real problem isn't the budget. It's that the maintainer doesn't understand the security architecture they're building and doesn't understand the tools they're using to build it. You can't guide an AI to implement auth if you don't recognize what's wrong when it doesn't.

They also censor security reports and ban people who raise concerns. A user posted security concerns on r/huntarr and the post was removed by the moderator - the maintainer controls the subreddit. I was banned from r/huntarr after pointing out these issues in a thread where the maintainer claimed to work in cybersecurity (a claim they have since deleted).

One more thing - the project's README has a "Support - Building My Daughter's Future" section soliciting donations. That's a red flag for me. You're asking people to fund your development while shipping code with 21 unpatched security vulnerabilities, no code review process, and banning people who point out the problems, while doing an appeal to emotion about your daughter. If you need money, that's fine - but you should be transparent about what you're spending it on and you should be shipping code that doesn't put your users at risk.

Proof repo with automated CI: https://github.com/rfsbraz/huntarr-security-review

Docker Compose setup that pulls the published Huntarr image and runs a Python script proving each vulnerability. GitHub Actions runs it on every push - check the workflow results yourself or run it locally with docker compose up -d && python3 scripts/prove_vulns.py.

For what it's worth, and to prove I'm not an AI hater, the prove_vulns script itself was vibe coded - I identified the vulnerabilities through code review, wrote up the repro steps, and had AI generate the proof script.

Full security review (21 findings): https://github.com/rfsbraz/huntarr-security-review/blob/main/Huntarr.io_SECURITY_REVIEW.md

What happens next: The maintainer will most likely prompt these problems away - feed the findings to an AI and ship a patch. But fixing 21 specific findings doesn't fix the process that created them. No code review, no PR process, no automated testing, no one who understands security reviewing what ships. The next batch of features will have the next batch of vulnerabilities. This is only the start. If the community doesn't push for better coding standards, controlled development, and a sensible roadmap, people will keep running code that nobody has reviewed.

If you're running Huntarr, keep it off any network you don't fully trust until this is sorted. The *arr apps it wraps have their own API key auth - Huntarr bypasses that entirely.

Please let others know about this. If you have a Huntarr instance, share this with your community. If you know someone who runs one, share it with them. The more people know about the risks, the more pressure there will be on the maintainer to fix them and improve their development process.

⬆️ 974 points | 💬 123 comments


This is how I feel, but the only thing I do is copy docker-compose.yml and run up -d

image

⬆️ 2,883 points | 💬 150 comments


Large US company came after me for releasing a free open source self-hostable alternative!

TL;DR: I made an open-source, local-first dashboard for drone flight logs because the biggest corporate player in the space locks your older data behind a paywall. They found my GitHub, tracked my Reddit posts, and hit me with a legal notice for "unfair competition" and trademark infringement.

Long version: I maintain a few small open-source projects. About two weeks ago, I released a free, self-hostable tool that lets drone pilots collect, map, and analyze their flight logs locally. I didn't think much of it, just a passion project with a few hundred users.

I can’t name the company (let's call them "Company A") because their legal team is actively monitoring my Reddit account and cited my past posts in their notice. Company A is the giant in this space. Their business model goes like this:

  • You can upload unlimited flight logs for free.
  • BUT you can only view the last 100 flights.
  • If you want to see your older data, you have to pay a monthly subscription and a $15 "retrieval fee."
  • Even then, you can't bulk download your own logs. You have to click them one by one. They effectively hold your own data hostage to lock you into their ecosystem. I'm not sure they're even GDPR compliant in the EU.

To help people transition to my open-source tool, I wrote a simple web-based script that allowed users to log into their own Company A accounts and automate the bulk download of their own files. Company A did not like this. They served me with a highly aggressive, 4-page legal demand (a cease-and-desist notice). They forced me to:

  1. Nuke the automated download tool entirely from GitHub.
  2. Remove any mention of their company name from my main open-source project and website (since it’s trademarked). I originally had my tagline as "The Free open-source [Company A] Alternative," which they claimed was illegally driving their traffic to my site.
  3. Remove a feature comparison chart I made. (I admittedly messed up here, I only compared my free tool to their paid tier and omitted their limited free tier, which they claimed was misleading and defamatory).

I'm just a solo dev, so I complied with the core of their demands to stay out of trouble. I scrubbed their name, took down the downloader, and sanitized my website. My main open-source logbook lives independently of them.

I admit I was naive about the legal aspects of comparison marketing and using trademarked names. But the irony is that they probably spent thousands of dollars on lawyer fees to draft a threat against my small project that makes close to zero money (I got a few small donations from happy users).

Has anyone else here ever dealt with corporate lawyers coming after your self-hosted/FOSS projects? It’s a crazy initiation :)

⬆️ 1,426 points | 💬 234 comments


Update : Large US company came after me for releasing a free open source self-hostable alternative - Resolved in our favor

This is a follow up to my previous post regarding the C&D notice I received. I have some incredible news for the community: the matter is officially resolved in favor of the entire drone community.

TLDR: AirData UAV has complied with community concerns, implemented a robust data takeout solution, and we have settled the matter gracefully.

The free OSS project in question : www.opendronelog.com

---------------

Since the legal threat is no longer active, I can finally name the company. It was AirData UAV, a US-based drone log analysis and reporting service. Eran said it was my choice whether to name them in this update post. I chose to name them, because I no longer have anything bad to say.

Although their first approach was a C&D, the final outcome was actually better than I hoped for (I was genuinely surprised!). A massive thank you goes to u/Archiver_test4, who acted as my legal representative pro bono (for free, and he declined donations!). He prepared a powerful response and helped me get through this with confidence. He has even started a new subreddit, r/Opensource_legalAid, to help other indie devs in similar situations.

The Meeting with the Airdata UAV CEO Eran Steiner

In response to the traction the original post gained, AirData CEO Eran Steiner reached out by email for a face-to-face meeting within 6 hours of the post going live. He expressed regret over the legal route they initially took (he took responsibility for that himself, as CEO) and personally saw to it that the following changes were made before we even spoke:

  • Official Data Takeout Solution: This was the main goal (and my core demand - data portability and fairness, because it's painful to export files one by one, clicking and waiting). AirData UAV now provides a central takeout solution, making them fully GDPR compliant. You can now download your data in its original format without needing my 3rd-party automation "patch". If you are interested, please check it out here.
  • Trademark Resolution: We agreed that fair representation and disclaimers are the way to go. I have already added these to my project, and I am free to use their name when representing truthful facts, as permitted by EU laws. I won't go into more technical/legal aspects than this of what trademark rights they actually hold or not.
  • Account Restoration: As a gesture of goodwill, they have fully restored my account and all my log files before I asked. ❤️
  • We agreed to drop all allegations and, in the future, talk through any issues personally rather than involving lawyers.

I am just a solo dev working in my free time, and I have no intention of competing with an established company. I am just thrilled that the community now has true data portability, as I hoped for, and everyone is free to choose whichever features and interface they like. Thank you Eran for making this happen so quickly, without any drama, delay, or missed promise. AirData no longer "holds your data" to keep you on their platform. To be fair, they have a functional, data-rich toolset that many in the community still enjoy (including myself!), and their data sync solution works very well. I am not paid, bribed, or sponsored by them - I am just giving credit where it's due.

Thank you r/selfhosted for all for the support. It made all the difference! Open Source for the WIN!

⬆️ 1,413 points | 💬 62 comments


What 15,000 lines of YAML/CSS can do on Home Assistant

image

⬆️ 609 points | 💬 71 comments


The Huntarr Github page has been taken down

Edit TLDR: Tracking the fallout from https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_stacks/

Maybe a temporary thing due to likely brigading, but quite concerning:

https://github.com/plexguide/Huntarr.io (https://archive.ph/fohW5)

Same with docs:

https://plexguide.github.io/Huntarr.io/index.html (https://archive.ph/UYgBc)

Additionally the subreddit has been set to private:

https://www.reddit.com/r/huntarr/ (https://archive.ph/d2TR2)

Edit: Also, the maintainer has deleted their reddit account:

https://www.reddit.com/user/user9705/ (https://archive.ph/u2c7u)

The docker images still exist for now:

https://hub.docker.com/r/huntarr/huntarr/tags (https://archive.ph/L1wmW)

Wasn't a member, but looks like the discord invite link from inside the app is invalid:

https://discord.com/invite/PGJJjR5Cww (https://archive.ph/M4bnD)

Edit: adding archive links for posterity

The GitHub Org https://github.com/orgs/plexguide/ (https://archive.ph/D5FGh) has been renamed to 'Farewell101' https://github.com/Farewell101 (https://archive.ph/4LE6k) - ty u/SaltyThoughts (https://www.reddit.com/r/selfhosted/comments/1rcmgnn/comment/o6zape9/)

And now the renamed 'Farewell101' GitHub org (https://github.com/Farewell101) is also down and 404ing, per u/basketcase91

Maintainer's GitHub account is still up for now https://github.com/Admin9705 (https://archive.ph/lUR4E), but he's actively deleting or privating other repos.

Edit: And the main maintainer's GitHub account is now removed/renamed and 404ing

GitHub account just renamed to https://github.com/RandomGuy12555555 (https://archive.ph/MOh9L) - you can follow the journey with `gh api user/24727006`, and the org with `gh api orgs/62731045` - jfuu_

Edit: Removed from the Proxmox Community Helper scripts, https://github.com/community-scripts/ProxmoxVE/discussions/12225, https://github.com/community-scripts/ProxmoxVE/pull/12226 - Pseudo_Idol

⬆️ 978 points | 💬 298 comments


I got tired of naming my scanned documents so i built this !

image

Hello guys, I wanted to show my project here because it might interest some people, and I think it solves a real problem. Naming scanned documents is a real chore nowadays, and it's painful both at home and at the office.

So basically, it receives documents via FTP from your network scanner, then processes them using Vision AI to analyze the contents. It generates smart filenames using AI, and automatically uploads everything to cloud storage via WebDAV. (Going to add more protocols in the future)

It also supports Docker, so you can deploy it easily with just a few commands. I've been using it myself, and it's saved a lot of time organizing scanned documents. The project is fully open-source - there is no paid plan or anything, and you host it yourself. Feel free to open issues if you find any problem, and don't hesitate to contribute.

EDIT: Forgot to mention it's fully offline
EDIT2: The AI part is offline and the cloud is offline too if you self-host it 💀
EDIT3: Forgot to add the link (i'm tired sorry guys) : https://github.com/SystemVll/Montscan
EDIT4: Thank you for all your replies and everything, didn't think I'd get this much engagement 😭

⬆️ 604 points | 💬 75 comments


Why do we still rely on IPv4, instead of IPv6?

I have recently started my self-hosting journey. I turned my old laptop into an Ubuntu home server which hosts Nextcloud, Vaultwarden, Pi-hole, and Jellyfin.

I hit a roadblock while trying to expose the services to the internet, because I use JioFiber and they employ CGNAT. I considered getting a public IP (costs money + hassle), using a VPN (friends outside the network can't use it), or using Cloudflare (privacy risk).

Then I stumbled upon using only an IPv6 address. It was a win for sure!

  • No port forwarding
  • Avoids bot scans
  • More static than IPv4, no need for DDNS (can use dynv6 if needed)

Why do we keep using IPv4?

Has anyone tried using only IPv6 and come across any limitations?

⬆️ 567 points | 💬 504 comments


Digest: r/selfhosted: Feb 13 - Feb 20, 2026

Published: 1 month ago | Author: System

Any teamspeak alternatives open source for self hosting?

image

Context is the image. I am honestly fed up with big corporate data hoarding.

⬆️ 3,317 points | 💬 446 comments


Seerr is finally out!

Seerr is the new unified successor to Overseerr + Jellyseerr. The two teams have merged into one project + one shared codebase, combining all existing Overseerr functionality with the latest Jellyseerr features, including Jellyfin + Emby support.

Highlights

  • Jellyfin + Emby support (alongside Plex)
  • Optional PostgreSQL support (in addition to SQLite)
  • Blocklist (movies/series/tags) + Override rules for smarter request defaults
  • TVDB metadata support (experimental) + TVDB indexer
  • DNS caching (experimental) to reduce DNS spam (Pi-hole/AdGuard friendly)
  • Dynamic placeholders in webhook URLs
  • Notification QOL (e.g., optional embedded posters) + lots of bug fixes

Migrating from Overseerr/Jellyseerr

You must follow the migration guide linked below carefully. BACK UP FIRST so you can roll back if needed.

Release notes: https://github.com/seerr-team/seerr/releases/tag/v3.0.0

Release announcement: https://docs.seerr.dev/blog/seerr-release
Migration guide: https://docs.seerr.dev/migration-guide

If you hit any issues during upgrade/migration, please report them in our Discord (with steps/logs) and we’ll help you out!

⬆️ 741 points | 💬 103 comments


[Update] bought 2 dying 18TB Seagate Exos drives from Vinted, both still under warranty

image

So 2 weeks ago I posted about my risky move: I bought two dying HDDs from Vinted that were still under warranty and sent them to Seagate for replacement.

579 people voted, and almost 50% thought I wouldn't get a replacement.

I'm happy to say that Seagate has sent two replacement HDDs in perfect health 😎

⬆️ 664 points | 💬 24 comments


I'm so tired

https://www.reddit.com/gallery/1r79gut

SAAS. The Warner Brothers acquisition. Ads. I'm so tired of it all.

Now it's been a month and a half since i started work on this humble home server.

It currently consists of:

… an HP EliteDesk 800 G3

  • CPU: i5 7500 3.8 GHz
  • RAM: 16 GB DDR4
  • SSD: 256 GB M.2 + 4 TB 2.5"

… running Arch Linux

  • yes

… hosting a Jellyfin stack

  • for my Linux ISOs

... inside Docker containers

… which I, my gf, and family connect to through Tailscale

Edit: The Arch Linux pain is brutally overexaggerated in my limited experience. Do correct me if you've ever had a basic Docker setup break on an update.

⬆️ 588 points | 💬 144 comments


Discord Alternatives Comparison

image

https://github.com/Hemeka/Discord-Alternatives

⬆️ 668 points | 💬 169 comments


Change my mind: There is no good alternative to Discord (yet?)

There is no alternative which offers:

  • (group) voice chat, (group) text chat, live screen sharing, permission system
  • easy selfhosting with docker
  • open source
  • respects privacy (looking at you matrix)
  • decentralized at best
  • costs nothing

Checked:

  • XMPP based services
  • Matrix
  • Stoat
  • Mattermost

⬆️ 596 points | 💬 550 comments


ArrMatey: A modern, native open-source mobile client for your *arr stack (Android & iOS) - Now in Alpha!

image

Hey everyone!

I’ve been working on a new mobile client for the *arr stack called ArrMatey, and I’m excited to finally share the first alpha launch with the community.

ArrMatey is an all-in-one client that lets you manage your Sonarr, Radarr, and Lidarr instances from your pocket. I found myself wanting a mobile experience that felt truly native on both platforms, so I built this using Kotlin Multiplatform. It uses Jetpack Compose (Material 3 Expressive) for Android and SwiftUI (Liquid Glass) for iOS to ensure the UI feels like it belongs on your device.

Current Features:

  • Multi-Instance Support: Manage and switch between multiple instances of Sonarr, Radarr, and Lidarr seamlessly.
  • Calendar View: Switch between list and month views to see upcoming releases.
  • Interactive Search: Manual search for releases with filters for quality, language, and seeders.
  • Activity Queue: Monitor real-time download progress, ETAs, and cancel/blocklist items.
  • Advanced Networking: Support for custom HTTP headers (great for reverse proxies) and "Slow Instance" modes for high-latency remote setups.
  • Modern UI: Full Material 3 Expressive support on Android with dynamic theming, and Liquid Glass support on iOS 26.

This is an alpha, so I'm just getting started. On the roadmap, I have tablet support, home screen widgets, notifications, and support for more instances like Seer, Prowlarr, and Readarr/Chaptarr.

Licensed under MIT, you can check out the code, report bugs, or contribute here: https://github.com/owenlejeune/ArrMatey

Since we are in Alpha, you'll need to build from source or check the Releases page on GitHub for the latest APK. For iOS, you can build the iosApp target via Xcode.

I’d love to get some feedback on the UI/UX and any features you feel are missing from your current mobile setup, please feel free to open an issue with any requests!

⬆️ 580 points | 💬 92 comments


For those who want a Discord replacement

image

I often hear people say that Matrix isn't a great replacement for Discord because it lacks group video and voice chat... Then I saw this...

⬆️ 605 points | 💬 205 comments


Digest: r/selfhosted: Feb 06 - Feb 13, 2026

Published: 1 month ago | Author: System

Let's get a self-hosted Discord "replacement" thread going for 2026.

We've all seen the big news: Discord is introducing facial ID as a requirement to actually use the app starting next month. Which means one thing: people are about to dig through dozens of ancient "what's the best self-hosted Discord alternative?" threads on here and find antiquated opinions and advice.

What are we actually using? What are the clients that work well? What are options that pass the "wife test" of actually being something you could convince your not-techy friends and family to install on their phones?

Let's get into it. I know I'm already anticipating self-hosting something to replace Discord for communities/friend groups who'll naturally slough off when face ID comes along.

⬆️ 653 points | 💬 263 comments


I built a janky Cloudflare Bitwarden server for myself, forgot about it, and woke up to 400+ forks

A while back, I got fed up with password managers gatekeeping 2FA and passkeys behind paywalls.

Also, Bitwarden started forcing email 2FA, which created this annoying chicken-and-egg loop: if I ever lost my logged-in devices, I wouldn't be able to log in to Bitwarden because I'd need the email OTP... but my email password was inside Bitwarden. I just wanted to avoid that mess entirely.

I didn't want to pay for a VPS to host Vaultwarden, but honestly, the main reason was that I don't trust myself. Managing a Linux server means one bad command or missed backup and my passwords are gone forever. I wanted something maintenance-free where I couldn't accidentally nuke my own vault.

So, I hacked together a Bitwarden-compatible server that runs entirely on Cloudflare Workers + D1 for free. Deploy once, forget forever.

I called it warden-worker. It worked "good enough" for me, so I pushed it to GitHub, thought "maybe I'll post this later," and then immediately forgot about it.

Fast forward to this week. I was doing some repo cleanup and realized I had turned off my GitHub notifications. I checked the repo and... what??

  • 400+ forks
  • Issues threads in Chinese?
  • People writing guides on how to deploy it??
  • Someone explaining how to fix my bugs in the issues

The best part is that a user named qaz741wsd856 apparently took my abandoned skeleton and turned it into a full-blown project with KV support and the actual Vaultwarden frontend. Their fork is objectively better than mine in every way.

I'm still using my original "good enough" version because it’s stable and I’m lazy, but it's wild to see an entire community spin up around a project I thought was dead.

If you want the original (don't use this): https://github.com/deep-gaurav/warden-worker

If you want the one that actually works (use this): https://github.com/qaz741wsd856/warden-worker

Just wanted to share because I'm still processing how weird open source can be sometimes.

⬆️ 596 points | 💬 46 comments


I used to think dashboards were dumb.

image

Then I redesigned one to fit my needs. Now my wife has her own dashboard with only her apps and can see if they are offline. As a bonus, I can too!

⬆️ 436 points | 💬 104 comments


bye bye data

I returned home from work today, powered on the TV and loaded Jellyfin: "server not found".
The missus mentioned a power outage today, so I checked on the server - no disks in TrueNAS.
I swapped the HBA, as I keep a spare handy. Still no disks.
I removed a disk from the array and attached it to another PC. Dead as a dodo - same with all 8 HDDs in the array. I mourn the loss of my Linux ISOs.
Strangely, the SSDs survived.

⬆️ 429 points | 💬 140 comments


My stack

image

I designed this stack while using the handicapped stall in a Walmart bathroom. I'm quite proud of it! No AI was used, but still 100% vibing.

⬆️ 537 points | 💬 91 comments


How I spent my Sunday to save $100 and avoid having to walk across the room

It all started with my printer dropping off the network. My Brother laser printer, which only cost $75 in 2008 but has worked like a champ and survived four houses, three time zones, two kids, a university degree, and my entire career to date.

Lately, however, it's struggling. It won't hold a network connection for much longer than 15 minutes, and once it loses it, only a power cycle will bring it back online.

I've tried everything. Wifi, ethernet, dedicated VLAN, static IP, DHCP changes, RTSP on, RTSP off, scripts to ping the printer every 5 minutes.

A normal person would have bought a new printer. A sane person would just decide to turn the printer on when they need it.

I am apparently too stubborn to be a normal person

Why would I spend money on a new printer when I have time I can waste on the problem instead? And why would I resign myself to walking across the room when I can build something to do it for me instead?

So I built a "Legacy Hardware Integration Bridge":

  • A CUPS print server running in a Docker container on my Unraid machine is now the "printer" for all my computers. The server stays always on, so the computers never see a "Printer Offline" error
  • When a print job hits the CUPS queue, it triggers a state change to a sensor entity on my Home Assistant server using the Internet Printer Protocol integration
  • The state change on that sensor acts as a trigger to an automation, which causes a smart plug to switch on
  • That smart plug is now controlling the power to the printer, so when it switches on, the printer boots up, and gets a fresh connection to the network
  • Once the printer has been idle for 5 minutes, it triggers the smart plug to turn off, and everything is ready for the next print job.
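The automation half of the steps above could be sketched roughly like this in Home Assistant YAML (a hedged sketch only - the entity IDs, sensor states, and plug names are hypothetical placeholders, not the author's actual config):

```yaml
# Sketch of the two automations described above. Entity names are
# illustrative; the CUPS queue sensor comes from the IPP integration.
automation:
  - alias: "Printer on when a job hits the CUPS queue"
    trigger:
      - platform: state
        entity_id: sensor.cups_queue
        to: "printing"
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.printer_plug

  - alias: "Printer off after 5 minutes idle"
    trigger:
      - platform: state
        entity_id: sensor.cups_queue
        to: "idle"
        for: "00:05:00"
    action:
      - service: switch.turn_off
        target:
          entity_id: switch.printer_plug
```

The `for:` clause on the second trigger is what implements the 5-minute idle timeout: the automation only fires once the sensor has stayed in the idle state for that long.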

My wife thinks I could have just turned the printer on whenever I needed it and spent my Sunday doing something more productive.

I'm not a caveman though. I have technology.

⬆️ 577 points | 💬 70 comments


Everything is so... easy?

EDIT: Some of you think this post was written by an AI/LLM... Thank you, but it's not. I replied this to an earlier comment that got buried because of the downvotes, so I'm gonna paste this here for clarity as I'm pretty self-conscious about my writing and I'm not about to reply to every single negative comment.

I wrote every single letter of this post myself. You can check the edit history to see how many typos and poorly worded sentences I had in the initial post and how many times I've edited it since then. If you want to expose me for being an LLM, I can give you some consolation and tell you that I have LanguageTools extension installed, but apparently using commas and periods correctly makes me a robot now. Beep boop.

Now I realize that reddit doesn't allow for checking the edit history but fuck it. Next time I write a post (probably never), I'll go out of my way to make it unpolished I guess.

As for the "you must have a lot of time" folks, well, yeah, this post took me 20-25 minutes? It was like 2am, I had time, I busted out my laptop and just started writing. The initial post had a lot of embarrassing typos, so I edited it a few times.

If you're about to say "jesus christ this guy is crashing out over a reddit post", well, you're right, you won, congrats. I'm an LLM, being enthusiastic is bad, and I should write like an illiterate degenerate. I didn't think I would wake up to a bunch of "get a load of this guy" comments... serves me right for thinking I could express my honest feelings on reddit.

-- ORIGINAL POST BELOW --

So a few weeks ago, one of my close friends got into homelabbing and naturally started talking to me about it. I've always wanted to try similar things but never got around to it, so this time I just said what the hell and after some research, I ordered a NanoPi R6S. I found it to be a solid upper mid-range device that could satisfy my thirst for knowledge and help me learn the niche.

Now, I'm pretty good with tech, and I'm very enthusiastic about it, but I'm a total noob when it comes to networking. I know what LAN stands for, and I know how to set up a Cloudflare DNS on an ISP modem, but apart from that, I might as well be a boomer. I'm kinda nervous about setting up a new router, messing with its firmware, opening ports, configuring a firewall, and so on.

The NR6S arrives and I start researching firmware options. OpenWrt just calls my name because I used it once years ago and didn't really find anything wrong with it.

After some trial and error, I managed to flash OpenWrt onto the eMMC storage of the NR6S, thanks to this absolute chad.

Okay, I now have the NR6S powered by OpenWrt standing between my ISP modem and my Wi-Fi AP. I find a lot of people mentioning bridging the router on forums, so I start looking into what bridging is. OF COURSE, it makes sense, for years both my ISP modem and my Wi-Fi AP have been doing routing, but both are terribly underpowered for that task, so I can now have a dedicated ROUTER for that. I bridge the ISP modem, set my Wi-Fi unit up as a Dumb AP, and I already feel better about myself. But, I need some more ports... I find a Netgear GS308 locally for dirt cheap and for the first time in my life, I have a dedicated network switch. Pretty cool... I guess? WAIT, you're telling me that connecting 2 of my PCs to a single switch allows me to transfer Steam games over LAN? I don't have to wait twice as long for game downloads to play something with my brother? I can just send him game files at gigabit speeds instead of my ISP's shitty 100 Mbps? W switch, W Valve, W whoever's reddit comment I came upon about Steam's LAN feature.

Okay, now that I have stable internet, let's Google "self-hosted projects reddit." I find tons of threads, and I find some project names coming up in every single one of these threads. AdGuard Home sounds interesting, it can block ads, trackers, AND help me monitor who and what is using my bandwidth? Let's fucking go. How can I deploy it? Docker, huh? Well hello old friend, you've saved me countless hours deploying my clients' websites on VPSes, let's see how I can set you up on OpenWrt. Well, that took less than 20 minutes, nice.

I now have Docker, but do I want to ssh into my router every time I want to change a config, see the status of my containers, or restart them? There has to be a solution for that. Huh, there is, and it's called Dockge, cool. Wait, the Dockge developer also has this pretty cool project called Uptime Kuma, which will give me a fancy interface for monitoring the status of all my services. Both deployed in less than 10 minutes, just following the official instructions.
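
For anyone wanting to follow along, a minimal docker-compose sketch for those two containers might look like the following — the bind-mount paths are placeholders, and the ports shown are the projects' defaults (5001 for Dockge, 3001 for Uptime Kuma), so check the official docs rather than treating this as canonical:

```yaml
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - "5001:5001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Dockge manages other containers
      - ./dockge-data:/app/data
      - /opt/stacks:/opt/stacks                    # placeholder stacks directory
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks

  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - ./kuma-data:/app/data
```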

Okay, back to AdGuard Home, what can I do here? Holy shit I can just delegate AdGuard Home to be my DNS resolver and configure a bunch of options for it? Count me in. 20 minutes of brokering peace between AGH and OpenWrt over port 80, and now I have redundant DNS resolvers, resolving all of my domain needs using parallel requests to get me to websites ASAP. Oh, and I can see AGH blocking all the TikTok and Google trackers from my family's devices, so I already want to buy a coffee for the developers.

I'm fucking hooked. Let's Google some more interesting projects. Immich? I can take my data back from Google? The app looks just as fancy, and I don't care for some of the features it lacks. What could I use for storage? Maybe this spare Samsung T7 Shield I have lying around? Let's go. Exported all my data from Google Photos, mounted my T7 to the NR6S with a USB cable, made the mount permanent in OpenWrt, used Immich-Go to upload it all to Immich, and Bob's your uncle. So. fucking. cool.

Wait, now I have anxiety about losing years of my photos and videos if I fully migrate to Immich. How can I fix that? Immich recommends a 3-2-1 backup strategy, and they link this article from Backblaze. Hmm, I've heard that name before. Wait, these guys will give me a terabyte of storage for $6/month? Wtf do I pay Google for? But wait, how can I upload there? God bless rclone. Let's also sync to my Windows PC to fully complete the 3-2-1 strategy. Let's automate the syncs for both local and remote backups, so that all my data gets backed up every night while I'm sleeping. All that work in less than two hours.
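
The nightly local + remote sync described above could look something like this in a crontab — every path, remote name, and bucket here is a placeholder for illustration, not the poster's actual setup:

```shell
# Offsite copy: push the Immich library on the T7 to Backblaze B2 via rclone
0 3 * * * rclone sync /mnt/t7/immich b2:my-photos-backup/immich --log-file /var/log/rclone-b2.log

# Second local copy: mirror the same library to a share on the Windows PC
0 4 * * * rclone sync /mnt/t7/immich /mnt/windows-share/immich-mirror --log-file /var/log/rclone-local.log
```

Staggering the two jobs keeps them from competing for the same disk, and `rclone sync` makes the destination match the source (deletions included), which is what a mirror-style backup wants.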

By the way, I thought the Immich app was supposed to be inferior to Google Photos? Are you serious? I finally have reliable search by context, file name, file extension, etc. I can set up auto-moving and archiving with the CLI and so much more. Fuck Google Photos. Delete every single byte I have on there, uninstall it from all of my devices, cancel the subscription.

Okay, this post is getting terribly long, so I'll try to fast-forward.

I want to remotely turn on my stupid Samsung monitor without using a remote? Home Assistant.

I want to have a universal note-taking and link-saving solution? Linkwarden.

I want to expose my services to the internet so I can access them remotely? Cloudflared.

I want to stash my fucking porn? Stash.

There are solutions for literally everything. My post serves two purposes. The first is to give a push to all of you lurking in this subreddit, hesitant to pull the trigger because you think you need to be Gilfoyle reincarnated to have any success at this stuff. My modest home lab is no Anton, but boy does it make this shitty corpo-ridden internet a much more tolerable place. All I needed was a bit of Googling skill and a bit of patience reading through the official docs, forum threads, and reddit comments. I still have a LOT to learn about networking, but I already feel like this has been one of the most fulfilling hobbies I've had, and I'm already thinking about getting a NAS to host some stuff for my friends.

Second is to say a massive thank you to the absolute legends behind all the open-source services that we all use and love. I'm sure I will find a lot more in the coming months, and I will try my absolute best to buy all of them a coffee.

I'm not sure if anyone's even going to read all of this, I just felt so good and so passionate about my new hobby that I wanted to share it with everyone.

P.S: This subreddit desperately needs a "Discussion" flair.

⬆️ 532 points | 💬 114 comments


Discord enshittification begins. Self-hosted alternatives?

Alright, Discord wants my government ID now, that's fun and cool. So what self-hosted options are there with a similar feature set? Multiple voice channels, text channels, media sharing. Nextcloud comes to mind, but that's overkill. I know TeamSpeak is popular, but it's voice-only. Anything out there that people like?

⬆️ 522 points | 💬 158 comments


Digest: r/selfhosted: Feb 06 - Feb 06, 2026

Published: 1 month ago | Author: System

I built a “digital safe with multiple keys” after a few too many bike concussions

image

hey homelab folks,

this came from a slightly uncomfortable thought.

I’ve had a few concussions from biking accidents over the years. every time I recover fine, but every time I also think: what if next time I don’t? what if I can’t remember how to log into my own machines?

the obvious answer is “give my 1password to my partner”. but that turns one human into the single point of failure for my whole digital life. that felt… wrong.

so I built something I call ReMemory.

it’s basically a digital safe.

you put some files in it (password manager recovery codes, notes, whatever), and 5 friends each hold a key. any 3 of them together can open it. none of them can open it alone.

the part I’m weirdly proud of: they don’t install anything. they just open a file in a browser and it works. no server, no account, no setup, no “install this tool first”.

links if you’re curious:

I’m not trying to pitch this as a product or anything. I mostly want to know:

how are you handling this today in your lab?

safe? lawyer? printed notes? one trusted person?

⬆️ 1,863 points | 💬 320 comments