[{"id":"1tbrkcv","title":"Found some strange GET requests in my Traefik access logs. Anyone else saw this poor kid trying to escape from Belarus ?","link":"https://www.reddit.com/r/selfhosted/comments/1tbrkcv/found_some_strange_get_requests_in_my_traefik/","author":"Worldly_Topic","published_at":"2026-05-13T06:39:20+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/ff/ff51e1d8285414b1113db92e5fa1cd82bb8a7ac822c43f967610aa763ee52ce4.png\" alt=\"image\"></p>\n\n\n\n\n\n<p><small>⬆️ 380 points | 💬 56 comments</small></p>","metadata":{"score":380,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1tbjmet","title":"Because we are a self hosting family that's why.","link":"https://www.reddit.com/r/selfhosted/comments/1tbjmet/because_we_are_a_self_hosting_family_thats_why/","author":"BobButtwhiskers","published_at":"2026-05-13T00:21:24+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/08/08bb0f1d01a36f2ca97397b107939d71fd2b29738cb418fab57122a5df6b181e.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>Found this and want to share it. </p>\n</div>\n\n<p><small>⬆️ 1,191 points | 💬 46 comments</small></p>","metadata":{"score":4453,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1tajo1n","title":"Accidentally exposed publicly my entire LAN for 2 weeks","link":"https://www.reddit.com/r/selfhosted/comments/1tajo1n/accidentally_exposed_publicly_my_entire_lan_for_2/","author":"ldkv","published_at":"2026-05-11T23:08:38+00:00","content":"\n\n\n<div><p>Posting this as a PSA / confession because I almost had a heart attack last night and I figure if I got bit, someone else will too.</p>\n\n<p><strong>TL;DR:</strong> Replaced pangolin + NPMplus with a double-Caddy + WireGuard setup. Put a \"clever\" config on the <em>local</em> Caddy to minimize maintenance. Tested it once and called it a day. 
Two weeks later realized my entire LAN was reachable from the public internet via the wildcard tunnel.</p>\n\n<h2>The setup (or: how I outsmarted myself)</h2>\n\n<p>I used to run pangolin (VPS) + NPMplus (local proxy for split DNS) to selectively expose my services. The setup worked fine, but having to click through two different web UIs every time I added a new service was offending my inner lazy engineer. So a few weeks ago I decided to replace them both with a double Caddy setup linked by a WireGuard tunnel.</p>\n\n<p>The Caddyfile on the VPS side is a dumb catch-all that punts everything down the tunnel (first mistake):</p>\n\n<pre><code>*.mydomain.com {\n  route {\n        reverse_proxy http://10.0.0.2:9999\n    }\n}\n</code></pre>\n\n<p>And the local Caddyfile:</p>\n\n<pre><code># Listen on both the Tunnel (Port 9999) and LAN (Port 80/443)\nhttp://:9999, *.mydomain.com {\n    map {host} {vars.is_public} {\n        public1.mydomain.com true\n    public2.mydomain.com true\n        default false\n    }\n\n    @vps_unauthorized {\n        expression \"{local_port} == '9999' &amp;&amp; {vars.is_public} == 'false'\"\n    }\n    handle @vps_unauthorized {\n        abort\n    }\n\n    @public1 host public1.mydomain.com\n    handle @public1 {\n        reverse_proxy 192.168.1.100:8000\n    }\n\n  @local1 host local1.mydomain.com\n    handle @local1 {\n        reverse_proxy 192.168.1.101:8001\n    }\n}\n</code></pre>\n\n<p>The \"clever\" bit is the matcher in the middle (second mistake). The idea: \"if the request came in via the tunnel (port 9999) AND the host isn't on my public allowlist, kill it.\" This way I get split-horizon DNS while only having to maintain one single local Caddyfile.</p>\n\n<p>I did a SINGLE (third mistake) quick test from my phone on cellular: <code>public1.mydomain.com</code> loaded, <code>local1.mydomain.com</code> returned a connection error. 
I went to bed feeling like a genius.</p>\n\n<h2>The heart attack moment</h2>\n\n<p>Fast forward about two weeks. I was out and accidentally tapped <code>local1.mydomain.com</code> on my phone. It loaded instantly.</p>\n\n<p>I aged five years in about ten seconds. For the past two weeks, anyone who had bothered to enumerate subdomains on the VPS could have walked straight into my LAN, including services with zero authentication (you know which). So much for the elegant solution.</p>\n\n<p>The cleanup afterwards was time-consuming. I yanked the VPS tunnel, rotated every credential I could think of, scoured Caddy access logs (thank god I had them on) for anything suspicious, and spent a solid hour combing through logs of my unprotected services.</p>\n\n<p>In the end I think I got away with it because nobody bothered to brute force my VPS (which was also protected by crowdsec), but \"security through nobody-bothered\" is not a posture I want to be in.</p>\n\n<h2>Lessons learned</h2>\n\n<ul>\n<li><p>Explicit blocking on the VPS side is non-negotiable. The little maintenance overhead is worth it for the security benefits. Another benefit is that it minimizes useless traffic hitting my local server. I wasn't able to pinpoint what was wrong with my \"clever\" expression, so I just ended up scrapping it and adding the following line to the VPS Caddyfile: <code>@public header_regexp Host ^(public1|public2)\.mydomain\.com$</code>. Yes I'm still very lazy here with the concise regex, but this time I made sure to test it correctly 😅</p></li>\n<li><p>\"It worked when I tested it\" is not the same as \"it's doing what I think it's doing.\" Test both the happy path AND the path that's supposed to fail, from outside, more than once. One green light is not a security audit.</p></li>\n<li><p>Protect local services with authentication. Even a simple HTTP auth layer would have saved me a lot of stress here, and it's not like I don't have the tools to set it up. 
I was just too lazy and thought nothing could ever happen.</p>\n</li>\n</ul>\n\n<p>This incident has been a wake-up call about my complacency and lack of rigor when it comes to security. I'm posting the story and the broken config above as a cautionary tale. Don't be me 💀.</p>\n</div>\n\n<p><small>⬆️ 341 points | 💬 76 comments</small></p>","metadata":{"score":498,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1taacxr","title":"Puter 26.05: Open-source, self-hosted, Internet OS! 2 years, 370 contributors, 400K downloads, and 40K stars later, we're out of beta!","link":"https://www.reddit.com/r/selfhosted/comments/1taacxr/puter_2605_opensource_selfhosted_internet_os_2/","author":"mitousa","published_at":"2026-05-11T17:34:54+00:00","content":"\n\n<p><a href=\"https://github.com/heyPuter/puter/\" rel=\"noopener noreferrer\">https://github.com/heyPuter/puter/</a></p>\n\n\n\n\n\n<p><small>⬆️ 242 points | 💬 61 comments</small></p>","metadata":{"score":466,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t9mdul","title":"AirPipe v4: my self-hosted file transfer is now true peer-to-peer","link":"https://www.reddit.com/r/selfhosted/comments/1t9mdul/airpipe_v4_my_selfhosted_file_transfer_is_now/","author":"Frag_O_Fobia","published_at":"2026-05-10T23:30:42+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/23/2346508d869c86996011edb3d6097bd70607f86a6ad24b54c00df432587418a8.gif\" alt=\"image\"></p>\n\n\n\n<div><p>I posted about <a href=\"https://airpipe.sanyamgarg.com\" rel=\"noopener noreferrer\">AirPipe</a> here a few months back. Been working on it pretty much non-stop since. v4 just shipped.</p>\n\n<p>Heads up, video editing isn't my strong suit, sorry for the artifacts and quality. Hope it conveys what the thing does.</p>\n\n<p>The big change: files go peer-to-peer over WebRTC. Sender picks how the relay helps. 
Either as a signaling relay (your bytes flow directly between the two devices), or as an encrypted 10-minute mailbox (relay holds the ciphertext if the receiver isn't online yet).</p>\n\n<p>Either way, the relay only sees ciphertext.</p>\n\n<p>Sender picks the mode. Receiver types the passphrase anywhere. Homepage, CLI with <code>airpipe download &lt;PHRASE&gt;</code>, or scan the QR. One code, three ways in.</p>\n\n<p><strong>Try it:</strong> open <a href=\"https://airpipe.sanyamgarg.com\" rel=\"noopener noreferrer\">airpipe.sanyamgarg.com</a> in two browsers and share a passphrase between them.</p>\n\n<p><strong>Self-host the relay</strong> in one container, or use mine:</p>\n\n<pre><code>docker run -p 8080:8080 ghcr.io/sanyam-g/airpipe-relay\n</code></pre>\n\n<p><strong>CLI for headless boxes:</strong></p>\n\n<pre><code>curl -sSL https://airpipe.sanyamgarg.com/install.sh | sh\nairpipe send report.pdf\n</code></pre>\n\n<p>Source: <a href=\"https://github.com/Sanyam-G/Airpipe\" rel=\"noopener noreferrer\">github.com/Sanyam-G/Airpipe</a> (MIT)</p>\n</div>\n\n<p><small>⬆️ 227 points | 💬 32 comments</small></p>","metadata":{"score":263,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t95c4m","title":"Girls come and go, Docker Servers stay","link":"https://www.reddit.com/r/selfhosted/comments/1t95c4m/girls_come_and_go_docker_servers_stay/","author":"Matletic","published_at":"2026-05-10T12:14:33+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/11/11a4805d4f87cb0aca9a5950d2ed23afe3d27a96548e2af5b5f4c08f0be20080.png\" alt=\"image\"></p>\n\n\n\n\n\n<p><small>⬆️ 715 points | 💬 58 comments</small></p>","metadata":{"score":1536,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t92807","title":"Docker bypasses UFW and exposed my database. Again. 
Writing this down so I stop forgetting","link":"https://www.reddit.com/r/selfhosted/comments/1t92807/docker_bypasses_ufw_and_exposed_my_database_again/","author":"Substantial_Word4652","published_at":"2026-05-10T09:28:19+00:00","content":"\n\n\n<div><p>Docker bypasses UFW and exposed my database. Again. Writing this down so I stop forgetting.</p>\n\n<p>Self-hosters, this one is for you.</p>\n\n<p>I finish setting up a new app on my VPS, everything looks good, then I run a security check and boom. Same mistake again. Docker silently bypassing my firewall and exposing my database to the internet.</p>\n\n<p>This has happened to me more than once. I keep forgetting it, so I'm writing it here as a reminder for myself and hopefully useful for someone else running their own server.</p>\n\n<p>When you're using docker compose in production on a VPS, remember:</p>\n\n<p>Don't expose database ports unless you absolutely need to. And if you do, don't do this:</p>\n\n<pre><code>ports:\n  - \"5432:5432\"\n</code></pre>\n\n<p>Do this instead:</p>\n\n<pre><code>ports:\n  - \"127.0.0.1:5432:5432\"\n</code></pre>\n\n<p><strong>Why does this matter?</strong></p>\n\n<p>Docker manages network rules at a very low level on Linux. When you publish a port, it sets up routing rules directly in the system networking stack. So if you don't explicitly bind it to localhost, you're effectively exposing that service on the machine's public network interface.</p>\n\n<p>And if you're thinking \"it's fine, I have UFW enabled\", not necessarily. UFW is just a frontend for Linux firewall rules, and Docker bypasses it by manipulating those rules directly. 
Your database might still be exposed even with the firewall on.</p>\n\n<p>Has anyone else been caught by this?</p>\n</div>\n\n<p><small>⬆️ 321 points | 💬 194 comments</small></p>","metadata":{"score":610,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t8iohb","title":"You guys are begging people to start lying on AI disclosures","link":"https://www.reddit.com/r/selfhosted/comments/1t8iohb/you_guys_are_begging_people_to_start_lying_on_ai/","author":"EmergencyRadiant8038","published_at":"2026-05-09T20:01:45+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/68/68bc30f8247bb512dbd3cc988133eb5a2f56d4e157efbe54f523f20b6d2f3fd9.png\" alt=\"image\"></p>\n\n\n\n<div><p>I understand and am against using AI without any idea of what is going on, but when the community pulls off things like this, the next time this person posts -- or when someone about to post sees this -- do you think they will be honest? No, and I won't blame them if I start to see false claims. </p>\n</div>\n\n<p><small>⬆️ 614 points | 💬 222 comments</small></p>","metadata":{"score":2329,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t7qfjf","title":"After years of using Heimdall ... 
I've finally moved to Dashy","link":"https://www.reddit.com/r/selfhosted/comments/1t7qfjf/after_years_of_using_heimdall_ive_finally_moved/","author":"swake88","published_at":"2026-05-09T01:06:40+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/5b/5bd1c49088bf1263f20c7da33b14cdd668b013e35ad90a7f9221452d0081344d.png\" alt=\"image\"></p>\n\n\n\n\n\n<p><small>⬆️ 259 points | 💬 46 comments</small></p>","metadata":{"score":397,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t6iuyd","title":"Which services are you exposing to the internet, and how are you securing them?","link":"https://www.reddit.com/r/selfhosted/comments/1t6iuyd/which_services_are_you_exposing_to_the_internet/","author":"sysadmin_light","published_at":"2026-05-07T18:20:56+00:00","content":"\n\n\n<div><p>I keep thinking about things like SSO and it's got me curious, how are all of you locking down your public-facing services?</p>\n\n<p>Currently, I've got only a select few - primarily Seerr, Immich, Mealie, and FoundryVTT - publicly exposed via SWAG (with geo-ip blocks) so that friends and family can access them without needing extra apps like Tailscale on their devices.</p>\n\n<p>I know all of the services I make available have their own login prompts, but knowing how some projects can be, I figure things could always be more secure, so I'm curious to hear how everyone else does it.</p>\n</div>\n\n<p><small>⬆️ 230 points | 💬 166 comments</small></p>","metadata":{"score":246,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t63vxr","title":"How do you monitor your self-hosted servers?","link":"https://www.reddit.com/r/selfhosted/comments/1t63vxr/how_do_you_monitor_your_selfhosted_servers/","author":"vdorru","published_at":"2026-05-07T07:58:36+00:00","content":"\n\n\n<div><p>I’m curious how people here handle server monitoring.</p>\n\n<p>Right now I’m thinking about things like:</p>\n\n<ul>\n<li>Authentication activity</li>\n<li>Process 
execution history</li>\n<li>Network activity</li>\n</ul>\n\n<p>But I’m not sure what the “normal” setup looks like for self-hosting.</p>\n\n<p>How are you doing it?</p>\n\n<ul>\n<li>Do you just run ad-hoc Linux commands when something breaks?</li>\n<li>Do you use simple dashboards/start pages that show basic stuff like CPU, disk, RAM?</li>\n<li>Or do you have a full monitoring stack (Grafana, Prometheus, Elastic, etc.)?</li>\n</ul>\n\n<p>Also, what do you actually keep an eye on day to day?</p>\n\n<ul>\n<li>Security events (login attempts, auth logs, etc.)</li>\n<li>System health (CPU, memory, disk usage)</li>\n<li>Network activity / traffic patterns</li>\n<li>Something else?</li>\n</ul>\n\n<p>How many servers are you actually monitoring?</p>\n\n<p>I assume the setup changes a lot depending on scale. One home server is probably very different from managing 10–20 machines (if anyone even has that many for self-hosting).</p>\n\n<p>Would be interesting to hear how your approach changes with the number of servers.</p>\n\n<p>If you’re using dashboards, feel free to share what yours looks like or describe it!</p>\n</div>\n\n<p><small>⬆️ 217 points | 💬 204 comments</small></p>","metadata":{"score":234,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t5mk41","title":"My Homepage Dashboard on my RPi5","link":"https://www.reddit.com/r/selfhosted/comments/1t5mk41/my_homepage_dashboard_on_my_rpi5/","author":"Antonioxsuarez","published_at":"2026-05-06T19:06:48+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/4f/4fd78ec5f46c83d2c4358af311cec5b1f47b217de585002848474498f8c59dd8.png\" alt=\"image\"></p>\n\n\n\n<div><p>I'm on LibreWolf (Firefox fork) and I can't save the whole page as a screenshot. Hence why I had to stitch it.<br>\nEdit: I only run Homepage, Gotify, Pi-hole and Nginx Proxy Manager on my RPi5. 
Everything else is on my main server.</p>\n</div>\n\n<p><small>⬆️ 235 points | 💬 51 comments</small></p>","metadata":{"score":267,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t5do5v","title":"how can I self host to avoid having Google blow my life up randomly?","link":"https://www.reddit.com/r/selfhosted/comments/1t5do5v/how_can_i_self_host_to_avoid_having_google_blow/","author":"fartedcum","published_at":"2026-05-06T13:55:41+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/42/4213f210dc967eac4b2b5b138821a43686d8de4cabbd01900119504f19061ed2.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>no, I'm not worried about having any offensive child related content on my device, but I have seen people be perma banned with no option to appeal for less offensive things. I'm worried about losing access to my accs with how much MFA I have set up and worse, 10s of thousands of pictures of my life and family. is there a solid way to self host these things that is reliable?</p>\n</div>\n\n<p><small>⬆️ 905 points | 💬 277 comments</small></p>","metadata":{"score":928,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t5cxe5","title":"A homepage dashboard I'm finally happy with.","link":"https://www.reddit.com/r/selfhosted/comments/1t5cxe5/a_homepage_dashboard_im_finally_happy_with/","author":"mwojo","published_at":"2026-05-06T13:27:27+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/9c/9ce6972044519793aceed8aafe0ce62eeddf905fd19f710953732ba519c7783d.png\" alt=\"image\"></p>\n\n\n\n\n\n<p><small>⬆️ 436 points | 💬 51 comments</small></p>","metadata":{"score":477,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t3l3zu","title":"PSA for anyone not using LXCs on Proxmox","link":"https://www.reddit.com/r/selfhosted/comments/1t3l3zu/psa_for_anyone_not_using_lxcs_on_proxmox/","author":"HoeCage","published_at":"2026-05-04T15:31:12+00:00","content":"\n\n\n<div><p>The Point: Holy shit LXCs are 
so cool and felt like black magic getting \"free\" RAM back. If you're newer, like me, and have just been using VMs instead of LXCs, you should look at changing that.</p>\n\n<p>I started my server back in November knowing absolutely nothing about using Linux, using CLI, or Docker. At the same time, I also went in raw, jumping straight into Proxmox on three nodes. As a result, I ended up using a lot of the Proxmox VE Helper Scripts for initial setup and have since gone back and learned how to do a lot of things myself. One of the hugely inefficient decisions I made at the time was to use a VM for Docker instead of an LXC.</p>\n\n<p>For context, two of my nodes are running an i3-5005U and 8gb of soldered DDR3 RAM. One of those machines was exclusively running a VM to run Docker containers largely centered around downloads. On average, I was hitting ~30-50% CPU on the PVE host and ~7GB RAM usage.</p>\n\n<p>Switching to an LXC has brought that down to 10-25% CPU and ~2-2.5GB RAM usage. A machine that felt like it was at its limit suddenly gained immense amounts of headroom.</p>\n\n<p>Just wanted to put this out there for anyone procrastinating switching some VMs to LXCs. In my case, it was worth the relatively low amount of effort to free up such a significant amount of resources.</p>\n</div>\n\n<p><small>⬆️ 251 points | 💬 83 comments</small></p>","metadata":{"score":314,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t3ba9r","title":"n8n + Paperless-ngx + Paperless-GPT for adding RAG to your documents!","link":"https://www.reddit.com/r/selfhosted/comments/1t3ba9r/n8n_paperlessngx_paperlessgpt_for_adding_rag_to/","author":"hackslashX","published_at":"2026-05-04T08:06:35+00:00","content":"\n\n\n<div><p>Paperless-ngx is undoubtedly one of the most important and useful containers in my self-hosted stack. I have a modest collection of documents, ranging from receipts, to pay-stubs, certificates, notices, IDs, etc. 
While it's great for cataloging documents, I feel like for scanned documents (especially) the in-built Tesseract based OCR is quite poor (I've worked with Tesseract professionally and it's really hard to get solid OCR performance on documents that have out of the ordinary template or styling). Secondly, there's no ability to semantically search for information within document, for example, \"What was my electricity bill for a particular month\" or \"How much income tax I paid last year\", and so on.</p>\n\n<p>I wanted to keep my implementation as simple and straightforward as possible. There are 5 tools that I used to achieve this.</p>\n\n<ol>\n<li>Paperless-ngx  <a href=\"https://github.com/paperless-ngx/paperless-ngx\" rel=\"noopener noreferrer\">https://github.com/paperless-ngx/paperless-ngx</a>:  We can't do anything without it :p Apart from documents cataloging, it also has a well documented API that allows interfacing with external tools quite easily.</li>\n<li>Paperless-gpt <a href=\"https://github.com/icereed/paperless-gpt\" rel=\"noopener noreferrer\">https://github.com/icereed/paperless-gpt</a>: For automatic metadata generation, and LLM-based OCR (supports self-hosted LLM models too, and third-party document OCR services like Azure and Google).</li>\n<li>n8n <a href=\"https://github.com/n8n-io/n8n\" rel=\"noopener noreferrer\">https://github.com/n8n-io/n8n</a>: Building a workflow that generates embedding for each document. It also has an MCP trigger that can expose a tool to perform a RAG search over the vector database.</li>\n<li>Milvus <a href=\"https://github.com/milvus-io/milvus\" rel=\"noopener noreferrer\">https://github.com/milvus-io/milvus</a>: My choice of vector database. Deployed as a single-replica cluster on K8s using the operator.</li>\n<li>Lobehub <a href=\"https://github.com/lobehub/lobehub\" rel=\"noopener noreferrer\">https://github.com/lobehub/lobehub</a>: Self-hosted chat interface that allows adding MCP. 
Supports a wide variety of third-party and local LLM providers.</li>\n</ol>\n\n<p><strong>Paperless-GPT</strong></p>\n\n<p>After uploading a document to Paperless, I basically set two tags on the document, <em>paperless-gpt-ocr-auto</em> to perform LLM assisted OCR on the document and replace the content with AI generated text. This is not exact 1-1 OCR but it's very readable and LLM also attempts to fix OCR mistakes. The second tag is <em>paperless-gpt</em> which is used for automatic population of tags, title, correspondent and created-at fields for each document. The important part is \"content\" since that's what the RAG ingestion workflow uses to generate embedding.</p>\n\n<p><strong>The n8n RAG ingestion workflow</strong></p>\n\n<p><a href=\"https://preview.redd.it/xl4utsiqs2zg1.png?width=1640&amp;format=png&amp;auto=webp&amp;s=79cff39c069cde564818ba5be2a75bb70f75defc\" rel=\"noopener noreferrer\">https://preview.redd.it/xl4utsiqs2zg1.png?width=1640&amp;format=png&amp;auto=webp&amp;s=79cff39c069cde564818ba5be2a75bb70f75defc</a></p>\n\n<p>The workflow itself is pretty basic. I use Chat Message trigger to send a document ID to the workflow. This can be replaced with a webhook call and you can configure Paperless to automatically call this URL, although I haven't configured that yet. 
It also can be replaced with a scheduled job that retrieves new documents added to Paperless and ingest them automatically.</p>\n\n<p>With the document ID, I basically hit a couple of endpoints like below to get all required information.</p>\n\n<pre><code>GET api/documents/&lt;document_id&gt;/\nGET api/correspondents/&lt;correspondent_id&gt;/\nGET api/document_types/&lt;document_type_id&gt;/\nGET api/tags/&lt;tag_id&gt;/ (loop over multiple tags)\n</code></pre>\n\n<p>Now that I have all of the required information, I simply use an Embedding provider (in my case I'm using Azure since I have an Enterprise account with data sharing for model training disabled) that generates embedding for the document. The document is chunked by the splitter at every 2000 characters with 200 characters overlap. This is then pushed to Milvus collection.</p>\n\n<p><strong>Milvus Collection Schema</strong></p>\n\n<p>I created the collection manually since n8n sets varchar size for some fields quite low. You can use pymilvus or Attu to create this:</p>\n\n<table><thead>\n<tr>\n<th>Field Name</th>\n<th>Type</th>\n<th>Key</th>\n<th>Description</th>\n</tr>\n</thead><tbody>\n<tr>\n<td>langchain_primaryid</td>\n<td>Int64</td>\n<td>PK</td>\n<td>Primary identifier</td>\n</tr>\n<tr>\n<td>langchain_vector</td>\n<td>FloatVector (dim=3072)</td>\n<td>—</td>\n<td>Embedding vector</td>\n</tr>\n<tr>\n<td>langchain_text</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Main text content</td>\n</tr>\n<tr>\n<td>source</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Source of the document</td>\n</tr>\n<tr>\n<td>blobType</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Blob type or format</td>\n</tr>\n<tr>\n<td>loc</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Location or path</td>\n</tr>\n<tr>\n<td>document_id</td>\n<td>Float</td>\n<td>—</td>\n<td>Document identifier</td>\n</tr>\n<tr>\n<td>title</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Document 
title</td>\n</tr>\n<tr>\n<td>correspondent</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Associated correspondent</td>\n</tr>\n<tr>\n<td>document_type</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Type/category of document</td>\n</tr>\n<tr>\n<td>tags</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Tags or keywords</td>\n</tr>\n<tr>\n<td>created</td>\n<td>VarChar (65535)</td>\n<td>—</td>\n<td>Creation timestamp</td>\n</tr>\n<tr>\n<td>document_link</td>\n<td>VarChar (1024)</td>\n<td>—</td>\n<td>Link to the document</td>\n</tr>\n</tbody></table>\n\n<p>I also created separate users with read and write permissions and configured them in n8n accordingly.</p>\n\n<p><strong>The MCP workflow</strong></p>\n\n<p>This is pretty trivial. It's just an MCP Server Trigger with a Retrieve Documents tool. Make sure to update the title and description of the tool in n8n so that it populates properly in MCP tools discovery. I haven't added a re-ranker node here since n8n only supports Cohere for now :(</p>\n\n<p><a href=\"https://preview.redd.it/wjnh9fsdu2zg1.png?width=748&amp;format=png&amp;auto=webp&amp;s=5833d680c8052d93363be38d6fa4f88fd09176a8\" rel=\"noopener noreferrer\">https://preview.redd.it/wjnh9fsdu2zg1.png?width=748&amp;format=png&amp;auto=webp&amp;s=5833d680c8052d93363be38d6fa4f88fd09176a8</a></p>\n\n<p>Also, attach a Bearer Auth token with the MCP trigger to protect the endpoint. Publish the workflow and copy the Production MCP URL from the node settings.</p>\n\n<p><strong>Lobechat Integration</strong></p>\n\n<p>In Lobechat, go to Skills Management and register a new MCP skill. 
It's pretty straightforward too!</p>\n\n<p><a href=\"https://preview.redd.it/ntpry7n4v2zg1.png?width=1882&amp;format=png&amp;auto=webp&amp;s=64dcd68ded5378ff2dc01125b0b993587ae2a18a\" rel=\"noopener noreferrer\">https://preview.redd.it/ntpry7n4v2zg1.png?width=1882&amp;format=png&amp;auto=webp&amp;s=64dcd68ded5378ff2dc01125b0b993587ae2a18a</a></p>\n\n<p>I also created a new Agent in Lobechat to let it know which tool to call (even if not explicitly requested) and the output format.</p>\n\n<pre><code>You are an AI assistant that answers user queries using the DocumentsRAG knowledge base.\nCore Behavior\nAlways retrieve relevant information using the DocumentsRAG skill before answering.\nDo this even if the user does not explicitly request document lookup.\nBase your responses strictly on retrieved documents whenever possible.\nIf no relevant documents are found, clearly state that and provide the best possible general answer.\nResponse Format\nStructure every response in the following format:\n1. Answer Summary\nProvide a clear, concise answer to the user’s question.\n2. Supporting Details\nExpand on the answer using information from retrieved documents.\nUse bullet points or short paragraphs for readability\nHighlight key facts, definitions, or steps\n3. 
Sources / References\nList all relevant documents used:\nInclude document title\nProvide direct links (if available)\nOptionally include a short snippet or context\nExample:\nDocument Title 1 – &lt;link&gt;\nDocument Title 2 – &lt;link&gt;\nAdditional Guidelines\nPrefer accuracy over completeness when documents are limited\nDo not fabricate sources or links\nIf multiple documents conflict, mention the discrepancy\nKeep responses structured and easy to scan\nAvoid unnecessary verbosity\n</code></pre>\n\n<p><a href=\"https://preview.redd.it/5ygkyjbcv2zg1.png?width=950&amp;format=png&amp;auto=webp&amp;s=1f91589f52b712f8e83ada28789d0adb6f0dec5c\" rel=\"noopener noreferrer\">https://preview.redd.it/5ygkyjbcv2zg1.png?width=950&amp;format=png&amp;auto=webp&amp;s=1f91589f52b712f8e83ada28789d0adb6f0dec5c</a></p>\n\n<p><strong>Results</strong></p>\n\n<p>I'm pretty impressed by it. Since it has allowed me to naturally query my documents, ask questions, and get information without searching and reading the document.</p>\n\n<p><a href=\"https://preview.redd.it/mahwfp9nv2zg1.png?width=991&amp;format=png&amp;auto=webp&amp;s=eb6b866c392962ab6e89c33d2819423c8a8416af\" rel=\"noopener noreferrer\">https://preview.redd.it/mahwfp9nv2zg1.png?width=991&amp;format=png&amp;auto=webp&amp;s=eb6b866c392962ab6e89c33d2819423c8a8416af</a></p>\n\n<p>Anyways, I just wanted to shared my self-hosted workflow for RAG. 
But I'm very much interested in what everyone else uses!</p>\n</div>\n\n<p><small>⬆️ 251 points | 💬 31 comments</small></p>","metadata":{"score":258,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t35sph","title":"She may come to regret asking.","link":"https://www.reddit.com/r/selfhosted/comments/1t35sph/she_may_come_to_regret_asking/","author":"RCAMuse","published_at":"2026-05-04T03:06:40+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/79/79880dd17191763e9e69044d72c9b88177bb7c0cdcc2e180be41c9fd1d91b509.png\" alt=\"image\"></p>\n\n\n\n<div><p>Buckle up sis, that's just the top 10% of the iceberg.</p>\n</div>\n\n<p><small>⬆️ 997 points | 💬 65 comments</small></p>","metadata":{"score":4281,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t336f3","title":"A homepage dashboard I'm finally happy with.","link":"https://www.reddit.com/r/selfhosted/comments/1t336f3/a_homepage_dashboard_im_finally_happy_with/","author":"mwojo","published_at":"2026-05-04T01:06:58+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/2e/2e1a52b678394224281d0e59044fbae204aef8a07e8dd378bc5c612fdd0f3233.png\" alt=\"image\"></p>\n\n\n\n\n\n<p><small>⬆️ 399 points | 💬 42 comments</small></p>","metadata":{"score":399,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t2qd26","title":"Vaultwarden 1.36.0 patches vulnerabilities","link":"https://www.reddit.com/r/selfhosted/comments/1t2qd26/vaultwarden_1360_patches_vulnerabilities/","author":"0x3e4","published_at":"2026-05-03T16:38:12+00:00","content":"\n\n<p><a href=\"https://github.com/dani-garcia/vaultwarden/releases/tag/1.36.0\" rel=\"noopener noreferrer\">https://github.com/dani-garcia/vaultwarden/releases/tag/1.36.0</a></p>\n\n\n\n<div><p>Security fixes     </p>\n\n<p>This release contains security fixes for the following advisories. We strongly advice to update as soon as possible.    
</p>\n\n<p>SSO Login CSRF - <a href=\"https://github.com/dani-garcia/vaultwarden/security/advisories/GHSA-pfp2-jhgq-6hg5\" rel=\"noopener noreferrer\">GHSA-pfp2-jhgq-6hg5,</a> <a href=\"https://github.com/dani-garcia/vaultwarden/security/advisories/GHSA-w6h6-8r66-hcv7\" rel=\"noopener noreferrer\">GHSA-w6h6-8r66-hcv7</a><br>\nUser/Organization Enumeration - <a href=\"https://github.com/dani-garcia/vaultwarden/security/advisories/GHSA-hxqh-ff5p-wfr3\" rel=\"noopener noreferrer\">GHSA-hxqh-ff5p-wfr3</a><br>\nSSO existing-user binding - <a href=\"https://github.com/dani-garcia/vaultwarden/security/advisories/GHSA-j4j8-gpvj-7fqr\" rel=\"noopener noreferrer\">GHSA-j4j8-gpvj-7fqr</a><br>\n<a href=\"https://github.com/dani-garcia/vaultwarden/security/advisories/GHSA-6x5c-84vm-5j56\" rel=\"noopener noreferrer\">GHSA-6x5c-84vm-5j56</a><br>\nSSRF via Icon Endpoint - <a href=\"https://github.com/dani-garcia/vaultwarden/security/advisories/GHSA-72vh-x5jq-m82g\" rel=\"noopener noreferrer\">GHSA-72vh-x5jq-m82g</a><br>\nSome crate's updated and other minor security enhancements     </p>\n\n<p>These are private for now, pending CVE assignment.  </p>\n\n<p><a href=\"https://github.com/dani-garcia/vaultwarden/releases/tag/1.36.0\" rel=\"noopener noreferrer\">https://github.com/dani-garcia/vaultwarden/releases/tag/1.36.0</a></p>\n</div>\n\n<p><small>⬆️ 254 points | 💬 14 comments</small></p>","metadata":{"score":367,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t2ff9c","title":"Whats the point in a VPS?","link":"https://www.reddit.com/r/selfhosted/comments/1t2ff9c/whats_the_point_in_a_vps/","author":"Unusual_Economics653","published_at":"2026-05-03T08:08:59+00:00","content":"\n\n\n<div><p>So i originally came into self hosting to get away from subscriptions, to stop renting everything/ start owning things, to not rely on external sources, etc... 
but now that I'm on Reddit and looking to be a bit more active, I'm seeing A LOT of posts about VPSs.</p>\n\n<p>I had no idea what they were, so I decided to look it up, and to my surprise, it's renting your own self-hosted work(?) It still feels wrong and I might've been led astray by the internet, but isn't that exactly what self-hosting is meant to avoid?</p>\n\n<p>VPS - small subscription, relying on external sources, and not owning your own setup?</p>\n\n<p>Owned machine - one-time payment, easily upgradeable, not relying on external sources, etc.</p>\n\n<p>I get that it might be more affordable in the short term, but that's the trade-off with everything when owning vs subscribing...</p>\n\n<p>So I ask: is there any genuine technical reason to use a VPS over your own machine? Some reason people can't use their own machine while being able to use a VPS? Not a convenience issue that would just be annoying to work around (self-hosting isn't typically convenient to set up when starting out), but an actual reason that physically could not work in their own home, or possibly a reason to keep it out of their home? Please let me know, and sorry if this was too much of a ramble.</p>\n\n<p>Edit: To all those looking for the answers in the future, this is the large majority of it. I encourage everyone to dig through the comments because there is a lot of useful info. 
in no order:</p>\n\n<ul>\n<li>Convenience - easier setup, 'just works'</li>\n<li>Offsite setup - many mini reasons; better / stable connection globally, safety from disasters, safety...</li>\n<li>CGNAT - deserves its own point because of the quantity of comments; a consistent way to get around it without getting a public IP from your ISP</li>\n</ul>\n\n<p>Thank you everyone for clarifying!!!</p>\n</div>\n\n<p><small>⬆️ 223 points | 💬 224 comments</small></p>","metadata":{"score":225,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t10cj8","title":"3-2-1 rule , how are you all doing it without breaking bank?","link":"https://www.reddit.com/r/selfhosted/comments/1t10cj8/321_rule_how_are_you_all_doing_it_without/","author":"Tasty-Picture-8331","published_at":"2026-05-01T17:40:26+00:00","content":"\n\n\n<div><p>So my NAS is slowly getting big now, with around 8 TB of data.</p>\n\n<p>I run it on RAID 1, but with the worst-case scenario in mind, I wanted to also have an off-site backup. But obviously 8 TB+ in the cloud is going to be expensive, no?</p>\n\n<p>How are you guys storing your offline backup? And where do you store it?</p>\n</div>\n\n<p><small>⬆️ 264 points | 💬 196 comments</small></p>","metadata":{"score":289,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t0v2so","title":"Appreciation post: Tailscale and Headscale","link":"https://www.reddit.com/r/selfhosted/comments/1t0v2so/appreciation_post_tailscale_and_headscale/","author":"Curious_Olive_5266","published_at":"2026-05-01T14:30:51+00:00","content":"\n\n\n<div><p>These two are the most incredible technologies on the modern Internet. The Web is finally free and open again, just as Tim Berners-Lee intended it so many decades ago at CERN. People are finally taking the Web back from corporations, and it is amazing to see. Tailscale is going to be the biggest tech company in the world by the next decade, and the GTA will overtake the Bay Area as the world's tech capital. 
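A quick sanity check on the 3-2-1 cost question above: at 8 TB, the offsite leg is mostly a per-TB-per-month multiplication. A minimal sketch — the rates below are hypothetical placeholders for illustration, not quotes from any provider:

```python
# Back-of-envelope monthly cost for an 8 TB offsite copy.
# Rates are assumed placeholder values; real pricing varies by
# provider, storage tier, and egress policy.
data_tb = 8
rates_per_tb_month = {
    "archive/cold tier": 1.0,        # assumed $/TB/month
    "standard object storage": 5.0,  # assumed
    "backup-oriented service": 6.0,  # assumed
}
for tier, rate in rates_per_tb_month.items():
    print(f"{tier}: ~${data_tb * rate:.2f}/month")
```

At these assumed rates the spread is roughly $8 to $48 a month, which is why many people instead keep the third copy on a rotated external drive stored at a friend's or family member's place.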
</p>\n</div>\n\n<p><small>⬆️ 232 points | 💬 75 comments</small></p>","metadata":{"score":232,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t0uoxm","title":"Patch your servers, peeps, new Linux kernel vulnerability just dropped","link":"https://www.reddit.com/r/selfhosted/comments/1t0uoxm/patch_your_servers_peeps_new_linux_kernel/","author":"bz386","published_at":"2026-05-01T14:16:23+00:00","content":"\n\n\n<div><p>CopyFail just dropped, it's a new Linux kernel vulnerability that gives attackers root privileges. <a href=\"https://arstechnica.com/security/2026/04/as-the-most-severe-linux-threat-in-years-surfaces-the-world-scrambles/\" rel=\"noopener noreferrer\">https://arstechnica.com/security/2026/04/as-the-most-severe-linux-threat-in-years-surfaces-the-world-scrambles/</a></p>\n\n<p>Debian has an updated kernel, Proxmox too. Looks like Raspberry Pi hasn't released an updated version yet.</p>\n</div>\n\n<p><small>⬆️ 366 points | 💬 162 comments</small></p>","metadata":{"score":919,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t0nq8t","title":"Living in Turkmenistan: 75% of IPs blocked, 6Mbps max speed. Need Linux &amp; VPN advice for 3D Freelancing.","link":"https://www.reddit.com/r/selfhosted/comments/1t0nq8t/living_in_turkmenistan_75_of_ips_blocked_6mbps/","author":"Beautiful-Bread818","published_at":"2026-05-01T08:45:26+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/47/47f27b19e2db8cbc34e81b3541f9a3b89c1140ec6548f872e8f339123f6a45b8.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>Hi everyone,\nI’m a 3D artist living in Turkmenistan, and I’m facing a digital \"survival challenge.\" In my country, the internet is heavily censored: about 75% of global IP addresses are blocked. 
This includes everything from YouTube and Reddit to Wikipedia.\nThe Situation:\nSpeed: My current speed is 2 Mbps (I plan to upgrade to the national \"maximum\" of 6 Mbps soon).\nHardware/OS: I am using Linux Mint 22.3.\nWork: I work as a freelance 3D artist. Constant disconnections and blocks make it almost impossible to sync assets or even look up tutorials.\nWhat I’ve tried so far:\nPaid VPS: I rented a server from Aeza for $12/month. I was detected and blocked within 3 days. My connection is so unstable that heavy obfuscation protocols often \"choke\" the bandwidth entirely, while simple ones get sniped by the firewall instantly.\nFree VPNs: Most Play Store VPNs only deliver 25-40% of my already slow speed. Paid ones are slightly better (up to 75-90% of line speed), but they get blocked very quickly.\nLegacy Tools: Programs like Free Browser (Android) and SoftEther (Windows) used to work well, but Free Browser is mobile-only, and I can't find a reliable way to run SoftEther on Linux Mint.\nMy Questions:\nWhat is the most \"lightweight\" stealth protocol for a very slow connection (2-6 Mbps) that can survive a national-scale firewall? Is VLESS + Reality a good option here?\nAre there any Linux Mint native clients you recommend? I’ve heard of nekoray or v2rayA, but I’m not sure which handles low bandwidth better.\nAre there specific VPS regions or providers that are less likely to be flagged than the big ones like Aeza?\nAny advice from network engineers or people living in high-censorship regions would be a lifesaver. I just want to be able to work and learn.\nThank you! 
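On the question above of why heavy obfuscation "chokes" a 2 Mbps line: part of the answer is fixed per-packet overhead, which eats a larger share of goodput as transports stack up. A rough sketch — the overhead byte counts are assumptions for illustration, not measured values for any specific protocol:

```python
# Effective goodput after per-packet tunnel overhead on a slow link.
# Overhead figures are rough assumptions for illustration only.
link_mbps = 2.0       # current line speed from the post
payload_bytes = 1400  # assumed useful payload per packet (near-MTU)

overheads = {
    "bare link": 0,
    "plain WireGuard-style tunnel": 60,    # assumed bytes/packet
    "TLS-wrapped stealth transport": 110,  # assumed
    "nested/heavy obfuscation": 220,       # assumed
}
for name, extra in overheads.items():
    goodput = link_mbps * payload_bytes / (payload_bytes + extra)
    print(f"{name}: ~{goodput:.2f} Mbps usable")
```

The bigger real-world cost on an unstable line is usually retransmission and handshake latency, but the sketch shows why a lightweight transport matters far more at 2 Mbps than at 100 Mbps.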
🙇‍♂️</p>\n</div>\n\n<p><small>⬆️ 275 points | 💬 88 comments</small></p>","metadata":{"score":705,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t0bo8e","title":"My setup","link":"https://www.reddit.com/r/selfhosted/comments/1t0bo8e/my_setup/","author":"Expert-Paramedic1156","published_at":"2026-04-30T22:51:17+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/3d/3d3c51fe9cc03cf8492482fe96407195480a2cda8b74ece8dc6c05bde6bf4d1d.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>This is my setup. The image is AI-generated, but it looks like this overall. There is no connection between the Proxmox host and media, but Proxmox uses my TrueNAS storage (16TB). I removed everything. Nginx isn’t connected anymore. Everything is LAN. Started homelabbing in Feb with no background.</p>\n\n<p>Watched a lot of videos and read too many posts on here. I run apps I vibe code for personal use.</p>\n</div>\n\n<p><small>⬆️ 200 points | 💬 42 comments</small></p>","metadata":{"score":827,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1t00urv","title":"Pangolin 1.18: Web proxy through VPN, high availability client routing, wildcard resources, alerts, and more","link":"https://www.reddit.com/r/selfhosted/comments/1t00urv/pangolin_118_web_proxy_through_vpn_high/","author":"MrUserAgreement","published_at":"2026-04-30T16:12:03+00:00","content":"\n\n\n<div><p>Hello everyone!</p>\n\n<p>Pangolin 1.18 brings HTTPS support for private resources, multi-site high availability routing, uptime tracking, health checks, alert rules, wildcard resources, and more. Let's dig in!</p>\n\n<p>GitHub: <a href=\"https://github.com/fosrl/pangolin\" rel=\"noopener noreferrer\">https://github.com/fosrl/pangolin</a></p>\n\n<p><em>Pangolin is an open-source, identity-aware remote access platform. 
Use it to securely expose authenticated web applications and private VPN resources to anyone with peer-to-peer zero-trust networking.</em></p>\n\n<p><a href=\"https://preview.redd.it/yrj4fzbsqcyg1.png?width=3456&amp;format=png&amp;auto=webp&amp;s=8deba1390d2be6ec6ea5efdb834284333d559703\" rel=\"noopener noreferrer\">https://preview.redd.it/yrj4fzbsqcyg1.png?width=3456&amp;format=png&amp;auto=webp&amp;s=8deba1390d2be6ec6ea5efdb834284333d559703</a></p>\n\n<h1>HTTPS Private Resources</h1>\n\n<p>Private HTTP is a new resource type for web workloads. It behaves like a public resource with a domain name and valid TLS but nothing is exposed on the public internet. The hostname resolves to a reverse proxy running in the site connector (Newt) and only serves traffic when the user has an active Pangolin client connection.</p>\n\n<p><a href=\"https://preview.redd.it/mxs6483tqcyg1.png?width=1730&amp;format=png&amp;auto=webp&amp;s=917528d2af7c82cae70812b07ee0bf64e95cc682\" rel=\"noopener noreferrer\">https://preview.redd.it/mxs6483tqcyg1.png?width=1730&amp;format=png&amp;auto=webp&amp;s=917528d2af7c82cae70812b07ee0bf64e95cc682</a></p>\n\n<h1>Multi-Site Routing and High Availability</h1>\n\n<p>Private resources now support multiple site connectors. Pangolin routes traffic through whichever path is best at the time and automatically fails over if a site goes offline.</p>\n\n<p><a href=\"https://preview.redd.it/wpvwjhqtqcyg1.png?width=1762&amp;format=png&amp;auto=webp&amp;s=5677b90b3ca3271e4f767c478c51b925017352da\" rel=\"noopener noreferrer\">https://preview.redd.it/wpvwjhqtqcyg1.png?width=1762&amp;format=png&amp;auto=webp&amp;s=5677b90b3ca3271e4f767c478c51b925017352da</a></p>\n\n<h1>Wildcard Resources</h1>\n\n<p>Set the subdomain field to * on a public resource and Pangolin routes every hostname at that level through the same resource and tunnel. 
Access rules and auth apply across all matched hostnames, and the original Host header is preserved for downstream routing.</p>\n\n<h1>And More</h1>\n\n<p>1.18 also adds uptime tracking on sites and resources, standalone health checks (HTTP and TCP) that can watch anything on your network, alert rules with email, webhook, the ability to import an identity provider across organizations, and a handful of UI improvements and bug fixes.</p>\n\n<p><a href=\"https://preview.redd.it/740y4bfneeyg1.png?width=2030&amp;format=png&amp;auto=webp&amp;s=ae0b7f7a9798d002ea2c7a27c4b0bf8169c5d6d1\" rel=\"noopener noreferrer\">https://preview.redd.it/740y4bfneeyg1.png?width=2030&amp;format=png&amp;auto=webp&amp;s=ae0b7f7a9798d002ea2c7a27c4b0bf8169c5d6d1</a></p>\n\n<p>Check out the full blog post for details on everything in this release: <a href=\"https://pangolin.net/news/1-18-release\" rel=\"noopener noreferrer\">https://pangolin.net/news/1-18-release</a></p>\n\n<p>As always, available for self-hosting via the Community or Enterprise editions or on Pangolin Cloud. The Enterprise is free for personal use.</p>\n\n<p>If you haven't starred us on GitHub yet, it genuinely helps. Thank you!</p>\n</div>\n\n<p><small>⬆️ 240 points | 💬 92 comments</small></p>","metadata":{"score":249,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1szdvgo","title":"I came to realize that selfhosted forums are an essential part towards digital sovereignty","link":"https://www.reddit.com/r/selfhosted/comments/1szdvgo/i_came_to_realize_that_selfhosted_forums_are_an/","author":"Digital_Nerve_8765","published_at":"2026-04-29T22:32:07+00:00","content":"\n\n\n<div><p>Hey, here's the <a href=\"https://github.com/danielbrendel/hortusfox-web\" rel=\"noopener noreferrer\">HortusFox</a> dev again. 
</p>\n\n<p>I got inspired by Dan Brown's decision to <a href=\"https://www.bookstackapp.com/blog/april-2026-community-updates/\" rel=\"noopener noreferrer\">abandon discord for a hosted zulip instance</a>. And then it hit me...</p>\n\n<p>Back in the day, software projects had a website, documentation and a forum. Some had, in addition, an IRC channel somewhere. This just worked. It was an amazing way to foster community and keep control over your data. </p>\n\n<p>So, today I was very unhappy about enshittification again. I mean, we used to have soooo many platforms and sites back in the day. Now everything takes place on a handful of platforms. Internet monopolization by corporations. I know, this is not recent news. We all know that. </p>\n\n<p>I believe forums may be a key aspect of regaining digital sovereignty. That's why I've decided to set up a forum infrastructure for HortusFox. When tinkering around, I eventually decided to go with <a href=\"https://github.com/flarum/flarum\" rel=\"noopener noreferrer\">Flarum</a>. Simply because it's easy to install, uses the well-established Laravel framework, and I like its out-of-the-box style without any additional extensions installed. </p>\n\n<p>The selfhosted community is one of the most aware communities when it comes to data protection and digital sovereignty. I love that! That's why I once again decided to post here. ❤️</p>\n\n<p>As for me, I am now going through the process of migrating from Discord to Flarum. I mean, Discord feels great and it offers many features, but it's ultimately centralized, it only has closed communities in terms of SEO, and recent decisions around age verification are concerning. The latter is also a reason why I finally abandoned publishing Play Store apps three years ago and went fully PWA. 
Microsoft Store does the same now (removed sign-up fee in favor of ID verification).</p>\n\n<p>Maybe I'm a bit carried away, but imagine if even Reddit communities such as <a href=\"/r/opensource\" rel=\"noopener noreferrer\">r/opensource</a> or <a href=\"/r/selfhosted\" rel=\"noopener noreferrer\">r/selfhosted</a> abandoned Reddit in favor of forum-based communities run by volunteers? Reddit is not our friend. And its various decisions to wipe out third-party apps and push echo chambers aren't really something I consider \"the heart of the internet\". By the way, did you notice Reddit now tests forcing people to use the mobile app when they browse Reddit via a mobile browser? Pretty sure they will eventually roll out this \"feature\".</p>\n\n<p>What do you think? Both developers and selfhosters, would you like the idea of turning back to forums again? </p>\n\n<p>PS: HortusFox now also officially backs the <a href=\"https://www.ehrenamt-opensource.de/en\" rel=\"noopener noreferrer\">open-source petition</a> to have the German government acknowledge open-source work as volunteering by law. A big thanks to Boris Hinzer for launching the campaign.</p>\n</div>\n\n<p><small>⬆️ 203 points | 💬 43 comments</small></p>","metadata":{"score":409,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sz2r37","title":"Hound - A Media Server Alternative to Plex/Jellyfin + Stremio","link":"https://www.reddit.com/r/selfhosted/comments/1sz2r37/hound_a_media_server_alternative_to_plexjellyfin/","author":"NearbyYak7156","published_at":"2026-04-29T15:50:30+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/67/67879ef8aa4e2df4e3cb19c108371e5def78035f3fa018cb4985c352d5b7beea.png\" alt=\"image\"></p>\n\n\n\n<div><p><strong>What is Hound?</strong></p>\n\n<p>Hound is a self-hosted, open-source media server, like Plex/Jellyfin, but with the extra ability to stream content through P2P (torrent) or HTTP/Debrid without downloading first. 
With Hound, you have the flexibility of fully controlling your media like Jellyfin, but can also stream instantly ala streaming services. It's the best of both worlds.</p>\n\n<p>I posted about Hound in this sub years ago, when it was originally built as a simple movie/tvshow tracker. Since then Hound has evolved into a full media server. <a href=\"https://www.reddit.com/r/selfhosted/comments/12fov6v/hound_self_hosted_solution_for_tracking_tv_shows/\" rel=\"noopener noreferrer\">Link.</a></p>\n\n<p><strong>Links</strong></p>\n\n<ul>\n<li><a href=\"https://github.com/Hound-Media-Server/hound\" rel=\"noopener noreferrer\">Github Repo</a></li>\n<li><a href=\"https://hound-media-server.github.io/hound-site/\" rel=\"noopener noreferrer\">Website + Docs</a></li>\n<li>Demo: see below</li>\n<li><a href=\"https://github.com/Hound-Media-Server/hound-app\" rel=\"noopener noreferrer\">Github Repo (Client Apps)</a></li>\n</ul>\n\n<p><strong>Features</strong></p>\n\n<ul>\n<li>Free-range, organic code, written by a person</li>\n<li>Stream your own content from your drives, or stream content directly from P2P (torrent) and HTTP/Debrid sources through Stremio addons</li>\n<li>Download content to your drives directly from the Hound Web portal</li>\n<li>Very simple to deploy, &lt;10 mins before you start watching content</li>\n<li>Hound was originally built as a media tracker, so it has robust features such as collections, reviews, comments, watch history/activity. 
All your watches and rewatches are automatically tracked</li>\n<li>UI/UX is a core focus, designed with your mom using this in mind</li>\n<li>No telemetry</li>\n</ul>\n\n<p><strong>Demo</strong></p>\n\n<p>Note that the web portal isn't optimized for mobile yet:</p>\n\n<p>Access the demo <a href=\"https://hound-demo.yuwono.xyz/\" rel=\"noopener noreferrer\">here</a>.</p>\n\n<pre><code>username: selfhosted\npassword: password\n</code></pre>\n\n<p>This is just the web portal, for actually watching content you'll want to use the apps</p>\n\n<p><strong>Platforms</strong></p>\n\n<p>Android and Android TV apps are available, you'll need to sideload the APKs. iOS and tvOS require a bit more time for testing and to distribute through TestFlight. They share the same code (built on <a href=\"https://github.com/react-native-tvos/react-native-tvos\" rel=\"noopener noreferrer\">React Native TVOS</a>) so most of the effort is done.</p>\n\n<ul>\n<li><a href=\"https://github.com/Hound-Media-Server/hound-app/releases\" rel=\"noopener noreferrer\">Android and Android TV Releases</a></li>\n<li><a href=\"https://github.com/Hound-Media-Server/hound-app\" rel=\"noopener noreferrer\">Github Repo - Android, Apple</a></li>\n</ul>\n\n<p><strong>Installation</strong></p>\n\n<p>Docker compose is the recommended way to install Hound:</p>\n\n<pre><code>services:\n  hound-postgres:\n    container_name: hound-postgres\n    image: postgres:18\n    environment:\n      POSTGRES_DB: hound_db\n      POSTGRES_USER: hound\n      POSTGRES_PASSWORD: super-strong-password\n    volumes:\n      - ./Hound Data/postgres_data:/var/lib/postgresql\n    healthcheck:\n      test: [\"CMD-SHELL\", \"pg_isready -U hound -d hound_db\"]\n      interval: 5s\n      timeout: 5s\n      retries: 5\n\n  hound-server:\n    container_name: hound-server\n    image: houndmediaserver/hound:latest\n    depends_on:\n      hound-postgres:\n        condition: service_healthy\n    ports:\n      - \"2323:2323\"\n    environment:\n      - 
POSTGRES_DB=hound_db\n      - POSTGRES_USER=hound\n      - POSTGRES_PASSWORD=super-strong-password\n      - HOUND_SECRET=super-strong-secret\n    volumes:\n      - ./Hound Data:/app/Hound Data\n      # (Optional) attach your media library\n      # IMPORTANT: Please read the docs before doing this\n      # - /path/to/movies:/app/External Library/Movies\n      # - /path/to/shows:/app/External Library/TV Shows\n</code></pre>\n\n<ul>\n<li>Change <code>POSTGRES_PASSWORD</code> on both hound-postgres and hound-server services</li>\n<li>Change <code>HOUND_SECRET</code></li>\n</ul>\n\n<p>Then run <code>docker compose up -d</code></p>\n\n<p>Access the web portal at port <code>2323</code>:</p>\n\n<pre><code>http://&lt;ip-address&gt;:2323\nusername: admin\npassword: password\n</code></pre>\n\n<p>Make sure you change your password immediately.</p>\n\n<p>Next, you'll want to set up a provider to start watching content; refer to the guides below:</p>\n\n<ul>\n<li><a href=\"https://hound-media-server.github.io/hound-site/installation.html\" rel=\"noopener noreferrer\">Full Installation Docs</a></li>\n<li><a href=\"https://hound-media-server.github.io/hound-site/provider.html\" rel=\"noopener noreferrer\">Setting Up a Provider</a></li>\n</ul>\n\n<p><strong>Why Hound?</strong></p>\n\n<p>When I set up Jellyfin for my friends and family, I found that they kept switching back to Netflix/Prime when it was more convenient. Today, the Plex/Jellyfin ecosystem is quite mature. But for some (especially older) people, using a separate app, requesting content first, and waiting a couple of minutes (or even longer) can be unintuitive/inconvenient. It's much nicer to be able to scroll and discover content, and watch immediately in seconds.</p>\n\n<p>From an admin perspective, drives are getting increasingly expensive, and larger libraries drive electricity costs even more.</p>\n\n<p>My vision for Hound was to have all the advantages of self-hosting media, with the flexibility of streaming. 
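The compose file above ships with super-strong-password and super-strong-secret placeholders for POSTGRES_PASSWORD and HOUND_SECRET. One way to generate real values — a sketch using only Python's standard library; the variable names simply mirror the compose keys:

```python
# Generate replacement values for the placeholder credentials in the
# compose file (POSTGRES_PASSWORD / HOUND_SECRET). Stdlib only.
import secrets

postgres_password = secrets.token_urlsafe(24)  # 32 URL-safe chars
hound_secret = secrets.token_hex(32)           # 64 hex chars

print(f"POSTGRES_PASSWORD={postgres_password}")
print(f"HOUND_SECRET={hound_secret}")
```

Remember to paste the same POSTGRES_PASSWORD into both the hound-postgres and hound-server services, as the instructions above require.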
You can still curate a library with whatever content you like, but for content not yet downloaded in your library, Hound switches automatically to P2P/Debrid streaming, so it's a seamless experience for users.</p>\n\n<p><strong>Hound is in Beta + Pricing</strong></p>\n\n<p>Hound is in Beta, so please expect bugs and run backups often. Although Hound is completely self-hosted and open source (AGPLv3), there will be a paid tier when Hound leaves beta:</p>\n\n<ul>\n<li>Hound is completely free, all features unlocked for one user</li>\n<li>A paid license will be required to unlock unlimited users</li>\n<li>No subscription, one-time purchase at a reasonable price</li>\n<li>License activation is completely offline</li>\n</ul>\n\n<p>Unfortunately, unlike the amazing maintainers at Jellyfin, I can't keep Hound free. I thought long and hard about pricing that respects self-hosting and open source philosophies. I settled on this model so anyone can try Hound and all its features for free, and have an informed choice on whether or not to purchase.</p>\n\n<p>Since Hound is completely open-source, I can't stop you from forking and removing the license checks. Instead of doing this, if you contribute to Hound's development actively, I'll give you keys upon release.</p>\n\n<p>You can't actually purchase yet since we're in Beta, but I wanted you to know in advance.</p>\n\n<p>Please try the demo and leave feedback! 
If you like the project, please consider adding Hound to your stack, and even contributing!</p>\n</div>\n\n<p><small>⬆️ 228 points | 💬 96 comments</small></p>","metadata":{"score":621,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sz2kur","title":"Do you keep your docker containers running 24/7","link":"https://www.reddit.com/r/selfhosted/comments/1sz2kur/do_you_keep_your_docker_containers_running_247/","author":"shrimpdiddle","published_at":"2026-04-29T15:44:24+00:00","content":"\n\n\n<div><p>Do you keep your Docker containers running 24/7, or spin them up only when they are needed? For example, I use BentoPDF maybe three times a week, so I've gotten to where I down the container after I'm done using it. The only containers I leave up are my “infrastructure” apps... vaultwarden, radicale, WireGuard, NPM, Jellyfin.</p>\n\n<p>Given that most images have unresolved CVEs, reducing exposure is just another security layer. It also frees up memory and reduces CPU load, and the power that load requires.</p>\n</div>\n\n<p><small>⬆️ 174 points | 💬 146 comments</small></p>","metadata":{"score":268,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1symidv","title":"Can I host myself streaming games (like on Twitch) to my own website?","link":"https://www.reddit.com/r/selfhosted/comments/1symidv/can_i_host_myself_streaming_games_like_on_twitch/","author":"FantasticFrontButt","published_at":"2026-04-29T02:57:28+00:00","content":"\n\n\n<div><p>I essentially want to be able to embed a stream of myself (through OBS) onto a personal website without relying on external services like YouTube, Kick, or Twitch.</p>\n\n<p>I do not expect large audiences, but somehow integrating IRC chat would be great.</p>\n\n<p>Might anyone point me in any direction I'd need to start to accomplish this?</p>\n</div>\n\n<p><small>⬆️ 185 points | 💬 58 
comments</small></p>","metadata":{"score":185,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sxqyh7","title":"It’s always DNS.","link":"https://www.reddit.com/r/selfhosted/comments/1sxqyh7/its_always_dns/","author":"warriorforGod","published_at":"2026-04-28T04:23:13+00:00","content":"\n\n\n<div><p>Well, having a Proxmox server go down silently, then bringing it back up and having it spin up a second DNS server with the same IP as your primary DNS server, so that nothing works in terms of name resolution, whether local or remote, is a sobering experience.</p>\n\n<p>You should try it sometime. Lmao.</p>\n\n<p>Edit: Autocorrect fixing. </p>\n</div>\n\n<p><small>⬆️ 148 points | 💬 46 comments</small></p>","metadata":{"score":160,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1swsb7o","title":"Glance Dashboard V.2 | GA","link":"https://www.reddit.com/r/selfhosted/comments/1swsb7o/glance_dashboard_v2_ga/","author":"ginesjunior11","published_at":"2026-04-27T03:36:19+00:00","content":"\n\n<p><a href=\"https://www.reddit.com/gallery/1swsb7o\" rel=\"noopener noreferrer\">https://www.reddit.com/gallery/1swsb7o</a></p>\n\n\n<div><p>After a lot of trial &amp; error (and a few <code>docker restart</code> moments 😅), I finally got my dashboard where I want it:</p>\n\n<ul>\n<li>Full monitoring (Docker, services, network)</li>\n<li>Tailscale + WireGuard integration</li>\n<li>Custom API widgets (live stats &amp; device tracking)</li>\n<li>Home Assistant + automation layer</li>\n<li>Custom themes &amp; UI tweaks</li>\n</ul>\n\n<p>All running on a Raspberry Pi 5 with a clean and optimized Docker stack.</p>\n\n<p>Still a work in progress (because let’s be honest… a homelab is never “finished”), but it’s already my daily control center.</p>\n\n<p>What would you add next? 
Any ideas for the next upgrade?</p>\n</div>\n\n<p><small>⬆️ 333 points | 💬 37 comments</small></p>","metadata":{"score":1524,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1swkj5a","title":"It’s a mess, but it’s my mess.","link":"https://www.reddit.com/r/selfhosted/comments/1swkj5a/its_a_mess_but_its_my_mess/","author":"New-Raspberry3572","published_at":"2026-04-26T21:42:59+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/e4/e4e76e237b80ffd82ecbffc58d9f269d4e7ea7cbd85dc83fca9fc2852765b0de.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>Finally moved my Pi 3+ off the desk and mounted it (with hopes and dreams) next to my ISP's WiFi 6 router. I know the cables are a crime scene, but everything is working 24/7 so I’m afraid to touch it lol.</p>\n\n<p>Running a pretty packed stack for a Pi 3+:</p>\n\n<ul>\n<li>Pi-hole + Unbound for DNS</li>\n<li>Wireguard for VPN</li>\n<li>Smokeping + Uptime Kuma </li>\n<li>Vaultwarden</li>\n<li>2Fauth</li>\n</ul>\n</div>\n\n<p><small>⬆️ 199 points | 💬 43 comments</small></p>","metadata":{"score":257,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1svz9fi","title":"My self-hosted website ran on my Pi Zero 2W","link":"https://www.reddit.com/r/selfhosted/comments/1svz9fi/my_selfhosted_website_ran_on_my_pi_zero_2w/","author":"mads_5489","published_at":"2026-04-26T06:00:55+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/67/678baf07a849739bf087dd505d71551d91d4ed7b9e76691b009b07e5a22b5106.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>So I have been working on my personal portfolio website for some time now, but I had since forgotten about it and had no motivation to expand it. 
I have been looking for the perfect use case for my Pi Zero 2W, after moving my Pi-hole server from it onto my new Pi 5.</p>\n\n<p>I then saw this post: <a href=\"https://www.reddit.com/r/selfhosted/comments/1sqvujn/selfhosted_public_website_running_on_a_10_esp32/\" rel=\"noopener noreferrer\">https://www.reddit.com/r/selfhosted/comments/1sqvujn/selfhosted_public_website_running_on_a_10_esp32/</a></p>\n\n<p>And it honestly got me excited to update my site, and move everything over to the Pi. It is certainly much easier to run my site, going from a measly 512 KB of RAM to just about 512 MB.</p>\n\n<p>The site is finally in a state where I feel comfortable sharing it, and I hope you guys like the aesthetic. There is a guestbook to sign as well :)</p>\n\n<p>Site:<br>\n<a href=\"https://spellbound.sh\" rel=\"noopener noreferrer\">https://spellbound.sh</a></p>\n</div>\n\n<p><small>⬆️ 220 points | 💬 19 comments</small></p>","metadata":{"score":410,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1svxsx1","title":"MinIO repository was archived on Apr 25, 2026","link":"https://www.reddit.com/r/selfhosted/comments/1svxsx1/minio_repository_was_archived_on_apr_25_2026/","author":"Kitchen-Patience8176","published_at":"2026-04-26T04:43:43+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/4d/4d70c0de1e8217986c645dcfb10dac7a67de76949d1a8232ca18f1330cf09f5c.png\" alt=\"image\"></p>\n\n\n\n<div><p>Just learned about S3-style object storage and was looking into self-hosted options for my homelab. Came across MinIO and got pretty excited because it seemed like exactly the kind of thing I’d want to learn and maybe use.</p>\n\n<p>Then I noticed the repo is archived, which was a bit discouraging.</p>\n\n<p>I know that doesn’t necessarily mean the software is dead, but it made me pause before building around it.</p>\n\n<p>For those using MinIO, would you still adopt it today for a homelab? 
Or would you look at alternatives instead?</p>\n\n<p>Curious what people here are doing.</p>\n</div>\n\n<p><small>⬆️ 310 points | 💬 62 comments</small></p>","metadata":{"score":582,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1svq0y0","title":"PSA: if you’re running iSponsorblockTV you’ll need to pair your devices again","link":"https://www.reddit.com/r/selfhosted/comments/1svq0y0/psa_if_youre_running_isponsorblocktv_youll_need/","author":"dmunozv04","published_at":"2026-04-25T22:31:58+00:00","content":"\n\n\n<div><p>Hi there, I’m iSponsorblockTV’s maintainer.</p>\n\n<p>If you’re running iSponsorblockTV, you’ll need to pair your devices again since YouTube has changed the screenId format and is in the process of revoking all older codes.</p>\n\n<p>For those of you that don’t know, iSponsorblockTV allows you to use SponsorBlock on all YouTube TV devices (TVs, sticks and consoles). It can also click the skip button for you and mute native YouTube ads.</p>\n\n<p>Sadly there’s nothing that can be done on my part other than pairing devices again.</p>\n\n<p>EDIT: the new screen id will be 64 hex digits long, compared to the old 26-character format</p>\n</div>\n\n<p><small>⬆️ 197 points | 💬 18 comments</small></p>","metadata":{"score":360,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1svh7km","title":"Responsibility and Ownership: You Can’t Vibe‑Code Your Way Around It","link":"https://www.reddit.com/r/selfhosted/comments/1svh7km/responsibility_and_ownership_you_cant_vibecode/","author":"SigsOp","published_at":"2026-04-25T16:46:38+00:00","content":"\n\n\n<div><p>The title took me a while to land on, but the thoughts behind it have been sitting in my head for months. I've been into homelabbing since early 2020 with my first build, then a second, then a third, then whatever my bank account allowed after that. It's been a lot of fun and tears. 
But lately, browsing this community has had an edge to it: a lot of AI negativity that I mostly understand, and that's what I want to write about.</p>\n\n<p>I'm a programmer by trade (army before, but I was released and went back to school). The usage of AI at work has increased and I don't see that trend stopping quite yet. AI is useful as a companion for tedious tasks like documentation, reviewing SQL, tedious front-end markup, one-shot scripts, etc. But using it to one-shot a whole application is risky, and if published, downright irresponsible, and this is where I think most of the friction is happening, at least for me.</p>\n\n<p>When I see the AI projects posted here, with my experience I think I can separate the wholly vibe-coded ones from those where AI was used to assist. The latter I don't mind; despite what some Luddites say, that's what the industry is like now. When you code something for your own use, the blast radius is limited: the thing could run horribly and it won't matter, because you are the only one who suffers the consequences. But if you publish this code, you need to take ownership of it, and ownership brings responsibilities that you need to shoulder. Even as a programmer I don't take this lightly; this is not something people should dismiss with the command <code>git push origin main</code>.</p>\n\n<p>It's one of the reasons I don't publish my stuff, or at the very least don't advertise it. Not because it's vibe coded (it isn't), but because I would still need to take responsibility for it, and that's time, effort, and commitment that shouldn't be underestimated (many seem to). Maintenance is not a trivial affair: thinking about current and future users, how you approach breaking changes, how you architect things to avoid breaking changes as much as possible. 
Continuity of the project is also important. If you take your project and your user base seriously, you should have this in mind: \"What if I can't continue the project?\" Archiving the repo and disappearing is not the right way to do things.</p>\n\n<p>So, before publishing and parading your project, you just need to ask yourself a simple question: \"Can I take ownership and responsibility for this code?\" The answer will depend on your definitions of these concepts, but if you think about it for more than five minutes, you might just realise your project should stay private.</p>\n\n<p>PS: When I talk about responsibility and ownership here, I mean the moral/ethical implications; you are obviously responsible for what runs on your machine. Excuse some awkward syntax or phrases, I am not a native English speaker.</p>\n</div>\n\n<p><small>⬆️ 230 points | 💬 87 comments</small></p>","metadata":{"score":526,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1su3pp2","title":"Turned my broken Steam Deck into a low-power 2.5GbE NAS (Debian + rsync + Glances)","link":"https://www.reddit.com/r/selfhosted/comments/1su3pp2/turned_my_broken_steam_deck_into_a_lowpower_25gbe/","author":"Decker_Bazzite","published_at":"2026-04-24T03:05:25+00:00","content":"\n\n<p><a href=\"https://www.reddit.com/gallery/1su3pp2\" rel=\"noopener noreferrer\">https://www.reddit.com/gallery/1su3pp2</a></p>\n\n\n<div><p>My Steam Deck LCD screen died, so I repurposed it as a headless Debian 12 NAS.</p>\n\n<p>Current setup:</p>\n\n<p>- Debian 12 minimal (no GUI)</p>\n\n<p>- 2.5GbE USB NIC</p>\n\n<p>- 6TB (main storage) + 4TB (backup)</p>\n\n<p>- rsync-based incremental backups (~280MB/s)</p>\n\n<p>I added a small sub display running Glances for real-time monitoring (CPU / RAM / network / processes).</p>\n\n<p>This lets me check system status instantly without SSH.</p>\n\n<p>Also integrated some controls via Stream Deck:</p>\n\n<p>- One-button safe shutdown (sync + 
poweroff)</p>\n\n<p>- HDD temperature check</p>\n\n<p>- SSH access</p>\n\n<p>The NAS is not always-on.</p>\n\n<p>I power it on only when needed (backups / file access).</p>\n\n<p>So far it's stable and surprisingly fast for a Steam Deck.</p>\n\n<p>Happy to answer any questions 👍</p>\n</div>\n\n<p><small>⬆️ 213 points | 💬 59 comments</small></p>","metadata":{"score":660,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1stjtay","title":"Bitwarden CLI has been compromised. Check your stuff.","link":"https://www.reddit.com/r/selfhosted/comments/1stjtay/bitwarden_cli_has_been_compromised_check_your/","author":"RedTermSession","published_at":"2026-04-23T14:07:55+00:00","content":"\n\n<p><a href=\"https://socket.dev/blog/bitwarden-cli-compromised\" rel=\"noopener noreferrer\">https://socket.dev/blog/bitwarden-cli-compromised</a></p>\n\n\n\n<div><p>Same as the title. The Bitwarden CLI has been compromised and it would be good to check your stuff. I know how popular Bitwarden is around here. </p>\n</div>\n\n<p><small>⬆️ 723 points | 💬 152 comments</small></p>","metadata":{"score":1478,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1st7c1x","title":"In which folder do you keep your Docker stack?","link":"https://www.reddit.com/r/selfhosted/comments/1st7c1x/in_which_folder_do_you_keep_your_docker_stack/","author":"Artistic_Quail650","published_at":"2026-04-23T03:32:33+00:00","content":"\n\n\n<div><p>I keep my entire Docker stack in /opt/docker/ and all my external volumes in /mnt/hdd_1tb/{nextcloud, jellyfin, immich, etc.}</p>\n\n<p>I'm curious to hear about other ways people store their files. 
</p>\n</div>\n\n<p><small>⬆️ 177 points | 💬 212 comments</small></p>","metadata":{"score":186,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1ss09kn","title":"Self-hosted personal finance automation: n8n + Actual Budget + SimpleFIN + Claude on my homelab.","link":"https://www.reddit.com/r/selfhosted/comments/1ss09kn/selfhosted_personal_finance_automation_n8n_actual/","author":"Hail_2_Victors","published_at":"2026-04-21T20:47:25+00:00","content":"\n\n\n<div><p>Sharing something I've been running for a few months that's become one of the most useful things on my homelab.</p>\n\n<p><strong>The stack:</strong></p>\n\n<ul>\n<li>Actual Budget (self-hosted, Docker)</li>\n<li>actual-auto-sync bridge for SimpleFIN bank sync</li>\n<li>n8n (self-hosted) as the automation backbone</li>\n<li>Claude Haiku via Anthropic API for AI categorization (~$0.01/100 transactions)</li>\n<li>Telegram for notifications</li>\n<li>Notion for rule logging (optional)</li>\n</ul>\n\n<p><strong>What it does:</strong></p>\n\n<p>Six n8n workflows that run on schedules and replace what I used to do manually every week:</p>\n\n<ul>\n<li><strong>Auto-categorizer:</strong>&nbsp;Fetches uncategorized transactions every 4 hours, sends to Claude with my full category list as context, applies the category if confidence ≥ 85%, creates a permanent payee rule so that merchant never hits the API again. Flags low-confidence items via Telegram.</li>\n<li><strong>Monthly envelope funder:</strong>&nbsp;Fires on the 1st, funds every budget category from a template I configured once. 
Fixed amounts first, remainder goes to debt payoff.</li>\n<li><strong>Sunday briefing:</strong>&nbsp;Claude reads my month-to-date budget and sends a plain-English summary — what's over, what's under, one focus for the week.</li>\n<li><strong>Friday paycheck check:</strong>&nbsp;Detects paycheck deposits, sends budget snapshot.</li>\n<li><strong>Rule digest:</strong>&nbsp;Monthly analysis of spending patterns using Claude, logs suggestions for new categorization rules.</li>\n<li><strong>Discovery:</strong>&nbsp;One-time run that prints all your Actual Budget account/category IDs. Saves significant setup time.</li>\n</ul>\n\n<p><strong>Architecture notes:</strong></p>\n\n<ul>\n<li>All credentials are in n8n's native credential store (Anthropic, Notion, Telegram API types) — nothing hardcoded</li>\n<li>Bridge key uses Custom Auth credential type</li>\n<li>Telegram nodes use n8n's native Telegram integration</li>\n<li>Config node at the top of each workflow — one place to edit, everything else references it</li>\n</ul>\n\n<p>The stack runs entirely on self-hosted n8n. No recurring SaaS costs beyond SimpleFIN (~$15/year) and Anthropic API calls (~$0.01/100 transactions). 
Everything else runs on your own infrastructure.</p>\n\n<p><a href=\"https://github.com/hail2victors/n8n-Actual-Automation\" rel=\"noopener noreferrer\">https://github.com/hail2victors/n8n-Actual-Automation</a></p>\n</div>\n\n<p><small>⬆️ 307 points | 💬 106 comments</small></p>","metadata":{"score":339,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1srtphr","title":"LubeLogger, Self-Hosted Vehicle Maintenance and Fuel Mileage Tracker, has some Important Quality of Life Improvements You Should Know About","link":"https://www.reddit.com/r/selfhosted/comments/1srtphr/lubelogger_selfhosted_vehicle_maintenance_and/","author":"ChiefAoki","published_at":"2026-04-21T16:59:42+00:00","content":"\n\n\n<div><p>Hi all, it's been a few months and we've made some incremental updates to LubeLogger over that time.</p>\n\n<p>In case you've never heard of LubeLogger, it's a self-hosted vehicle maintenance and fuel mileage tracker: you can log your service records and fillups in here, and it will tell you exactly how much you've spent on your vehicles.</p>\n\n<p><a href=\"https://lubelogger.com\" rel=\"noopener noreferrer\">Website</a></p>\n\n<p><a href=\"https://docs.lubelogger.com\" rel=\"noopener noreferrer\">Documentation</a></p>\n\n<p><a href=\"https://github.com/hargata/lubelog/\" rel=\"noopener noreferrer\">Git Repository</a></p>\n\n<p><strong>First</strong>, as stated in our <a href=\"https://www.reddit.com/r/selfhosted/comments/1r1j4lm/lubelogger_selfhosted_vehicle_maintenance_tracker/\" rel=\"noopener noreferrer\">previous post here</a> with the big UI update, we were going to start converting the grids in mobile views to cards, which makes it a lot easier to see all data without horizontal scrolling on small vertical screens, and that's finally delivered. If you prefer the older grid view in mobile, there is an option to revert in the Settings page.  
</p>\n\n<p><a href=\"https://preview.redd.it/13txlwifkkwg1.png?width=800&amp;format=png&amp;auto=webp&amp;s=74c3eae6a1750460529764ff9fa047c0ceeab0b7\" rel=\"noopener noreferrer\">https://preview.redd.it/13txlwifkkwg1.png?width=800&amp;format=png&amp;auto=webp&amp;s=74c3eae6a1750460529764ff9fa047c0ceeab0b7</a></p>\n\n<p><strong>Second, there are now real-time notifications</strong> built within the app, if you follow us on the <a href=\"/r/lubelogger\" rel=\"noopener noreferrer\">r/lubelogger</a> subreddit, you might have heard of a daemon service that needed to be deployed separately, well that's no longer the case as we have integrated the daemon features into the LubeLogger app itself. Real-time notifications will allow you to immediately be notified when a reminder has its urgency changed to an urgency that you're tracking(i.e.: a reminder went from Not Urgent to Urgent), and it can be integrated with nearly every notification service out there as long as they take a HTTP POST request(there are samples for NTFY, Gotify, and Discord in the Documentation), if you don't wish to use an external notification service, it can also be configured to use the pre-existing SMTP settings.</p>\n\n<p><a href=\"https://www.youtube.com/watch?v=HuMbkwJs-K4\" rel=\"noopener noreferrer\">Video Walkthrough</a></p>\n\n<p><a href=\"https://docs.lubelogger.com/Installation/Server%20Settings/\" rel=\"noopener noreferrer\">Documentation</a></p>\n\n<p>As part of this, there are also Automated Events that you can now configure, some examples of what you can do with Automated Events:</p>\n\n<ul>\n<li>Send an email to vehicle collaborators at a fixed time everyday containing a list of all reminders in specific urgencies(even if their urgency hasn't changed)</li>\n<li>Create and backup and send it in an email to the root user at a fixed time everyday</li>\n<li>Clean up temp folders or unlinked documents and vehicle thumbnails at a fixed time everyday</li>\n</ul>\n\n<p>Here's what the 
automated backup email looks like:</p>\n\n<p><a href=\"https://preview.redd.it/q4mgykzzmkwg1.png?width=1363&amp;format=png&amp;auto=webp&amp;s=1175e815a0ff23837cf3ed7192087fcb83c6c39c\" rel=\"noopener noreferrer\">https://preview.redd.it/q4mgykzzmkwg1.png?width=1363&amp;format=png&amp;auto=webp&amp;s=1175e815a0ff23837cf3ed7192087fcb83c6c39c</a></p>\n\n<p>Third, there is now a smoother way to onboard OIDC users with SSO-specific registration options.</p>\n\n<p><a href=\"https://docs.lubelogger.com/Advanced/OpenID/#oidc-user-registration\" rel=\"noopener noreferrer\">Documentation</a></p>\n\n<p><strong>Misc. Improvements:</strong></p>\n\n<p>CSVs are now validated before any imports are performed, and the validator will tell you what went wrong/was formatted wrong:</p>\n\n<p><a href=\"https://preview.redd.it/k0okuk9unkwg1.png?width=525&amp;format=png&amp;auto=webp&amp;s=ef159f8174acd22b83a9f1814127d2d16c0a5ae3\" rel=\"noopener noreferrer\">https://preview.redd.it/k0okuk9unkwg1.png?width=525&amp;format=png&amp;auto=webp&amp;s=ef159f8174acd22b83a9f1814127d2d16c0a5ae3</a></p>\n\n<p>You can now add multiple recurring reminders to Plan Records, and you can modify which reminders are tied to these plan records all the way up until the plan is marked as done.</p>\n\n<p><a href=\"https://preview.redd.it/04ptjed3okwg1.png?width=421&amp;format=png&amp;auto=webp&amp;s=6e521ee9c1226a22f44ee2426b25c59ffea8b378\" rel=\"noopener noreferrer\">https://preview.redd.it/04ptjed3okwg1.png?width=421&amp;format=png&amp;auto=webp&amp;s=6e521ee9c1226a22f44ee2426b25c59ffea8b378</a></p>\n\n<p>On that note, there are now QR Codes that you can generate that can either take you to a specific record or to add a new record:</p>\n\n<p><a href=\"https://www.youtube.com/watch?v=dkFRbWtm0Gs\" rel=\"noopener noreferrer\">Video Walkthrough</a></p>\n\n<p>If you want realtime events coming from LubeLogger but you don't want a webhook integration, you can now use WebSockets, which work on a pub-sub model.</p>\n\n<p><a 
href=\"https://docs.lubelogger.com/Advanced/Webhook/#websocket\" rel=\"noopener noreferrer\">Documentation</a></p>\n\n<p>Anyways, that's it from us for this update, have a great Summer and we'll see you in Fall.</p>\n</div>\n\n<p><small>⬆️ 339 points | 💬 47 comments</small></p>","metadata":{"score":397,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1srmjht","title":"Twenty v2.0: Self-hosted CRM","link":"https://www.reddit.com/r/selfhosted/comments/1srmjht/twenty_v20_selfhosted_crm/","author":"charlesBochet","published_at":"2026-04-21T12:43:03+00:00","content":"\n\n\n<div><p>Hi everyone,   </p>\n\n<p>We're an open-source CRM (<a href=\"https://github.com/twentyhq/twenty\" rel=\"noopener noreferrer\">https://github.com/twentyhq/twenty</a>). It's been a while since I last posted here, but today we're shipping our biggest update yet, so I wanted to give a heads-up.  </p>\n\n<p>Twenty 2.0 lets you build apps on top of the CRM without forking the codebase. The idea is a framework one level above web frameworks, tailored specifically for enterprise SaaS. Roughly Salesforce's original idea from 20 years ago, but built from a clean slate in 2026, and self-hostable. </p>\n\n<p>In practice: you can build (or ask Claude Code) a call recording feature or anything you'd like, using an SDK. It creates custom objects, React components, server-side logic. Your code but get everything Twenty already ships: permissions, dashboards, workflows, API, AI chat, webhooks, audit logs. That way, you can ship quickly on top of the engine and still keep version control, CI/CD, and so on.</p>\n\n<p>On the technical side, building extensibility into an enterprise app surfaced interesting problems:</p>\n\n<ul>\n<li>Isolating untrusted React on the frontend. Users can write UI code that renders inside the app, which means real sandboxing — no access to the host app's auth context, no escape from the mount point.</li>\n<li>Per-workspace data models at scale. 
Every workspace can have a completely different schema. Thousands of migrations running with no shared \"master\" schema to reason about.</li>\n<li>Streaming interfaces for long-running background processes. We rebuilt the AI harness 3 times, solving context pollution and building resilient jobs so AI chat tasks can keep running in the background.</li>\n</ul>\n\n<p>Happy to answer any questions and would love to hear your feedback!</p>\n\n<p>Charles (CTO) </p>\n\n<p>All the code is available here on Github</p>\n</div>\n\n<p><small>⬆️ 152 points | 💬 60 comments</small></p>","metadata":{"score":152,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sqvujn","title":"Self-hosted public website running on a $10 ESP32 on my wall","link":"https://www.reddit.com/r/selfhosted/comments/1sqvujn/selfhosted_public_website_running_on_a_10_esp32/","author":"Techtoshi","published_at":"2026-04-20T17:19:17+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/54/544976dea98930eea084b0cb51139de1d176355b8b22b5dcc5d469c4f435bf30.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>My homelab does have the usual rack of stuff (Dell Poweredge R730s and ECU servers), but this one ESP32 sits separately on the wall and serves a public website entirely by itself. No nginx or apache, no Pi, no container... just a $10 microcontroller holding an outbound WebSocket to a Cloudflare Worker that fronts the traffic.</p>\n\n<p>The original launch of this back in 2022 ran for ~500 days before the original board burned out in 2023. The site sat as a read-only archive until now. 
I relaunched it after rebuilding it from the ground up with a lot of redundancy in mind, such as a Worker relay, daily off-site backups to R2, and more; check out the project's <a href=\"https://github.com/Tech1k/helloesp/blob/master/README.md\" rel=\"noopener noreferrer\">README</a>.</p>\n\n<p>Site: <a href=\"https://helloesp.com\" rel=\"noopener noreferrer\">https://helloesp.com</a></p>\n\n<p>Code: <a href=\"https://github.com/Tech1k/helloesp\" rel=\"noopener noreferrer\">https://github.com/Tech1k/helloesp</a></p>\n\n<p>---</p>\n\n<p>Update: So, slight miscalculation on how popular this was going to get; this was a good stress test of the ESP to say the least. The hug of death hit way harder than I anticipated lol</p>\n\n<p>I believe the ESP32 has fully crashed or it's exhausting heap in a loop. It's not even showing up on my router now. The Cloudflare Worker is still serving the offline page in the meantime, which is expected. Probably not the best idea to have made this post while I was at work and away from it. I will reboot and investigate this when I'm home and make adequate changes to get it back online and stable!</p>\n\n<p>Update to the update: it has risen from the cold grasp of offline darkness and reconnected as the WiFi watchdog kicked in and rebooted it automatically. Requests are getting served again and I managed to regain access to it on LAN. Cloudflare is back to showing timeouts for some while others get through (expected behavior). I may lower the SSE cap and raise the min heap threshold. It's back to just getting overloaded at the moment. 
I will investigate further and see what changes I can make later to help keep it afloat and serve more requests on 520KB of RAM lol</p>\n</div>\n\n<p><small>⬆️ 467 points | 💬 35 comments</small></p>","metadata":{"score":3274,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sqi04i","title":"Beyond the Basics: What are your non-negotiable Linux server hardening steps before exposing a service to the web?","link":"https://www.reddit.com/r/selfhosted/comments/1sqi04i/beyond_the_basics_what_are_your_nonnegotiable/","author":"Browndude345","published_at":"2026-04-20T07:09:10+00:00","content":"\n\n\n<div><p>Most of us start by slapping a reverse proxy (like Nginx Proxy Manager or Traefik) and maybe Tailscale or Wireguard on our setups. But for those of you exposing specific services directly to the web, how far do you take your server hardening?</p>\n\n<p>I usually stick to a strict baseline (Fail2Ban/Crowdsec, UFW, disabling root SSH, key-only auth, and isolating apps in Docker containers), but I’m curious about the more advanced layers. 
Are any of you actively running SOC-level monitoring, Wazuh, or strict SELinux/AppArmor profiles on your homelabs?</p>\n\n<p>What is the one security measure you think the average self-hoster overlooks until it's too late?</p>\n</div>\n\n<p><small>⬆️ 364 points | 💬 187 comments</small></p>","metadata":{"score":446,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sq2xkt","title":"My retired gaming-rig became a mediaserver","link":"https://www.reddit.com/r/selfhosted/comments/1sq2xkt/my_retired_gamingrig_became_a_mediaserver/","author":"Drummerrob666","published_at":"2026-04-19T19:42:33+00:00","content":"\n\n<p><img src=\"https://rssglue.subdavis.com/media/5d/5d34899fe1237f9074646029fbb8f6ecee7cbdb8a6b1cc342f6b31bec63fbeb5.jpg\" alt=\"image\"></p>\n\n\n\n<div><p>I just wanted to share my two weeks of progress and configuration that I am quite happy about.</p>\n\n<p>It all started with installing Jellyfin on an outdated machine with Windows 10 to be able to play music and movies from my own collection.</p>\n\n<p>Two weeks later the same computer is now running Proxmox VE with a single VM that is running an arr-stack.</p>\n\n<p>I also took ownership of all 40K of my photos and videos through Immich and said goodbye to Apple and Google.</p>\n\n<p>The picture shows the whole setup and I just wanted to share this because I had so much fun setting this up and wanted to take the opportunity to say thank you for this subreddit, it’s been an inspiration!🙏</p>\n\n<p>**EDIT** I just tried starting up a brand new VM to test my Immich-backup and it worked flawlessly. Database and photos intact, and the full 38K of photos with correct metadata were read from my backup. 
Happy guy!</p>\n</div>\n\n<p><small>⬆️ 253 points | 💬 97 comments</small></p>","metadata":{"score":254,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sps0gj","title":"Reitti v4.0.2: A New Map Experience and Update Progress","link":"https://www.reddit.com/r/selfhosted/comments/1sps0gj/reitti_v402_a_new_map_experience_and_update/","author":"_daniel_graf_","published_at":"2026-04-19T12:45:37+00:00","content":"\n\n<p><a href=\"https://www.reddit.com/gallery/1sps0gj\" rel=\"noopener noreferrer\">https://www.reddit.com/gallery/1sps0gj</a></p>\n\n\n<div><p>Hey everyone, I’m Daniel.</p>\n\n<p>It's been 103 days since I last posted about Reitti, and what a journey it's been! What started as a personal project on June 5, 2025, has grown immensely. In that time, Reitti has seen exactly 52 releases, culminating last week in the biggest and most ambitious update yet: Reitti 4.0! Today, I want to recap everything that's happened since my last post.</p>\n\n<p>The past few months have been dedicated to transforming how I interact with my movement data, and the community's support has been incredible:</p>\n\n<ul>\n<li><strong>1,979 Stars</strong> on GitHub.</li>\n<li><strong>467 Commits</strong> to main with <strong>419 PRs</strong> merged.</li>\n<li><strong>374 Issues</strong> closed.</li>\n<li><strong>25 Contributors</strong> on GitHub.</li>\n<li><strong>13 Languages</strong> supported.</li>\n</ul>\n\n<h1>What is Reitti?</h1>\n\n<p>\"Reitti\" is Finnish for \"route\" or \"path.\" It’s a personal location tracking and analysis application. It is fully local and private, and no data ever leaves your server. You own the database, and you own the memories.</p>\n\n<h1>Reitti 4.0: A New Map Experience</h1>\n\n<p>This release focuses on taking your map experience to the next level. I've completely rebuilt the map from the ground up, switching to a foundation powered by MapLibre GL JS and deck.gl. 
This enables a new level of visualization for your movements: even with millions of data points from years of tracking, it remains blazingly fast and responsive!</p>\n\n<ul>\n<li><strong>Rewind &amp; Replay Your Journeys:</strong> You can now watch your past movements unfold. This allows you to see how you moved through a specific day or trip.</li>\n<li><strong>New Map Layers:</strong> I've added new map layers that enhance your data visualization:\n\n<ul>\n<li><strong>Terrain Layer:</strong> See the elevation changes along your paths. This adds a new dimension to your movement data.</li>\n<li><strong>Globe Projection:</strong> Zoom out and view your entire journey across a 3D globe.</li>\n<li><strong>Satellite View:</strong> Get a real-world perspective with high-resolution satellite imagery.</li>\n<li><strong>3D Buildings:</strong> In supported areas, watch your paths weave through 3D building models.</li>\n</ul></li>\n<li><strong>The Aggregate View:</strong> This feature helps you understand your routine. The new aggregate view condenses all your movement data into a 24-hour window, allowing you to visualize your typical movements. Ever wondered where you usually are at 8 PM, or what your most common morning commute looks like?</li>\n<li><strong>Fast Performance for Years of Data:</strong> Displaying multiple years of movement data used to be a challenge. Not anymore! Reitti 4.0 has been heavily optimized to handle vast amounts of historical data without breaking a sweat, ensuring a smooth and responsive experience even for the most avid trackers. 
The timeline will also see improvements in an upcoming release, as simply displaying all trips and visits for a given time range doesn't always yield meaningful information.</li>\n<li><strong>Flexible Path Visualizations:</strong> Now you can choose between:\n\n<ul>\n<li><strong>Raw Paths:</strong> See every single point as recorded.</li>\n<li><strong>Default Paths:</strong> My improved, cleaned-up path rendering.</li>\n<li><strong>Edge Bundling:</strong> A new option that reduces visual clutter by bundling nearby paths together, making trends and frequent routes easier to spot.</li>\n</ul></li>\n</ul>\n\n<h1>Other New Functionality</h1>\n\n<h1>Expanded Language Support</h1>\n\n<p>Thanks to the incredible dedication of the community translators, Reitti has expanded its global reach and now officially supports more languages, including:</p>\n\n<ul>\n<li><strong>¡Hola! Spanish!</strong></li>\n<li><strong>こんにちは (Konnichiwa)! Japanese!</strong> (special thanks to @GunseiKPaseri!)</li>\n<li><strong>Привіт (Pryvit)! Ukrainian!</strong></li>\n<li><strong>Merhaba! Turkish!</strong></li>\n</ul>\n\n<p>These additions are a huge step towards making Reitti accessible to even more users worldwide.</p>\n\n<h1>Place Editing with Geocoding</h1>\n\n<p>When editing a place, you can now directly request geocoding suggestions and select the most accurate result from various available providers. This makes managing your locations much more intuitive and precise.</p>\n\n<h1>Faster &amp; More Robust Visit and Trip Detection</h1>\n\n<p>I've completely overhauled the algorithms for detecting visits and trips. 
The new system is not only significantly faster but also much more robust, leading to more accurate and reliable insights into your time spent and journeys taken.</p>\n\n<h1>New Dedicated Open-Source Services!</h1>\n\n<p>As part of this update, I'm introducing two new, free-to-use services that power Reitti 4.0 and are available for everyone:</p>\n\n<ul>\n<li><strong>My Own Reverse Geocoder (Paikka):</strong> I've developed my very own reverse geocoder, free for everyone to use at <a href=\"https://geo.dedicatedcode.com\" rel=\"noopener noreferrer\">https://geo.dedicatedcode.com</a>. You can find its source on <a href=\"https://github.com/dedicatedcode/paikka\" rel=\"noopener noreferrer\">GitHub (Paikka)</a>. This provides fast, reliable reverse geocoding directly from my infrastructure.</li>\n<li><strong>My Own Tile Server:</strong> To complement the new map experience, I've also launched my own tile server at <a href=\"https://tiles.dedicatedcode.com\" rel=\"noopener noreferrer\">https://tiles.dedicatedcode.com</a>, based on the fantastic <a href=\"https://openfreemap.org/\" rel=\"noopener noreferrer\">OpenFreeMap</a> data. This ensures consistent, high-performance map tiles for all Reitti users.</li>\n</ul>\n\n<h1>BREAKING CHANGES – Please Read Carefully</h1>\n\n<p>While Reitti 4.0 added new features, there are a couple of crucial changes you need to be aware of for a smooth upgrade:</p>\n\n<ul>\n<li><code>rabbitmq</code> has been fully removed. This simplifies the stack and reduces dependencies.</li>\n<li><code>photon</code> has been removed from the default <code>docker-compose</code> file. 
While it's still supported if you wish to use it, it's no longer a default component thanks to my new open-source geocoding service!</li>\n</ul>\n\n<p><strong>It is absolutely essential that you update your</strong> <code>docker-compose</code> <strong>file during the upgrade process.</strong> Please visit <a href=\"https://www.dedicatedcode.com/projects/reitti/4.0/upgrade/\" rel=\"noopener noreferrer\">https://www.dedicatedcode.com/projects/reitti/4.0/upgrade/</a> for the necessary steps to get your Reitti instance running seamlessly on 4.0.</p>\n\n<p><strong>Full v4.0.0 Release Notes:</strong> <a href=\"https://github.com/dedicatedcode/reitti/releases/tag/v4.0.0\" rel=\"noopener noreferrer\">https://github.com/dedicatedcode/reitti/releases/tag/v4.0.0</a></p>\n\n<h1>Thank You</h1>\n\n<p>This project thrives because of its community. Thank you to everyone who contributed this year. To the new contributors like <a href=\"/u/Jonsen94\" rel=\"noopener noreferrer\">u/Jonsen94</a>, <a href=\"/u/GunseiKPaseri\" rel=\"noopener noreferrer\">u/GunseiKPaseri</a>, <a href=\"/u/sieren\" rel=\"noopener noreferrer\">u/sieren</a>, <a href=\"/u/wjansenw\" rel=\"noopener noreferrer\">u/wjansenw</a>, <a href=\"/u/subha0319\" rel=\"noopener noreferrer\">u/subha0319</a>, and <a href=\"/u/per_terra\" rel=\"noopener noreferrer\">u/per_terra</a>: your code, ideas, and dedication are invaluable. Special thanks go to the translators who ensure Reitti is accessible worldwide, and to everyone who posts issues, suggests features, and supports the project indirectly.</p>\n\n<h1>What’s Next?</h1>\n\n<p>Thanks to the incredible support from my Ko-fi supporters, I've recently acquired a dedicated GPS logger! This means I'm now setting my sights on bringing multi-device support to Reitti. Imagine this: you use your phone for day-to-day tracking, while simultaneously logging a run or ride with another device, leaving your phone at home. 
My goal is to seamlessly bring these timelines back together into one cohesive view. Along with this, I'll be introducing more powerful editing capabilities, such as defining \"no-visit\" areas and the ability to remove individual GPS points.</p>\n\n<p>For the Memories feature, I explored local AI for natural-language travel diaries; it's still very much on my mind. However, I haven't yet managed to get decent results with a small, local LLM that supports multiple languages. Time will tell if this ever happens, as I only want to introduce massive new requirements when they can deliver a truly tremendous impact for all of you. If anyone has a tip, please drop me a message.</p>\n\n<h1>Development Transparency</h1>\n\n<p>I use AI as a development tool to accelerate certain aspects of the coding process, but all code is carefully reviewed, tested, and intentionally designed. AI helps with boilerplate generation and problem-solving, but the architecture, logic, and quality standards remain entirely human-driven.</p>\n\n<p>I appreciate your feedback and support! Here are a few ways to connect:</p>\n\n<ul>\n<li><strong>Support My Work:</strong> If you find this project useful, you can support my efforts by buying me a coffee on <a href=\"https://ko-fi.com/danielgraf\" rel=\"noopener noreferrer\">Ko-fi</a>.</li>\n<li><strong>Report Issues:</strong> Encountered a bug? 
Open an issue on <a href=\"https://github.com/dedicatedcode/reitti/issues\" rel=\"noopener noreferrer\">GitHub Issues</a>.</li>\n<li><strong>Discuss on Lemmy:</strong> Join the conversation or reach out on <a href=\"https://discuss.tchncs.de/u/danielgraf\" rel=\"noopener noreferrer\">Lemmy</a>.</li>\n<li><strong>Connect on Reddit:</strong> Find me here.</li>\n<li><strong>Join us on IRC:</strong> Chat with us live in my IRC channel <code>#reitti</code> on <code>libera.chat</code>.</li>\n<li><strong>Github:</strong> <a href=\"https://github.com/dedicatedcode/reitti\" rel=\"noopener noreferrer\"><strong>https://github.com/dedicatedcode/reitti</strong></a></li>\n</ul>\n\n<p>I'll be in the comments to answer your questions.</p>\n</div>\n\n<p><small>⬆️ 284 points | 💬 49 comments</small></p>","metadata":{"score":295,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sp1bvz","title":"n8n dropped every webhook at 3am for two weeks and I only noticed because a client asked where his invoice was","link":"https://www.reddit.com/r/selfhosted/comments/1sp1bvz/n8n_dropped_every_webhook_at_3am_for_two_weeks/","author":"Ambitious-Garbage-73","published_at":"2026-04-18T15:54:04+00:00","content":"\n\n\n<div><p>So this is either useful or embarrassing depending on who's reading, probably both.</p>\n\n<p>Running n8n on a mini PC under my desk (NUC clone, 16GB, Debian 12, docker compose). Been up around 8 months, mostly boring. A couple weeks ago I noticed the invoice-reminder flow had silently stopped firing on a few contacts. Poked it for ten minutes, blamed a flaky SMTP relay I'd swapped the week before, moved on.</p>\n\n<p>Yesterday a client DMs me basically asking if I'd ghosted him because he hadn't heard anything since late March. I open the executions tab and there's this neat little gap every single night between roughly 02:50 and 03:30 where literally nothing ran. Fourteen nights of it. 
The dashboard I never close had been showing a green checkmark the whole time because whatever executions happened outside the gap worked fine.</p>\n\n<p>The actual bug, for the record: logrotate. The postrotate hook was doing <code>docker kill -s HUP</code> on the n8n container to make it reopen log files. n8n apparently does not take SIGHUP well and just dies. The restart policy brought it back, but only after the rest of logrotate finished whatever else it was rotating, which is why the gap drifted a little each night. Fix was switching to <code>copytruncate</code>, ugly but it works.</p>\n\n<p>the thing I actually can't get over is that uptime-kuma was green for all fourteen days. container up. HTTP port open. /healthz returning 200. every layer of my \"monitoring\" was technically correct and also completely lying about whether the thing n8n exists to do was happening. I'd built a setup that told me what I asked instead of what I needed to know.</p>\n\n<p>so I'm looking at bolting on a synthetic check that actually fires a test webhook into one flow and asserts on the expected execution ID in the DB a few seconds later. feels like something that should already exist as a Docker sidecar or whatever but I haven't found it. 
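</p>\n\n<p>Here's roughly what I'm imagining, before I commit my Saturday to it. Fair warning: every name below is a placeholder for my setup, the webhook path, the connection string, even the assumption that executions land in a Postgres table called <code>execution_entity</code>, so treat it as a sketch rather than a working tool:</p>\n\n<pre><code>#!/usr/bin/env python3\n# Synthetic end-to-end check for one n8n flow:\n#  1. fire a test webhook\n#  2. wait a few seconds\n#  3. assert the newest execution in the DB actually succeeded\nimport json\nimport time\nimport urllib.request\n\nimport psycopg2  # assumes the n8n instance is backed by Postgres\n\nreq = urllib.request.Request(\n    'http://localhost:5678/webhook/synthetic-check',  # placeholder path\n    data=json.dumps({'source': 'synthetic-check'}).encode(),\n    headers={'Content-Type': 'application/json'})\nurllib.request.urlopen(req, timeout=10)\n\ntime.sleep(5)  # give the workflow a moment to finish\n\nconn = psycopg2.connect('dbname=n8n user=n8n host=localhost')\ncur = conn.cursor()\ncur.execute('select status from execution_entity order by id desc limit 1')\nrow = cur.fetchone()\nok = row is not None and row[0] == 'success'\nprint('OK' if ok else 'FAIL')\nraise SystemExit(0 if ok else 1)\n</code></pre>\n\n<p>Cron that every couple of minutes and feed the exit code into an uptime-kuma push monitor, and the green checkmark would finally mean the flow actually ran.</p>\n\n<p>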
anyone here doing real end-to-end synthetic monitoring on self-hosted workflow stuff, or am I about to spend a Saturday writing something mediocre?</p>\n\n<p>(also yes I know about Healthchecks.io, I use it for cron, but for a webhook-&gt;DB assertion I'd need something slightly more)</p>\n</div>\n\n<p><small>⬆️ 210 points | 💬 43 comments</small></p>","metadata":{"score":213,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1sockr8","title":"My lab domain got added to a DNS blocklist and broke my whole setup.","link":"https://www.reddit.com/r/selfhosted/comments/1sockr8/my_lab_domain_got_added_to_a_dns_blocklist_and/","author":"FanClubof5","published_at":"2026-04-17T20:27:02+00:00","content":"\n\n\n<div><p>I set up the hagezi ultimate adblock list in Pi-hole a few months ago and didn't think much of it after that. Today I am chilling and trying to avoid working too much on a Friday afternoon when I get an alert from uptime kuma that my nginx-proxy-manager stopped responding.</p>\n\n<p>I check the docker container first: everything is green and the logs look fine. Weird, but let's restart it just to be sure. No change. Hmmm, well, I can access the demo page at the direct IP, so maybe it's not this; let's check the DNS resolution.</p>\n\n<pre><code>&gt; nslookup proxy.homelab.com\nServer:         10.0.1.66\nAddress:        10.0.1.66#53\n\nName:   proxy.homelab.com\nAddress: 0.0.0.0\nName:   proxy.homelab.com\nAddress: ::\n</code></pre>\n\n<p>Odd, the 10.0.1.66 server should be resolving that to a real address, not 0.0.0.0. I wonder what changed. I dig around in the Pi-hole logs for a bit and discover that my domain was actually added to the official blocklist. I am not really sure how, since my public footprint is minimal: the domain gets virtually zero traffic except for some bots hitting the root, and it definitely doesn't serve ads. 
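</p>\n\n<p>In case anyone else lands here the same way: the actual whitelisting is a one-liner. This is the syntax on the Pi-hole version I'm running; newer releases may spell it differently (e.g. <code>pihole allow</code>):</p>\n\n<pre><code># add an exact-match whitelist entry for the blocked name\npihole -w proxy.homelab.com\n</code></pre>\n\n<p>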
Either way, I was able to look up the commands to whitelist my domain in Pi-hole and bam, everything was back to normal.</p>\n\n<p>Just some Friday fun.</p>\n</div>\n\n<p><small>⬆️ 333 points | 💬 54 comments</small></p>","metadata":{"score":339,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}},{"id":"1soavp6","title":"Migrated a client off shared hosting to a VPS last week, the difference was embarrassing","link":"https://www.reddit.com/r/selfhosted/comments/1soavp6/migrated_a_client_off_shared_hosting_to_a_vps/","author":"Own_Addition_7619","published_at":"2026-04-17T19:24:44+00:00","content":"\n\n\n<div><p>so i've been telling this client for 2 years their site was slow because of shared hosting<br>\nthey finally listened after a competitor started ranking above them on google</p>\n\n<p>moved them to a KVM VPS, same wordpress stack, nothing else changed<br>\npage load went from 3.2 seconds to 0.9 seconds. that's it. that's the whole story</p>\n\n<p>the amount of money they lost over 2 years because they didn't want to spend an extra 15€ a month is genuinely painful to think about</p>\n\n<p>if your site is on shared hosting and you're wondering why it feels slow, it's that. it's always that</p>\n</div>\n\n<p><small>⬆️ 338 points | 💬 32 comments</small></p>","metadata":{"score":815,"source_feed_id":"r-selfhosted","source_feed_type":"reddit"}}]