Subsample: Kronis Dev Blog


Posts (20)

Apple is increasing my cortisol levels

Published: 4 days ago

I'm creating a simple developer utility to make managing Claude Code profiles (e.g. running it with DeepSeek, or some OpenRouter models) a little bit easier.

Edit: I just did the first release, which you can check out on ccode.kronis.dev, or go directly to the Itch.io page to either download or buy the pre-built binaries or look at the source code. It's a simple utility and it's still early on (consider getting it for free first and only paying later, if it feels useful), but currently the binaries are not signed.

The utility is written in the Go language, and the tooling there makes it really easy to compile for various platforms - I get a static executable that I can put anywhere I want. Even before the release, I wanted to see how easy it would be to ship it.
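For instance, getting builds for all three platforms is just a matter of setting a couple of environment variables before go build (a sketch; the binary name is a placeholder, and CGo is disabled to keep the executable static):

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o my-utility-linux-amd64 .
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -o my-utility-windows-amd64.exe .
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -o my-utility-darwin-arm64 .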

It works just fine for distributing Linux software (same deal, after chmod +x).

It works sort of fine for distributing Windows software (I get an .exe, SmartScreen might have a word or two, though you can click through it in the same pop-up).

Distributing Mac software

It does not just work for macOS, though - my MacBook instead shows me this:

01-quarantine

What you see is their quarantine kicking in for downloaded software, even if I share it with myself over Nextcloud.

Technically, you can ask your users to override it manually, in the terminal:

02-manual-override
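In case the screenshot is hard to read, the override usually boils down to removing the quarantine attribute from the downloaded file (the binary name here is a placeholder):

xattr -d com.apple.quarantine ./my-utility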

Most developers would probably be willing to do that. It is not, however, a good user experience and might raise some eyebrows.

Doesn't seem like such a big deal, right? I'll just enroll in their Apple Developer Program, sign the executable and be on my way, right?

03-enrollment-requirements
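For context, the signing flow I'd be buying into looks roughly like this (sketched from memory; the identity, profile and file names are placeholders):

# sign with a Developer ID certificate; the hardened runtime is required for notarization
codesign --options runtime --timestamp --sign "Developer ID Application: Name (TEAMID)" ./my-utility
# zip the binary and submit it to Apple's notarization service
ditto -c -k ./my-utility my-utility.zip
xcrun notarytool submit my-utility.zip --keychain-profile "my-profile" --wait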

Giving Apple money, and failing

Wait, they want how much money for the account?

04-the-pricing

And it's a yearly subscription? My brother in Christ, I intend to release a utility that maybe a dozen or two dozen people are going to download, tops, for like 7 USD on Itch.io with a pay-what-you-want model, meaning that most of those people will probably choose the price of 0 USD instead (since I don't intend to be like Apple; people have various circumstances).

That means that even if it works out to that much, there's going to be VAT, and Itch.io will also take a cut, so out of those maybe 50 USD I'll get about 25 USD, which funds about 3 months of that Apple Developer Program price. I guess the reason for it being priced like that lies somewhere between greed and wanting to gatekeep hobbyists out and only support Serious Users™, but it seems a bit stupid. Oh well, I already had to get the overpriced MacBook for another freelance thing, because they also won't let me compile macOS/iOS apps on Windows or Linux, so I guess this is just them spitting on me after slapping me in the face.

What I get from that is that articles like An app can be a home-cooked meal are cool but don't take the economics of wanting to release something publicly into account -...

Zed is pretty nice

Published: 6 days ago

Recently, the Zed editor had its 1.0 release. While I can't say what it's going to be like in a few years, even now it has largely replaced Visual Studio Code as my editor for non-IDE tasks (though I still keep Notepad++ around for some more persistent tabs, like notes).

This won't be a super formal or structured review, just my first impressions, setup and some thoughts so far. I will probably test drive it for a few months and see how it goes! It might also be a really nice replacement for Visual Studio Code on my M1 MacBook Air, since that machine doesn't have much memory and Zed, even on Windows, is using only about 159 MB of RAM in total right now.

If anyone is curious, I set it up in a pretty comfy way and here's what it looks like:

01-zed-is-pretty-nice

So, what's the selling point?

Essentially, a lot of the same reasons why I was looking in the direction of CudaText: a self-contained editor that doesn't need a frickload of plugins to be useful, while at the same time having most of the features you'd want out of the box, alongside great performance. CudaText had a few oddities about it and while I liked that project, it felt more like a replacement for Notepad++, whereas Zed feels like a replacement for Visual Studio Code - most of the stuff you're used to there is also available here. In addition, it is also really fast (personally I already found Visual Studio Code to be nicely optimized, but this is one step further).

Some AI features

They even support a bunch of AI stuff out of the box, if you're into that kind of a thing - something right in your editor, rather than just running TUI/GUI based agents separately. It's nice to have an editor where you don't need to switch between a bunch of awkward plugins (also, RIP RooCode - I really liked that one, to be fair; it felt like a step up above Cline, but not as cluttered as KiloCode):

02-integrated-ai-chat

Of course, it isn't exactly perfect in every way. For example, when using their integrated Zed agent, it seems to have trouble doing file edits with some models - DeepSeek in particular kept corrupting fairly simple HTML templates more than once (while "LSP Edit" would show up as tabs):

03-zed-isnt-perfect

It's odd, because the DeepSeek V4 Pro model itself is quite good - to the point where I might replace my current Anthropic 100 USD subscription with it (or probably downgrade to the 20 USD tier and use it more sparingly), given that their API prices are also really, really great. After some digging around, it looks like the issues all lie with the default Zed agent, because the code edits work just fine in OpenCode, which can even fix the previous corruption:

04-deepseek-works-fine-though

I do have to note that running OpenCode inside of WSL is an exercise in frustration in and of itself if you have to use Windows, though, so if you're curious about...

Setting up a Git Bash alias in Windows

Published: 6 days ago

Here's a quick tutorial on how to work around an annoying issue on Windows.

Suppose that I have Git Bash on Windows, which I got with my install of Git and like to use for running various shell scripts. The problem is that if I have also installed WSL, then I have approximately 0 idea how to run Git Bash from a PowerShell terminal session - for example, if a development tool has a terminal tab and opens one directly, or if I'm in a regular terminal session and just want to run a Bash script BUT maintain access to the tooling I have on the system directly, NOT inside of WSL.

In this case, opening bash opens WSL, not Git Bash:

01-bash-opens-wsl

At the same time, maybe I don't want to mess around and overwrite the bash behavior, so I need a new alias. Here's how we can do that.

First up, we make sure that there is a PowerShell profile set up (but don't overwrite it if it already exists) and open it for editing:

# create the profile file if it doesn't already exist (New-Item errors out instead of overwriting)
New-Item -ItemType File -Path $PROFILE
# open the profile for editing
notepad $PROFILE

There, we add our new alias:

# invoke Git's own bash.exe, forwarding any arguments passed to the function
function gitbash { & 'C:\Program Files\Git\bin\bash.exe' @args }

Then we save the file, and reload the profile (needs to be done once, no restart of the whole PC necessary):

# dot-source the profile to load it into the current session
. $PROFILE

Once done, you can open Git Bash with the aliased command:

gitbash

Here's what it looks like when you run it:

02-what-it-looks-like
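Since the function forwards arguments via @args, you can also hand it a script to run directly (the script name here is just an example):

gitbash ./some-script.sh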

This will also work in any terminal sessions inside tools, for example Zed, which I also wrote about today, but which doesn't seem to support easy terminal switching on a per-tab basis:

03-works-inside-tools

Not the longest blog post, but I'm writing this down here because I can guarantee that I'll forget what the exact commands and syntax were for all this. Either way, at least that's one annoying thing about my setup out of the way!

[LV] LATA 2026 Konference

Published: 3 weeks ago

This post is in Latvian, because the conference and the accompanying slides were also in Latvian.

Recently, the latest LATA conference took place, now the 2026 edition: Digitālā burbuļošana. Procesi, datu centri, mākslīgais intelekts ("Digital bubbling. Processes, data centers, artificial intelligence").

They haven't published the slides yet, but I jotted a few things down during the event. I figured I'd put together a summary of sorts about what seemed the most engaging. There won't be much info here about the specific speakers or the LATA annual award - it's more about the content itself.

If you're interested in the full video recording, you can watch it here: YouTube: "Digitālā burbuļošana. Procesi. Datu centri. Mākslīgais intelekts."

Meaningful ICT governance: from bureaucratic control to development roadmaps

Presented by: Gatis Ozols, VARAM, Deputy State Secretary for Digital Transformation

First up, there was a presentation about ICT governance in the country. In short, VARAM (the Ministry of Smart Administration and Regional Development) is trying to develop a plan for how to govern the creation and maintenance of systems at the state level. They called it the "IKT būvvalde" (an "ICT building authority"), with the idea of also structuring that governance in several layers:

01_01-ikt-buvvalde

If you're interested, I found a presentation where you can read more about it: IKT būvvalde un digitālās pārvaldes arhitektūra

In addition, there is more info on their website as well: Digitālās pārvaldes arhitektūra

Of course, in day-to-day terms it's probably still bureaucracy, but trying to bring order to how systems get built in the country - the idea, at least, is commendable!

They also got to brag about it a bit themselves:

01_02-ikt-buvvalde

On top of that, it all looks like not just a "top-down" approach - they're also trying to get a bit of input from the industry:

01_03-ikt-buvvalde

On the one hand, centralizing things is risky, because you can end up with situations where a domain gets offered/recommended/forced unsuitable solutions that simply wouldn't work well from the technical side (OS, DB, API). On the other hand, sometimes things do turn out quite nicely. Just look at how the UK built a consistent design system for government sites, so that there wouldn't be accessibility problems and users would have an easier time (UI elements, their look and behavior becoming familiar over time): Design your service using GOV.UK styles, components and patterns

Get ahead. Stay ahead.

Presented by: Aigars Mačiņš, Emergn, Head of the Solution Architecture Practice

After that, there was a more practical presentation about how much the way companies work has changed as AI tools have grown more popular. There were plenty of details, but in broad strokes it all came down to this: companies, and developers in general, who use AI can learn and iterate much faster:

02_01-ai-impact

Over time, this will most likely lead to companies without a Claude/Codex/Gemini subscription simply falling behind - unable to build prototypes, unable to spin up new projects quickly (especially startups). On the project management and investment side, there was also an emphasis on experimenting more and working iteratively, instead of planning out some very large projects many years in advance:

02_02-portfolio-management

There's also no need to overcomplicate things - you should look at which solution is actually capable of generating value, essentially the Minimum Viable Product approach. And maybe not just that: for large systems you should also look at which functionality is genuinely needed, which of it gets used, and how much maintenance risk the unnecessary parts create. From the technical side, I'd sometimes want to say the same about architecture and DevOps matters:

02_04-overall-view

Interestingly enough, they built (essentially vibe coded) a product for project management, launched it on the market and are already making money with it: Praxis by Emergn

02_05-what-others-are-doing

I won't say anything good or bad myself about the quality of the code and the overall solution, or about the business idea, but ask yourself:

How often do we, at our own companies, launch our own SaaS projects for public use?

That...

On Anthropic

Published: 2 months ago

It's been a while since I last looked over the pond, at what the Americans are doing. The problem is that they just keep doing more stuff, to the point where if I were to write about them, it would be a long post and it becomes less and less possible to hide behind sarcasm and not condemn that whole administration.

At the same time, in absolute terms I am nobody so that gives me some safety, but as a European, it's also hard to hide my disgust at blatant disregard for human rights, human life and our European values - especially when there's foreign interference that aims to meddle with the EU, NATO and also support far right, and borderline fascist, parties like AfD.

A measured take on AI and safety

Today, however, I'm not doing a long political post, rather I'd like to just reference this one post that Anthropic recently made: Statement from Dario Amodei on our discussions with the Department of War

In the post, they do a little patriotic blurb about wanting to help Americans and keep them safe, which I don't take issue with. Even though I'd say that the administration is trampling over the legacy of what was once a democratic ally, I doubt anyone would take issue with wanting to keep your people safe - I don't have a problem with Americans in general, the same way I have some friends in Israel while holding similar disdain for the actions of that state.

Anthropic then proceeds to reiterate that they don't have "ad hoc" limitations on the use of their models, but that they take issue with two particular use cases, where it is impossible to use the technology responsibly:

01-anthropic-limitations

They do share a considerable amount of detail, but I'd say that their assessment is correct. With the current state of AI, using it for either mass surveillance or fully autonomous weapons systems would be deeply problematic and in quite a few cases, also severely illegal - though it's not that it has stopped the current administration before.

No matter how you look at it, their response is fairly measured, and in any other governmental system it should be met with nothing other than agreement from the powers that be, while a lot of other use cases could be pursued, money exchanged, and Americans kept safe. However, that was not the response that they got. Almost immediately, there was backlash from the government, which is ridiculous - since the only thing there is to object to is that they do, as a matter of fact, actually want to use AI for those very unsafe and illegal use cases.

Luckily, a lot of people from both OpenAI and Google got together and signed an open letter in protest of that: We Will Not Be Divided

Here's a snapshot:

02-we-will-not-be-divided

Again, the position of not letting a technology that has no fundamental ability to take responsibility for its actions kill people feels like the bare minimum. How did that old warning from IBM in the late...

Buying some drives from Datablocks

Published: 3 months ago

A while back, I learnt of the practice of buying white label and recertified hard drives: the idea being that large batches of good drives sometimes get returned to OEMs by hyperscalers. They can then be retested and sold either with the branding intact, or with the branding removed (although those can sometimes have small scratches and such).

Essentially, if I decide to get a white label drive, then the original manufacturer is off the hook in regards to what happens with it, however, it can still very much be a good drive with a low RMA rate (less than a percent), so I can actually get a really good deal.

One such vendor that is available in Europe is Datablocks, I think I heard about them on HackerNews and saved the link for later. Eventually, I decided to get a few 1 TB HDDs from them because I found out that the regular Seagate 1 TB drives I usually got from a local e-commerce store went up in price from like 40-50 EUR all the way to 110 EUR, which I think is pretty insane. Not sponsored by them in any way, just decided to share my experience.

The good

I don't really use high capacity drives, because I get them in multiples for backup reasons and having spares, and since I compress my data and don't really have that much of it, 1 TB drives still make sense (unless there's a good deal on 2 TB drives, which there wasn't).

There were still some in stock, so I went for the desktop drives:

00-datablocks-drives

Look at that: 35 EUR for a 1 TB drive, even cheaper than Seagate drives used to be back when the prices were actually decent. I did have to pay shipping to get them delivered to Latvia, which came to just short of 30 EUR - that wasn't too pleasant, but since I got two drives for about 70 EUR, the total came out to around 100 EUR, which is still somehow cheaper than a single drive from a local store.

Shipping was done through DPD and the package arrived within a week, safely:

01-the-box

There was some of that packing paper inside (maybe the box itself is a bit too big for the contents), and the drives themselves were wrapped in bubble wrap nicely:

02-the-packaging

Here are the drives, nothing special about them, they came in sealed packages, just like new ones do:

03-the-drives

For comparison's sake, here's one of the Seagate drives (actually the last new one I had), as you can see, they look pretty similar and there's also no obvious scratches or defects that I can notice:

04-drive-comparison

Same on the back, visually, everything looks okay:

05-drive-comparison-back
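Visual inspection aside, this is also where I'd peek at the SMART data to see what a recertified drive has been through (a sketch; the device name is an assumption):

sudo smartctl -a /dev/sda | grep -E 'Power_On_Hours|Reallocated_Sector_Ct'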

Now, at work I've run into issues with using HDDs, mostly because random reads and writes are slow by their very nature: in particular, when trying to host an instance of S3 compatible software, such as Garage. There, the read and write performance plummets when the files are chunked, or alternatively, you have to give up storage efficiency by...

Why Europe needs open source

Published: 3 months ago

Okay, so I'm a little bit late with this one.

A while back, there was a post on LWN about the European Commission calling for evidence on open source. If you want, you can have a look at the EUR-Lex post for yourself:

01-eur-lex

The gist of it is that Europe has become quite dependent on foreign tech, which means risks both in regards to the supply chain itself, as well as the overall governance - and given the state of the world, that isn't a really good situation to be in:

The EU faces a significant problem of dependence on non-EU countries in the digital sphere. This reduces users' choice, hampers EU companies' competitiveness and can raise supply chain security issues as it makes it difficult to control our digital infrastructure (both physical and software components), potentially creating vulnerabilities including in critical sectors.

In the last few years, it has been widely acknowledged that open source – which is a public good to be freely used, modified, and redistributed – has the strong potential to underpin a diverse portfolio of high-quality and secure digital solutions that are valid alternatives to proprietary ones. By doing so, it increases user agency, helps regain control and boost the resilience of our digital infrastructure.

For a long time, it has been happening silently in the background, various systems being built, integrated and operated, or sometimes changing ownership to that of presumed allied countries, with nobody paying it much attention - such as the Netherlands DigiD identity system almost getting sold to the US.

It's not even just about various large systems that might get treated like public utilities, but about the foundational building blocks too, from OSes like Windows Server and RHEL (still controlled by a foreign company, despite being more open than Windows), to databases like DB2, SQL Server, Oracle and others, alongside a huge amount of proprietary solutions, frameworks and even libraries.

The thing is that the risks of vendor lock-in have been known for a long time, though sadly we have to contend with adages such as:

Nobody ever got fired for choosing IBM.

Replace IBM in that sentence with any mainstream large tech company, be it Microsoft, Google, or maybe even entire platforms like AWS. People keep waving their hands around and saying that proprietary technology is good, actually, since it often comes with support (not that you can't be the support for an open-source solution, or even pay someone to support you, but apparently that eludes most people; it might just be about covering your ass in case something goes wrong, but I'll explain why it doesn't actually work). Couple that with the sales departments of those companies having a lot of resources at their disposal and large govt. contracts being right up their alley, and the friction for using that tech as opposed to FOSS or even source-available software will often be lower.

Now, four weeks have passed since the call was opened, so it's not like I would want to submit anything formal, but at the same time I at...

My dad passed away

Published: 3 months ago

Near the end of the last year, my father, Pēteris Kronis, passed away. It has been months since and I was thinking about whether I should write about it or not, but in the end decided to put this out there anyway. It was a sad, humbling and human experience. I wanted to write something, to both share in that experience and to also remember him, in writing.

What happened

I live in his old city apartment and look after it, while I work in the city as a software developer, whereas he and mom lived together in the countryside home. It just so happened that this was one of the few times when mom had been out of the house in recent memory - visiting another lady, a family friend, in Italy for a week or two, to see the sights and nature. I had joked that this was a sort of "vacation" for her, because otherwise a lot of her time was spent looking after the house, keeping it more clean than both me and dad would otherwise, cooking and looking after the dogs. Some time after her leaving, dad sounded more sickly on the phone.

I was planning to visit him at the end of the work week, for the weekend, as I often did with my parents, being on the phone with both him and mom in the days leading up to it - how he was feeling, what medicine to better bring to him and so on. Me and mom had both told him that if it gets bad, then he should drive to the nearby city (a 15 minute drive, approximately) and see the doctor there, yet he had pretty consistently responded with: "I'll wait and see." It wasn't completely out of the ordinary, because we all had previously had things like a common cold, that did not seem to actually require much medical attention, and he wouldn't really say that it was much different this time.

I had been on the phone with him just that Friday, but by Saturday, the day when I was actually getting into the bus and going to visit him, he would no longer pick up his phone. A part of me felt like this shouldn't be that big of a deal, because mom would sometimes also not pick up when busy with something or outside, yet a part of me couldn't shake the feeling that it was a sort of bad omen. By the time I arrived, he had already passed. I found him in his bed.

I called mom about what to do. I called the emergency number after, they sent an ambulance, they confirmed that he had passed away, I guess "dead with no signs of violence" is the term that gets put on the document in such cases. One of them told me that his chest was blue in color, and that it was most likely something to do with his lungs - where a person tries to take a breath but doesn't get enough oxygen,...

Sometimes Dropbox is just FTP: building a link shortener

Published: 3 months ago

There is this one old HackerNews post that people sometimes reference when talking about how engineers view software:

01-dropbox-is-just-ftp.jpg

It's an observation of how the trend usually goes: an engineer points out that a product is "just" some existing technology, and then the product succeeds anyway.

There's even an XKCD about this and you can look at the Dropbox revenue for yourselves:

02-dropbox-revenue.jpg

Sometimes these products or services more or less outlive the relevance of the software that already existed and could be used to build something like that at the time. Why? Well, in part, I don't doubt that engineers underestimate the staying power of something that is pleasant to use and also solves a problem that the user has, even if it's nothing groundbreaking on a technical level.

Sometimes it's not even completely new ideas or software, either. For example, look at Linear: it is just a slightly better Jira, which already existed before, and which in turn is maybe a bit better than Redmine. Similarly, Obsidian is just Notepad++ with Dropbox, which could just as well be Notepad++ with SFTP.

And yet, all of those are useful and are doing pretty well! Except in some cases, the engineer is right and Dropbox is, indeed, just SFTP.

My previous setup

Let's talk about link shorteners. I use them on this very blog, as you can see above - in part due to the CMS that's running this blog sometimes having weird behavior around complex URLs, other times because a URL gets super long and I want to include something nice instead, e.g. a URL to a map to tell a delivery driver exactly how to get to my place when delivering something, or maybe some instructions for an event, or how to access a particular document.

There are some out there that you can just use, however they might not be around forever and might inject ads and other stuff before the redirect (they have to earn money somehow as well); on top of that, there's always the option of self-hosting something yourself. For a while, that's exactly what I did. There is a pretty cool open source project out there called YOURLS. It has served me well for a while and was pretty much perfect for not making me rely on external services.

Here's what it looks like, in case you're curious:

03-yourls-main-ui.jpg

It even supports a pretty nice statistics view, in case you're curious about where your traffic is coming from and all sorts of other stuff:

04-yourls-statistics.jpg
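Under the hood, all a shortener really does is answer with an HTTP redirect to the long URL, which you can check for yourself (the short URL here is hypothetical):

curl -sI https://s.example.com/abc123 | grep -i '^location'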

At the same time, as an engineer, I view it as a liability. I trust that the developers are doing their best, but what if there's a vulnerability, maybe not even in their code, but in one of the dependencies? What about keeping the database that it needs up to date? What if one day it decides to break, like I've had happen plenty of times with PostgreSQL WAL getting corrupted and needing manual intervention? Even if nothing breaks there, sooner or...

I blew through 24 million tokens in a day

Published: 6 months ago

Suppose I have a legacy project, or even just an older project that I worked on previously and now want to carry over the mechanisms that made it work, to a new one.

All sorts of custom functionality: wrappers around the underlying library components, like being able to tell when any input field or input element has been touched and a form should be considered dirty (prompt before navigating away), as well as links and navigation logic that integrate with this, custom utilities for i18n built on top of pre-existing solutions, validators, date and number formatting utilities (maybe with moment.js and currency.js integrated) and so much more.

There might be dozens, or sometimes hundreds of files in a project that otherwise has over a thousand source files, obviously not coupled as loosely as it might be (e.g. a collection of separate packages), because clearly nobody has the time for setting that up and that's never how these things evolve over projects that span 5 or more years.

Yet, I don't want the business functionality. I don't want the constants from that project. I don't want the router routes, but I want the logic for having nested routes, I want the logic for highlighting the route group in the navbar/sidebar when a route below that is active, I want at least some of the permission checks and the more generic bits of redirects to the login page, error handlers and error boundaries and so much more.

You'll notice that this example is front end centric, but it might as well be back end centric as well. In either case, what I have a lot of are requirements, since otherwise I'd be writing everything from scratch and that never works out well, especially when there are perfectly serviceable implementations somewhere for me to reference, instead of rediscovering the edge cases anew. But maybe I want to migrate it all to TypeScript, maybe I want to move from Vuetify to Quasar, perhaps I even want to explore implementing the same functionality with Pinia, or get rid of it.

The one thing I don't have, however, is time. Nor do I think I necessarily have enough motivation or working memory to go through 100 components and update all of them in a specific way. Everyone has deadlines and even if it's a personal project, the evenings are only so long. So, it's the perfect use case for generative AI, right?

Generative AI and its main problem

Well, sort of. Having generative AI at your fingertips does a few good things:

In other words, it's like having a very motivated junior developer that sometimes does completely erroneous things, but at the same time has like no ego and will carry...

AIOs are superb, thermal pastes are the same?

Published: 7 months ago

Here's a short post: I think AIOs or All-In-One coolers are the superior cooling solution, both to air coolers and custom liquid cooling loops for the average user!

Recently, I got some new thermal paste, because my CPU temps were still a bit higher than I'd like under full load. While gaming they'd hover around 70 C, under some benchmarks they'd go up to 80 C, and with Prime95 they'd rise so far that the CPU would thermal throttle to prevent damage. This is on a Ryzen 7 5800X, a series which is known to run a bit hot, but at the same time I wondered whether I could improve things a bit.

I realized that I more or less dreaded the idea of having to take off the CPU cooler and to repaste it and assemble everything again, mostly due to the mounting mechanisms that I've seen a lot of AM4 coolers use - those annoying pressure latches that are hard to secure properly, as well as the fact that good air coolers can be quite bulky and I sometimes just get cut or scraped by the fins on them.

AIOs are very nice, you should get one

But then I remembered that I had moved over to an AIO, which makes this a really simple errand, and the case doesn't even feel all that crowded:

01-aio

Now, the fact that everything is a bit dusty and that I use tape for managing some cables (don't worry, nothing has melted yet, it might look like a mess but it works) aside, you can see for yourself - there are some screws that I can easily unscrew or rescrew with my hands without getting a screwdriver out. It's about as easy as installing an AIO can get.

I got the Aigo ACSE 240 on AliExpress a while back for about 54 EUR and I have to say that it was money well spent; they also have, or at least had, some of the more affordable case fans! This setup pretty much replaced me needing like 5 case fans and the temps are pretty good, all while there isn't too much noise.

I will admit that I wouldn't recommend them over other brands at the moment, because it seems like most prices on AliExpress went up a good 40% and I've literally 0 idea why - at that price you can probably find something cheaper in local e-commerce stores, even from more reputable or at least mainstream brands:

01-aio-store

Either way, the argument remains that as long as you go for literally any AIO with good reviews that fits in your case and your budget, you'll probably have a pretty decent time. And yes, although they might need a refill in like 5 years or might need a replacement if and when the pump dies, they are still affordable enough for this not to be a super big problem.

So what's this whole thing about the thermal paste? Well, the temps were still a bit high for my liking, so...

I'm an e-waste consumer

Published: 7 months ago

In my previous article about my investments on Revolut in 2025 (plot twist: AMD's value just spiked and fell a bit afterwards, that's how those things go), I mentioned that I live a fairly Spartan lifestyle and get the things I need, rather than the ones I might want.

A part of this is finding the pieces of hardware that fit my needs without being exorbitantly expensive. For example, my main PC currently runs a Ryzen 7 5800X (with a mild OC and a budget AIO), an Intel Arc B580, 3600 MHz DDR4 and an NVMe boot drive.

I've settled on this setup through incremental updates and replacing failing parts over the years, without breaking the bank on any single part. My previous CPUs (and the ones still running in my homelab servers, a pair of low TDP 200GE Athlons that draw up to just 35 W) sometimes came off of AliExpress, whereas other times there was new old stock at good prices in some e-commerce stores in my country.

In a word, I try to get parts with good value and then to squeeze as much out of them as possible - tuning the CPU OC so it can punch a little bit above its weight class, picking up the new faster RAM sticks when the DDR4 prices are finally nice (in opposition to DDR5 prices right now), as well as experimenting with a dual GPU setup, which sadly didn't work out well, but let me settle on a setup that's either way hopefully good enough to last me until around 2030.

However, I've got an admission to make.

I am responsible for some e-waste

Things are way more shaky when it comes to peripherals. While getting a mainstream CPU or HDD/SSD will generally be a fairly consistent experience, buying keyboards, mice, headsets and microphones, as well as webcams, is like the wild west out there. There are decent quality products that you can get for cheap: and I don't mean

"Oh hey, spend 100 EUR on this Logitech keyboard."

cheap, instead I mean

"Spend <60 EUR for a keyboard that will last you for years"

cheap. A good tradeoff between the value the product brings and its actual cost, which is harder to do than you might imagine. Some of the things I've found out were good purchases in that regard include:

My investments in 2025 so far

Published: 7 months ago

I thought I'd make a casual blog post and talk a little bit about the investments I have done in 2025 so far and how they've turned out. I guess sometimes talking about our finances is a bit of a faux pas, but I don't really care about it that much and I think it's an interesting topic. Secondly, I am obviously not a financial advisor and none of what I say is actual financial advice, just my personal look at things.

In general, I live a fairly Spartan lifestyle - most of my money goes into either my savings or investments and I generally only buy the things I really need. For example, even though I use a computer daily, it's not some top of the line rig, but rather a setup with a Ryzen 7 5800X (OCed a bit and with a budget AIO, but still) and an Intel Arc B580, by all accounts pretty mid or entry level hardware. I did end up upgrading to RAM that's a bit faster at 3600 MHz and also had to move over to an NVMe boot drive, but that's because the old SSD I was running on melted. There will be a blog post about this later, but my expectation is that this exact setup will last me until 2030. Similarly, I don't really wear designer clothes or even have a car, and most of the games and other entertainment I enjoy are also somewhat budget oriented (like buying games on a Steam sale a few years after release).

Why live frugally? Because my expectations are that the economy won't always be in a very good state and I generally value the ability to not stress over what I will do financially next month or even year more than I do about having a lavish lifestyle. Secondly: I live in Latvia and as a consequence most people here, myself included, aren't doing amazingly financially. You can compare the average developer salaries in Latvia with either the rest of the EU or even US and weep for me. There's of course the ability to start your own business, but my risk tolerance is a bit too low for that, so I generally look for ways to invest money without doing too high risk investments, nor let inflation eat it all up.

And it doesn't end with just me. If my parents or even friends need that sort of help, I'm happy to be able to provide it - for example, one of my friends ended up with a cancer diagnosis and while she's undergoing chemo, she can't really work; in her part of the world the treatment itself is covered by insurance, but it's not like the world around her just stands still and she couldn't really cover everything without a bit of help. Where governments and the systems around them fail, people have to pick up the slack.

But back to the investments: in January of this year, I put around 14'000 EUR into a bunch of different...

The great container crashout

Published: 8 months ago

At the start of this month, I had an outage where more or less everything I host went down. In part, it was due to failing builds, in part due to bad networking, in part due to broken cloud dependencies, alongside software just being plain finicky. The good news is that I resolved most of the issues, everything was back up and running, and eventually I even figured out that you need to set up static routes if you want servers on Contabo to be able to talk to one another directly.

Unfortunately, this weekend, everything broke again. My homepage was down. My blog was down. I couldn't even connect to some of my servers. For a second I thought that maybe it was finally the time for me to be hacked and someone to steal all of my data or something and take over the servers... but nope, it was just more of software being obtuse garbage. Today, I'll tell you a bit more about the perils of self-hosting, though admittedly situations like this make me write increasingly profanity laden posts, which I don't normally do.

Tailscale is broken

Either way, allons-y, everything is down:

01-no-containers

No containers are running, quite possibly because the servers can't reach the leader node of the Docker Swarm setup, whereas Tailscale shows that we're logged out, which explains that. I've also got host records set up so that nodes can access sites hosted on one another directly through the Tailscale IP addresses (in case I ever decide to cut off public access to anything I host, things would keep working), which is a bit of a problem because suddenly they also can't pull new container versions.

Why are we logged out? The most I've done is update the Tailscale version a few times, but it's kind of horrible when such a base networking component is pulled out from under your feet, not unlike someone doing that to a rug that you're standing on. It seems like there are a few people experiencing similar issues with the latest version, but I honestly couldn't tell you the cause at the moment.

Either way, we log in:

02-tailscale-login
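On the node itself, that roughly boils down to the following (the exact flags depend on your setup):

sudo tailscale up    # prints an auth URL for logging the node back in
tailscale status     # verify that the node is connected again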

Great! Except it would be nice if a service that's supposed to be fairly automated wouldn't randomly need human intervention. Nothing changed about the nodes in question, yet I was still logged out. I didn't even get an e-mail along the lines of: "Hey, nodes A, B and C have been disconnected from the tailnet due to reason X, please log back in if necessary." Nor could I even connect to some of the nodes, because I primarily use Tailscale to access them in the first place.

I did work around all that, but it's pretty clear that Tailscale can't be the backbone of all my networking.

Docker is broken

Except even after a server restart (just in case), the Docker service shows inactive (dead) under its status. It's not even that the containers won't run, but rather the solution to run the containers is dead. Another server restart did eventually help...

Building brittle software

Published: 9 months ago

So, for most of the week, my blog, homepage and some other sites have been down, so let's talk about brittle software!

It all started when I wanted to have a quick look at the font stack on my homepage, since I'd forgotten exactly what it was and there's this one lovely site that gave me some inspiration. Unfortunately, when I went to open the site, my browser refused to do that:

01-site-down

(you'll notice that a lot of the details in this post are redacted, I got a bit curious about how many environment details I'd still accidentally let slip by if I tried to do that for a post, so let's see)

It wasn't an issue of the site loading slowly, it wasn't a database connection failing, or even Apache2 failing to reverse proxy the requests to the Docker container that is responsible for the exact site. Instead, the whole thing was just down.

That's quite odd, since I do have monitoring set up: Uptime Kuma, which has generally worked pretty well. It previously had alerts going to Mattermost, but since I no longer run my own Mattermost instance (so I'd have less software to keep patching constantly), it was hooked up to my mail server instead.

Was the mail server also down, so that notifications had failed to appear in my inbox? Aside from the fact that in the future I might also need to hook the monitoring up to some 3rd party mailbox (rate limits and privacy be damned), opening the monitoring site itself on another server showed that everything had been down more or less since the start of the month:

02-down-for-a-while

Normally, that'd be pretty terrible! It's like most of my online presence had been wiped out for a little bit, but the good news is that it's not particularly important - it's not like people can't receive healthcare or other essential services during this downtime, it mostly just hosts this blog, my homepage, as well as a few sideprojects here and there.

In that sense, not having SLAs is freeing, but on the other hand - also annoying. You see, I purposefully pick software and stacks that are quite boring, with the idea that I'll have fewer surprises along the way.

Docker Swarm issues

Unfortunately, there are still plenty of those, regardless of what I do:

03-no-such-image

So, it was complaining that the Docker images for my software could be found neither locally, nor in my custom Docker Registry. That should never happen. Even if the registry is down, there's no reason for the local image to disappear.

Yet, this had been just one such time. Digging around in the logs, I could also see this for some of the services:

network sandbox join failed: subnet sandbox join failed for "10.0.2.0/24": error creating vxlan interface: file exists
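From what I could tell, the common workaround people suggest is finding the stale vxlan interface and deleting it, so that Docker can recreate it (a sketch; the interface name is a placeholder):

ip -details link show | grep -i vxlan
sudo ip link delete vx-000000-abcde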

For what it's worth, folks online have run into similar issues - it seems like just one of those things that happen if you run a cluster for long enough, not exactly a point in...

Stop killing games and the industry response

Published: 10 months ago

Recently, there's been a European Citizens' Initiative called "Stop Destroying Videogames", which by now has hit the milestone of 1'000'000 signatures. You're still encouraged to sign it if you care about its goals and are an EU citizen, since not all of those are likely to be valid signatures, but overall this is a pretty positive trend:

01-stop-killing-games

If it does indeed get enough valid signatures, it will get passed on to the European Commission and new laws might get passed as a consequence. However, there has been some opposition to it, so today I'd like to briefly describe what it's about, as well as why some of the people disagree with it, and why they might be quite wrong in doing so.

Why are video games being killed

First up, the actual objectives are pretty concise; here they are:

This initiative calls to require publishers that sell or license videogames to consumers in the European Union (or related features and assets sold for videogames they operate) to leave said videogames in a functional (playable) state.

Specifically, the initiative seeks to prevent the remote disabling of videogames by the publishers, before providing reasonable means to continue functioning of said videogames without the involvement from the side of the publisher.

The initiative does not seek to acquire ownership of said videogames, associated intellectual rights or monetization rights, neither does it expect the publisher to provide resources for the said videogame once they discontinue it while leaving it in a reasonably functional (playable) state.

It was all more or less kicked off by a YouTube creator named Accursed Farms, who illustrated why this matters with the example of the game The Crew:

02-videos

So what does this mean on a practical level?

Suppose you bought a game like The Crew back when it came out, in 2014. The game is about racing around in cars and can be played both in singleplayer modes and multiplayer modes, with other players. However, the game is made in such a way that the account management functionality depends on servers hosted by Ubisoft. This means that once some time passes and supporting those servers is no longer viable (once that AWS bill starts racking up and there are no new sales), Ubisoft is going to turn them off, just like they did in 2024, making the game unplayable.

Even the singleplayer component - you just wanting to race around in cars in a world with only you and NPCs in it - is no longer viable. Essentially, you didn't "buy" the game, but in a sense were "renting" it for an indeterminate amount of time, a lease that expired due to the publishers and developers no longer wanting to provide that service for you. Yet, it wasn't marketed to you as a subscription; you thought that you were buying something, but were instead just being misled.

I don't think that's acceptable, not even remotely. Objectively, there are also no good reasons (there are reasons, just not good ones) for things to be that way: singleplayer games in general have been around...

AI, artisans and brainrot

Published: 10 months ago

Recently, I went to a software development event in Germany, where, at the conclusion of one of the days, I was hanging out in a hotel room with a friend of mine. Earlier, she had been exploring ways to get full VMs running on her Linux system easily, for doing some development work and experimentation with Docker on them.

Another friend had suggested VirtualBox as a starting point.

I looked at what she was trying to do, and suggested that maybe Vagrant might also be a good fit, due to her wanting to make some of the output of her experimentation easily transferable and reproducible elsewhere (the Vagrant files might be a little bit better for that, than just some Bash scripts that talk to VirtualBox), in addition to wanting a bunch of stuff to be executed during startup.

Obviously, both of us were actually kind of new to Vagrant: I had found Docker (and OCI containers in general) more suitable for my needs, alongside cloud VPSes for anything longer term and Ansible for automating the configuration of my boxes, whereas she hadn't worked that much on the infrastructure side and didn't really feel like paying for cloud resources for something like this. There's nothing wrong with VMs and her choice here was nice, especially because she felt like learning more about various setups and experimenting.

That's where we hit our first roadblock. I also told her about Docker Swarm as a pretty lightweight orchestrator that's simple to use, set up and operate. There is nothing wrong with Swarm itself, but we did hit a snag when trying to get the cluster initiated. What we needed to do was:

She was going through the docs bit by bit, trying things that didn't really work (yet), which was frustrating. At one point, she decided to take a pause for a bit and suggested I give it a shot. Admittedly, I didn't care much about Vagrant: I knew what we wanted to do and the exact details of how we'd do that on the line-by-line level felt less important, so I just threw a prompt, a bit like the list above, at one of the AI tools at my disposal.
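The shell-level gist of what that provisioning needs to do is something like this (the manager address is a placeholder for whatever host-only IP the VMs get):

# on the manager node, once the Docker daemon is actually up:
docker swarm init --advertise-addr 192.168.56.10
docker swarm join-token worker -q    # prints the token the workers need

# on each worker node:
docker swarm join --token <token> 192.168.56.10:2377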

For this, I needed a few iterations and I got decently far: the AI generated code used the correct IP address within the script with #{ip}, initialized the cluster correctly AFTER actually waiting for Docker to start up (another snag which I hit along the way, which needed another iteration) and could seemingly retrieve the join token. Where I had gotten stuck was that I'd still need to make it available to the other nodes, and they were sitting in a waiting loop in their own script, waiting to...

More PC shenanigans: my setup until 2030

Published: 11 months ago

Previously, I explored why having two Intel Arc cards in the same PC is a bit of a mess: Two Intel Arc GPUs in one PC: worse than expected

A little bit later, I discovered that Windows 11 fixes some of the issues and that most games and programs, but not all, mostly work: What is ruining dual GPU setups

Now that I had an OS that's a little bit more cooperative, I decided to go from my setup with the Intel Arc B580 and AMD Radeon RX 580 to a dual Intel Arc setup, with both the A580 and B580 in the same computer, and see if I could fix the problematic software myself:

01-intel-arcs

So, without further ado, let's get into it!

A slightly cursed setup

You might have noticed in the previous examples that the inside of my case was quite messy. While I did attend to it later and managed to mostly improve everything, for running two GPUs I needed to make sure that all of the cables at the bottom of the case stay put, otherwise they risk hitting the fans, both creating noise and preventing the fans from working:

02-using-tape

At that point I looked at some of the other places where I have cables that are too long and just decided to tape them down as well; it's quite unlikely that the tape would melt, and since it doesn't conduct electricity either, there are very few risks related to doing that:

03-using-more-tape

As for the GPU itself, you can see how little clearance there is and why using tape might not have been the worst solution:

04-gpu-clearance

Of course, when two of those are added in the same case, things do get pretty crowded, which makes me think that perhaps I'd need a bigger case for this setup to truly work:

05-new-setup

The good news is that the setup itself works, and a little bit better than before: Windows 11 lets me choose which games and programs should run on which GPU and, since there is no AMD/Nvidia GPU in there, games like Delta Force also cannot decide on a whim that they don't want to run on an Intel GPU, since those are the only ones available!

Aside from that, you will notice that I got rid of the top case fans, because those were pretty messy and it turns out that the CPU cooler is not good enough either way:

06-working-so-far

I would have actually stuck with this setup, if not for some rather interesting problems down the road...

There is some quite cursed software out there

The first issue was that even when all of the monitors are plugged into the main card, the framerate of videos and such (on YouTube, for example) is really bad when the browser is running with hardware acceleration on and is using the secondary GPU, my old A580. The fact that this doesn't work well kind of defeats the whole point of the setup: if I can't play a game on the B580 while watching...

It works on my Docker

Published: 12 months ago

I like the concept of containers and I like Docker. Despite jails and LXC also having been around for a while, Docker has really nice DX, and it's no wonder that it's pretty widely used nowadays (as are the other OCI compatible solutions).

It's a pretty good way to get rid of some of the environment specific requirements and also have a consistent way of packaging applications, giving and limiting their resources, providing configuration and all that. I use it to run most of the software on my servers, and some of it locally nowadays; when it works, it's great.

However, they are lying to you. The core premise is that if you build a container and it runs, then the same container with the same configuration will run elsewhere, for example: locally, on your CI server and also on your environments.

That is false.

There is no software reproducibility in the real world (your enterprise Java app at $DAYJOB)

Have a look at this, I have an application that runs locally, in a container that I built:

01-app-is-running

Then, when the same Dockerfile is used to build it on a CI server, when deployed on the environment, I get a circular dependency error:

02-circular-dependency-error

You might think that there are configuration changes or something else that is causing this issue. Nope, because if I change the configuration:

03-configuration-changes

Then the error also changes:

04-another-error

The error itself is caused by working on a legacy project that has circular dependencies, but Spring Boot gives you a way around that until you can fix the code (that time hopefully not being never), where you can enable that configuration, as mentioned above:

# allow Spring to wire up circular bean references instead of failing at startup
spring.main.allow-circular-references=true
# create beans lazily, on first use, which can also sidestep initialization order issues
spring.main.lazy-initialization=true

Of course, that does nothing for explaining why it works locally in a container, but not in the same sort of container if it's built on a CI server.

What a mess

And honestly? I don't care.

If it doesn't work, then it doesn't work and no amount of appeal to complexity of modern software will make me not hate this.

That said, I don't hate Docker. I hate Spring Boot.

Maybe not for creating a viable alternative to the likes of ASP.NET in the JVM ecosystem - that part is actually really nice, especially because of all the integrations and how productive it can make you when things work. Maybe not even for the performance impact it has compared to some of the other options, since the likes of Dropwizard can be slower and I still enjoy those, since productivity and ease of use do often trump raw performance.

But definitely for their approach to dependency injection, where a lot of it is done at runtime, as well as all of the proxying garbage that goes on under the hood, often ruining not just your stack traces, not only your transactional code, but also plenty of your sanity along the way. If you have a framework that throws runtime errors for code that could have been checked at compile time, you have largely failed to create a good DI...

Windows bootloader: it just works

Published: 1 year ago

Here's a blog post of something positive that I recently experienced: the Windows bootloader is actually kinda cool!

I did move from Windows 10 to Windows 11 by installing it on a new 1 TB SSD, which gives me way more space than my old ~240 GB SSD had, in addition to future proofing things a bit more, as well as allowing my dual GPU setup to mostly work (despite the interesting title of that blog post, it turned out to be mostly fine).

However, there were also some arguably stupid things that the Windows 11 installer did, which ended up with me having to move the bootloader over to another drive, in addition to shrinking the partitions to accommodate it. But let's not get ahead of ourselves here. Let's start by illustrating what the typical partition layout for a Windows 11 install looks like:

An EFI System Partition (FAT32, holding the bootloader), a Microsoft Reserved (MSR) partition, the main Windows (C:) partition taking up most of the drive, and a Recovery partition at the end. When you install Windows 11 on a blank drive, typically all of these will be created.

My situation

In my case, while I did install the OS on a blank drive, I still had a drive with Windows 10 connected (my old SSD), so the installer saw that there already was an EFI partition on the old drive and reused it, instead of creating another one on the new drive. This saved about 500 MB of space (which I don't care about), but it also means that I can't wipe the old drive, because then the system would no longer boot.

Obviously, this was not acceptable, so I came up with a slightly wild plan: instead of reinstalling the whole thing again, I would just shrink the data partition on the new drive, move it a bit to the right, also move the MSR partition, and then squeeze the EFI partition from the old drive (straight up cloning it) in at the very start of the new drive. Then, all I'd have to do is wipe the old one clean and tell the BIOS to use the bootloader on the new drive:

00-the-setup

In general, this is something that you'd regard as a "bad idea", because there's a serious chance of things going wrong and data loss. Thankfully, I had some Seagate 1 TB HDDs lying around, as well as both 2.5" and 3.5" enclosures, so I could just flash Rescuezilla onto a USB drive and use it to back up the entire drive that I was about to mess around with:

01-backups

Honestly, it is great software! It's based on Clonezilla, except it also has a nice GUI and a desktop environment alongside tools like GParted...