Digest: Hacker News

ID: digest-hn | Type: digest | Limit: 10 | Status: Enabled | Last Update: 9 hours ago | Next Update: 14 hours from now

Posts (10)

Digest: Hacker News: May 12 - May 13, 2026

Published: 9 hours ago | Author: System

Bambu Lab is abusing the open source social contract

477 points | 168 comments

Full disclosure: I've never owned a Bambu because I've never loved the idea of a "closed" ecosystem 3D printer; however, I have used them, and I am very familiar with the 3D printing space beyond Bambu.

For anyone considering alternatives: You should know that almost all other 3D printers expect you to know a little more about how they actually work than Bambus. Bambus are as close as you can get to a "just works" type experience, but modern alternatives from others are nowhere near as hard as they used to be.

The closest "easy" alternative is probably Prusa, but you'll pay significantly more for a Prusa machine than for a Bambu. They're an excellent company, and the complete opposite of Bambu when it comes to openness. If money is no object, Prusa is highly recommended.

Beyond Prusa, there are a lot of other options. This list is a good one: https://auroratechchannel.com/#section2

I personally run an old Elegoo Neptune 4 pro - but my needs are quite low. If I were buying today, a Snapmaker U1 or the Creality K2 Plus is probably where I'd end up going. — kn100


Googlebook

https://www.reddit.com/r/Android/comments/1tb8xls/introducin...

555 points | 890 comments

Gross. This is just more proof that corporations simply don't know how to market AI. Everything is an ad for an ad at this point. The very first thing they show this new machine doing is helping people shop for clothes using AI.

No one is doing that; these people don't exist, no matter how hard corporate America wishes they did. This is why AI doesn't sell. This is why companies like Microsoft and Dell are pulling back on their AI claims and why Apple has nearly wiped it off their site altogether; seriously, go check out apple.com: not a single mention of Apple Intelligence.

At this point I'm convinced that marketing has been completely taken over by shareholder shills, marketing to customers they wish they had instead of the real customers that exist. — Jzush


Why senior developers fail to communicate their expertise

339 points | 162 comments

Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.

An average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the speaker meant; the only difficulty could come from not knowing the words or from mistaking ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.

Factual knowledge can be transferred well via words; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields surprisingly correct insights, surprisingly often, into what further knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.

Communicating expertise is a hint where to go and what to learn, the reader still needs to put effort to internalize it and they need to have the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer. — hamstergene


EU to crack down on TikTok, Instagram's 'addictive design' targeting kids

378 points | 320 comments

This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0. — conception

Rendering the Sky, Sunsets, and Planets

392 points | 34 comments

I saw this a while ago so it might not be totally related, but Sebastian Lague did a video on atmospheres for his planet generation experiment which was also very entertaining to watch [1].

There's something particularly entertaining on developing visuals and watching them come a reality — I hope at some point be able to experiment in this field.

[1] https://www.youtube.com/watch?v=DxfEbulyFcY — etra0


Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model

Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.

We were always frustrated by the little effort made towards building agentic models that run on budget phones, so we conducted investigations that led to an observation: agentic experiences are built upon tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match query to tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
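
The "retrieval-and-assembly" framing can be made concrete with a toy sketch (illustrative only, not Needle's architecture; the tool names and scoring rule here are invented): match the query against tool keywords, extract argument values, and emit JSON.

```python
import json

# Toy "retrieval-and-assembly" tool caller. Tool names and the keyword
# scoring are invented for illustration; a real model learns this matching.
TOOLS = {
    "set_timer":    {"keywords": {"timer", "remind", "minutes"}},
    "send_message": {"keywords": {"message", "text", "send"}},
}

def call_tool(query: str) -> str:
    words = set(query.lower().split())
    # retrieval: pick the tool whose keywords overlap the query the most
    name = max(TOOLS, key=lambda t: len(TOOLS[t]["keywords"] & words))
    # assembly: naive argument extraction (a real model uses cross-attention)
    args = {}
    if name == "set_timer":
        args["minutes"] = next((int(w) for w in query.split() if w.isdigit()), None)
    return json.dumps({"tool": name, "arguments": args})

print(call_tool("set a timer for 10 minutes"))
```

The point of the sketch is that no step requires open-ended reasoning, which is the claim behind using cross-attention instead of large FFNs.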

Simple Attention Networks: the entire model is just attention and gating, no MLPs anywhere. Needle is an experimental run for single-shot function calling for consumer devices (phones, watches, glasses...).

Training:

- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)

You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle

The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...

We found that the "no FFN" finding generalizes beyond function calling to any task where the model has access to external structured knowledge (RAG, tool use). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.

While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, LFM2.5-350M on single-shot function calling, those models have more scope/capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.

This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544

Everything is MIT licensed. Weights: https://huggingface.co/Cactus-Compute/needle GitHub: https://github.com/cactus-compute/needle

240 points | 86 comments

Hmm... this might make it feasible to build something like a command-line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing", and it could be pretty bad if everyone started doing that.

But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.

E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom` — ilaksh
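
That dispatch idea can be sketched in a few lines (entirely hypothetical: `toolcli`, the stub, and its outputs are invented here; a real version would call a small local model such as Needle instead of the stub):

```python
import shlex

# Hypothetical natural-language CLI front end: pass normal flags through
# unchanged, and route free text through a model callable (str -> str).
def nl_dispatch(model, argv):
    if argv and argv[0].startswith("--"):
        flags = argv                        # already flags: pass through
    else:
        flags = shlex.split(model(" ".join(argv)))
    return ["toolcli"] + flags              # a real CLI would exec this

# Stand-in for a local model, hardcoded for the demo.
stub = lambda text: "--help summary" if "what" in text else "--gadd teamfutz tom"
print(nl_dispatch(stub, ["what", "can", "you", "do"]))
```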


Restore full BambuNetwork support for Bambu Lab printers

352 points | 144 comments

This looks to be a clone of the prior state of the repository that caused all the Bambu drama earlier this week.

I did a ton of research because I didn't understand what people wanted here, and this is what's going on:

Right now, Bambu have adjusted their system into two modalities:

* "default" or "Cloud" mode, where you get an app, remote monitoring, but you have to use Bambu Studio or Bambu Connect to send prints. They implemented this by adding cloud auth to their "internal API;" the client application has to get a token from Bambu's servers, even if the request it eventually makes is a "local" one.

* LAN / Developer mode, where the device displays a token and you put it into your app. This disables all of the remote monitoring but in exchange, clients can send prints locally.

What users want is to "have their cake and eat it too;" they want the local token authentication _and_ the cloud authentication enabled at the same time. This isn't actually possible, so this plugin approximates it by emulating the interface to the cloud authentication to make the "Bambu Network" cloud RPC calls from a local slicer (one of these calls is a local_print call, so ostensibly this allows you to send prints without running them through the cloud, although with all of the online functionality still enabled and required, this seems like a pretty brave thing to trust).

Personally, I find the Bambu reaction distasteful, and there's an argument that the offline mode only exists due to similar outrage, but I don't see the current system as particularly bad and find the appetite to restore "untrustworthy" cloud functionality a bit amusing. — bri3d


The Future of Obsidian Plugins

281 points | 116 comments

Obsidian CEO here. We've been working for nearly a year to launch this new Community site and review system. I'm very excited about this first version but there are many more improvements to come.

I've tried to be exhaustive with the blog post, FAQs, and next steps on our roadmap, but I am sure I forgot some things, so feel free to ask!

This has been an incredibly challenging project for a number of reasons. We're only seven people but we have thousands of plugin developers and millions of users. There are many competing priorities to balance.

We wanted to make sure the new system would be easy to adopt, backwards compatible, and not completely break people's workflows, while still being a major improvement over the old approach, and allow us to gradually continue enhancing security and discoverability of plugins.

Consider it a work in progress. We're listening to everyone's ideas and gripes, and will keep iterating :) — kepano


Operation: Epic Furious

323 points | 111 comments

It's great except the war is obviously for Israel not oil, we had more access to oil before the war — an0malous

How to make your text look futuristic (2016)

306 points | 36 comments

Does the Back to the Future logo really count? Raiders of the Lost Ark has a very similar style but does not evoke "future". Yes, there are subtle differences. My point is, if you divorced them from the connection to their content, I think it would be hard to point to one as "future" and the other as "not future". — socalgal2

Digest: Hacker News: May 11 - May 12, 2026

Published: yesterday | Author: System

TanStack NPM Packages Compromised

410 points | 125 comments

Please be careful when revoking tokens. It looks like the payload installs a dead-man's switch at ~/.local/bin/gh-token-monitor.sh as a systemd user service (Linux) / LaunchAgent com.user.gh-token-monitor (macOS). It polls api.github.com/user with the stolen token every 60s, and if the token is revoked (HTTP 40x), it runs rm -rf ~/.

https://github.com/TanStack/router/issues/7383#issuecomment-... — cube00
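
A defensive sketch of what "be careful" means in practice: enumerate the persistence artifacts first, so they can be removed before the token is revoked (revocation is what triggers the rm -rf). The script path and LaunchAgent label come from the linked issue; the systemd unit location is my assumption about where a user service would typically live. Verify on your own machine before deleting anything.

```python
import os

# List likely persistence locations for the reported dead-man's switch.
# systemd user-unit path is an assumption, not confirmed by the issue.
def persistence_paths(home):
    return [
        os.path.join(home, ".local/bin/gh-token-monitor.sh"),
        os.path.join(home, ".config/systemd/user/gh-token-monitor.service"),
        os.path.join(home, "Library/LaunchAgents/com.user.gh-token-monitor.plist"),
    ]

for path in persistence_paths(os.path.expanduser("~")):
    print(path)
```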


Ratty – A terminal emulator with inline 3D graphics

449 points | 156 comments

This reminds me of when compiz came out and everyone was like MY WINDOWS ARE ON A CUBE and I NEED WOBBLY WINDOWS.

So anyway, being that guy, I immediately installed it. — ghostoftiber


GitLab Announces Workforce Reduction and End of Their CREDIT Values

240 points | 207 comments

Their old CREDIT values: Collaboration, Results for Customers, Efficiency, Diversity, Inclusion & Belonging, Iteration, and Transparency.

New values: Speed with Quality, Ownership Mindset, Customer Outcomes.

In other words, work harder, not smarter, and no more DEI. — Animats


If AI writes your code, why use Python?

394 points | 401 comments

Not just for LLMs, but in general if code is produced automatically by a tool and isn't going to be a hundred percent proofread and tested by humans who could have written it manually, it's always better to use the safest possible language so that the compiler can catch most of the errors. So yeah, Rust or OCaml are good candidates. Performance is also a good point but it's a secondary issue in my opinion. — p4bl0

Software engineering may no longer be a lifetime career

335 points | 579 comments

Multiple times per week I have the same conversation. It goes something like this:

  - AI will make developers irrelevant
  - Why?
  - Because LLMs can write code
  - Do you know what I do for a living?
  - Yes, write code?
  - Yes, about 2-5% of the time.  Less now.
  - But you said you are a developer?
  - I did
  - So what do you do 95-98% of the time?
  - I understand things and then apply my ability to formulate solutions
  - But I can do that!
  - So why aren't you?
The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.

Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course. — bborud


CUDA-oxide: Nvidia's official Rust to CUDA compiler

347 points | 107 comments

This is amazing. I've been working with custom CUDA kernels and https://crates.io/crates/cudarc for a long time, and this honestly looks like it could be a near drop-in replacement.

I'm especially curious how build times would compare. Most Rust CUDA crates obviously rely on calling CMake or nvcc, which can make compilation painfully slow. Coincidentally, just last week I was profiling build times and found that tools like sccache can dramatically reduce rebuild times by caching artifacts, but you still end up paying for expensive custom nvcc invocations (e.g. Candle by Hugging Face calls a custom nvcc command in its kernel compilation): https://arpadvoros.com/posts/2026/05/05/speeding-up-rust-whi... — arpadav


UCLA discovers first stroke rehabilitation drug to repair brain damage (2025)

306 points | 62 comments

My understanding was that strokes caused brain cell death, and that there was no coming back from that, but my neurologists would speak of 'bruised' brain cells, and that after weeks or months or even years you can see recovered function. UCLA's work here targets this disconnection and the lost rhythm in the surviving, distant networks. However, there is, as yet, NO conceivable intervention that could recover function from cell death at the center of the infarct. — padolsey

Can someone please explain whether Cloudflare blackmailed Canonical?

229 points | 136 comments

"Renting attack capacity from [cloudflare]" is inaccurate as I understand things. That group hosts their site behind cloudflare but I have not seen anyone claim that cloudflare's infra is used for the attacks.

This whole article seems to conflate hosting an informational site run by the attackers with hosting the attack itself. — jwitthuhn


A.I. note takers are making lawyers nervous

224 points | 163 comments

https://archive.is/wPKhf — Tistron

They Live (1988) inspired Adblocker

240 points | 76 comments

Replacing ads reminds me of the eye tap AR stuff by Steve Mann

https://news.ycombinator.com/item?id=44406552 — riedel


Digest: Hacker News: May 10 - May 11, 2026

Published: 2 days ago | Author: System

Hardware Attestation as Monopoly Enabler

748 points | 279 comments

The EU Digital (identity) Wallet, EUDI, requires hardware attestation by Google or Apple, effectively tying all digital EU identities to the American duopoly. Talk about digital sovereignty. Apparently protecting the children > sovereignty.

https://gitlab.opencode.de/bmi/eudi-wallet/wallet-developmen... — miohtama


Local AI needs to be the norm

434 points | 220 comments

For the mainstream audience, the sentiment around local AI today is the same as they had around open source a few decades ago. For a few products, some paid solutions were so much more advanced that open source was very often completely overlooked. Why bother? And the like. Then we had captive SaaS and other platforms, and now it's obviously wrong for most of us.

The dependency we have on Anthropic and OpenAI for coding, for instance, is insane. Most accept it because either they don't care, or they just hope the Chinese will never stop releasing open weights. The business model of open weights is very new, involves some power play between countries and labs, and moves an absurd amount of money without any concrete oversight from most people.

It's a very dangerous gamble. Today incredible value is available to nearly everyone. But it may stop without any warning, for reasons outside our control. — TheJCDenton


Louis Rossmann offers to pay legal fees for a threatened OrcaSlicer developer

442 points | 236 comments

I made the tragic mistake of getting a Bambu printer (an X1C, with AMS even...) right before they gave all of us the middle finger. I now have it offline, running out of date firmware, connected to a special WiFi network that is isolated from the Internet.

That upset me, but now I'm pissed. Now I don't even care about their stupid printers. Now I'd like to waste Bambu Lab's time and cause problems for them.

And also, while this X1C should be going strong for years, my eyes are on Prusa should I want another printer any time soon for any reason. Less polished or not, they seem like they're still better for consumers even though they are apparently less open than they used to be. But I'm of course interested in hearing what people recommend, too. (I got an X1C because I knew it would be simple, but I don't particularly mind getting my hands dirty or anything. I did build an Ender 3 kit before that.) — jchw


Incident Report: CVE-2024-YIKES

345 points | 84 comments

For anyone confused, this is (very good imo) fiction about supply-chain incidents. It had me very worried during a brief scan that it was real though, which made me read it more attentively :) — lynndotpy

I'm going back to writing code by hand

245 points | 101 comments

I've set a few rules for working with coding agents:

1. If I use a coding agent to generate code, it should be something I am absolutely confident I can code correctly myself given the time (gun to my head test).

2. If it isn't, I can't move on until I completely understand what it is that has been generated, such that I would be able to recreate it myself.

3. I can create debt (I believe this is being called Cognitive Debt) by breaking rule 2, but it must be paid in full for me to declare a project complete.

Accumulating debt increases the chances that code I generate afterwards is of lower quality, and it also feels like the debt is compounding.

I'm also not really sure how these rules scale to serious projects. So far I've only been applying these to my personal projects. It's been a real joy to use agents this way though. I've been learning a lot, and I end up with a codebase that I understand to a comfortable level. — baddash


Remind HN: Today is Mother's Day, call your moms

And for any mothers here, happy Mother's Day.

340 points | 135 comments

This is the first year when I can’t do that.

Please go do it on my behalf, while it’s possible. — kstrauser


Space Cadet Pinball on Linux

228 points | 66 comments

It's ridiculous how accurate this recreation is to the original, it looks and feels identical.

The author was able to do this just decompiling the exe files, without looking at the original source code. Basically, completely blind.

So it goes without saying: The deaf, dumb and blind kid sure makes a mean pinball. — s20n


Running local models on an M4 with 24GB memory

249 points | 79 comments

Getting so close to good!

I consider Gemma 4 31B (dense / no MoE) the new baseline for local models. It's obviously worse than the frontier models, but it feels less like a science experiment than any previous local model I've run, including GPT OSS 120B and Nemotron Super 120B.

On my M5 Max with 128 GB of RAM and the full 256K context window, I see RAM use spike to about 70 GB, with something like 14 GB of system overhead. A 64 GB Panther Lake machine with the full Arc B390, or a 48 GB Snapdragon X2 Elite machine, could probably run it with a 128K to 256K context window. Maybe you can squeeze it into 32GB (27.5GB usable) with a 32K context window?

Even last year, seeing this kinda performance on a mainstream-ish/plus configuration would have seemed like a pipe dream. — soganess
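
The rough arithmetic behind context-window RAM estimates like these is the standard KV-cache size formula. Every model number below is a hypothetical placeholder, not Gemma's actual configuration; only the formula itself is standard.

```python
# KV-cache size: 2 (keys + values) x layers x kv_heads x head_dim x context
# length x bytes per element, converted to GiB. bytes_per=2 assumes fp16/bf16.
def kv_cache_gib(layers, kv_heads, head_dim, ctx, bytes_per=2):
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per / 2**30

# Hypothetical 48-layer model with 8 KV heads at a 256K context:
print(round(kv_cache_gib(layers=48, kv_heads=8, head_dim=128, ctx=256_000), 1))
```

Halving the context roughly halves this number, which is why dropping from 256K to 32K makes a 32 GB machine plausible.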


Maryland citizens hit with $2B power grid upgrade for out-of-state AI

222 points | 123 comments

It seems that big money can overrule local government regulators at will.

Here in Nevada, (Warren Buffett-owned) NV Energy already has approval for a "Demand Charge" that will increase rates for everyone, and further reduce the ridiculously low amount of money that consumers get for selling their excess solar power back to the grid.

The regulators didn't even resist, but there has now been so much backlash that they're finally scheduling public hearings after the fact. The announcement doesn't even mention the Demand Charge by name, and many consumers aren't even aware that they're about to be screwed.

One of the more obscene things about this new charge is that people with PV arrays will pay a fee for demanding more power from their own grid-tied systems.

https://www.nvenergy.com/publish/content/dam/nvenergy/bill_i... — anonymousiam


YC's Biggest Scandals

217 points | 76 comments

YC has funded over 5,000 companies, and this page catalogs 39 that failed, many of which, on the site's own terms, are simply business failures with no additional drama. I don't think the authors of the site realize the case they're actually making here. — tptacek

Digest: Hacker News: May 09 - May 10, 2026

Published: 3 days ago | Author: System

Internet Archive Switzerland

285 points | 35 comments

Relevant blog post: https://blog.archive.org/2026/05/06/internet-archive-switzer...

> Internet Archive Switzerland joins a growing group of mission-aligned organizations, alongside Internet Archive, Internet Archive Canada, and Internet Archive Europe. Together, these independent libraries strengthen a shared vision: building a distributed, resilient digital library for the world. — input_sh


Bun's experimental Rust rewrite hits 99.8% test compatibility on Linux x64 glibc

https://xunroll.com/thread/2053047748191232310

Recent and related: Zig → Rust porting guide - https://news.ycombinator.com/item?id=48016880 - May 2026 (540 comments)

354 points | 349 comments

From 4 days ago: https://news.ycombinator.com/item?id=48019226

  > I work on Bun and this is my branch
  >
  > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
  >
  > I’m curious to see what a working version of this looks like, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
— legerdemain

I’ve banned query strings

Related: https://susam.net/no-query-strings.html

233 points | 126 comments

You know, I was actually really curious about this, so I went back to the HTML and URL W3C standards, and surprisingly they don't actually define a format beyond percent encoding. One might conflate query strings with "form-urlencoded"[0] query strings, which is one potential interoperability format, but in general a query string is just any percent-encoded string following a "?" in a URL[1], and just another property of the "URL" HTML object that can be used in generating a response. While there is additionally a URLSearchParams object that is the result of parsing the query string with the form-urlencoded parser, this is simply an interoperability layer for JavaScript.

I'm going to be honest: I was pretty geared up to have a contrarian opinion until I looked at the standards, but they're actually pretty clear. A 404 could be a proper response to an unexpected query string; the query string is as much part of the URL API as the path is, and I think pretty much everyone can acknowledge that just tacking random stuff onto the path would be ill-advised and undefined behavior.

[0]: https://url.spec.whatwg.org/#application/x-www-form-urlencod...

[1]: https://url.spec.whatwg.org/#url-class — jedimastert
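
The distinction the comment draws shows up directly in Python's standard library: the raw query is just the percent-encoded string after "?", while the form-urlencoded interpretation is a separate parsing layer on top of it.

```python
from urllib.parse import urlsplit, parse_qs

url = "https://example.com/path?a=1&b=hello%20world&flag"
query = urlsplit(url).query                       # raw string after "?"
params = parse_qs(query, keep_blank_values=True)  # form-urlencoded view
print(query)    # a=1&b=hello%20world&flag
print(params)   # {'a': ['1'], 'b': ['hello world'], 'flag': ['']}
```

Note that `flag` has no `=` at all, which is still a legal query string even though the form-urlencoded parser has to invent an empty value for it.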


Meta's embrace of A.I. is making its employees miserable

225 points | 206 comments

https://archive.is/JUPmz — joenot443

The hypocrisy of cyberlibertarianism

244 points | 193 comments

I was a great admirer (and later friend) of Barlow, and I'm still very deeply influenced by the Declaration and many adjacent phenomena. I agree with some fraction of this post in terms of seeing many people shelving these principles when it gets inconvenient for them.

In the past few months, I've been troubled by one specific part of the Declaration, in the final paragraph:

> We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

Specifically, I think the cyberspace civilization, to the extent that it exists, has been a failure lately on "humane" in the broad sense. The author of the linked post might say that this has to do with the need for moderation (indeed this is a big surprise from the 1996 point of view, as there were still unmoderated Usenet groups that people used regularly and enthusiastically, and spam was a recent invention).

I think there are lots of other things going on there over and above the moderation issue, but one is that the early Internet culture was very self-selected for people who thought that the ability to talk to people and the ability to access information were morally virtuous. I was going to say that it was self-selected for intellectualism but I know that early Internet participants were often not particularly scholarly or intellectually sophisticated (some of our critics like Langdon Winner, quoted here, or Phil Agre, were way ahead on that score).

So, I might say it was self-selected in terms of people who admired some forms of communicative institutions, maybe like people whose self-identity includes being proud of spending time in a library or a bookstore, or who join a debate club. (Both of those applied to me.) This is of course not quite the same thing as intellectual sophistication.

People were mean to each other on the early Internet, but ... some kind of "but" belongs here. Maybe "but it was surprising, it wasn't what they expected"? "But it wasn't what they thought it was about"?

Nowadays "humane" feels especially surprising as a description of an aspiration for online communications. It's kind of out the window, and a lot of us find that our online interactions are much less humane than what we're used to offline. More demonization of outgroups, more fantasies of violence against them, more celebration of violence that actually occurs, more joy that one's opponents are suffering in some way. (I see this as almost fully general and not just a pathology of one community or ideology.)

I'm troubled by this both because it's unpleasant and even scary how non-humane a lot of Internet communities and conversation can be, and because it's jarring to see Barlow predict that specific thing and get it wrong that way. Many other things Barlow was optimistic about seem to me to have actually come to pass, although imperfectly or sometimes corruptly, but not this one. — schoen


Distributing Mac software is increasing my cortisol levels

263 points | 177 comments

Any user who does not like Gatekeeper can turn it off on their machine in ten seconds by running this in a Terminal:

    sudo spctl --master-disable
People will say, no, that’s too big a hammer, it’s not safe… but then, like, what do you actually want? Either you keep Gatekeeper because you like the friction it introduces, or you don’t like that friction and you should go turn it off. Pick one, you obviously can’t have both!

Of course, you as the developer can’t make this choice for your users… but isn’t that as it should be? The user decides what code is allowed to run on their machines. And the default setting is restrictive because anyone who knows what they’re doing can easily change it.

P.S. Meanwhile, on iOS there’s no way to install unsigned software at all, and on Android (starting soon) the process takes 24 hours instead of ten seconds. That is actually ridiculous because it’s taking away user choice.

P.P.S. To be clear, modern macOS has plenty of other restrictions which can’t really be turned off and which I find super annoying. Gatekeeper just isn’t one of them.

Edit: I’ve just learned that as of Sequoia, you have to also tick a box in Settings after running the Terminal command. So maybe it takes 30 seconds instead of ten seconds. That’s mildly more annoying, but still doesn’t really seem like a big deal to me. — Wowfunhappy


GrapheneOS fixes Android VPN leak Google refused to patch

261 points | 88 comments

> Because system_server operates with elevated networking privileges and is exempt from VPN routing restrictions

So a VPN isn't a VPN on Android? Regardless of this bug. Do other locked down operating systems act the same? — nottorp


Show HN: Building a web server in assembly to give my life (a lack of) meaning

This is ymawky, a static file web server for macOS written entirely in ARM64 assembly. It supports GET, PUT, DELETE, HEAD, and OPTIONS requests, and supports Range: bytes=X-Y headers (which allows scrubbing for video streaming). It decodes percent-encoded URLs, strictly enforces the docroot, serves custom error pages for any HTTP error response, supports directory listing, and has (some) mitigations against slowloris-style attacks.

I’ve also written a more detailed writeup here: https://imtomt.github.io/ymawky/

283 points | 127 comments
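The `Range: bytes=X-Y` support mentioned above is what makes video scrubbing work: the client asks for an arbitrary byte window instead of the whole file. A minimal sketch of that header's parsing logic, in Python for illustration (the actual server is ARM64 assembly; `parse_byte_range` is a hypothetical name, not part of ymawky):

```python
import re

def parse_byte_range(header, file_size):
    """Parse a 'Range: bytes=X-Y' value into (start, end) byte offsets.

    Returns None for malformed or unsatisfiable ranges, mirroring the
    common fallback of serving the whole file (200) or replying 416.
    """
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or (not m.group(1) and not m.group(2)):
        return None
    start, end = m.group(1), m.group(2)
    if not start:                               # suffix form: bytes=-N (last N bytes)
        length = int(end)
        return (max(file_size - length, 0), file_size - 1)
    start = int(start)
    end = int(end) if end else file_size - 1    # open-ended form: bytes=X-
    if start > end or start >= file_size:
        return None                             # unsatisfiable -> 416 in practice
    return (start, min(end, file_size - 1))

print(parse_byte_range("bytes=0-499", 1000))    # (0, 499)
print(parse_byte_range("bytes=500-", 1000))     # (500, 999)
print(parse_byte_range("bytes=-200", 1000))     # (800, 999)
```
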

It's a beautiful project, well crafted. To echo the other comments: projects like this are, to me, like Minecraft maps. There are giant and amazing maps, small survival maps, locally hosted ones for my friends and myself, and commercially focused high-scale servers. Building a house or designing a new road on the server became extremely easy with AI, but the value created in the world depends on the original purpose of the server and whether creating more houses and roads actually makes sense. I think it's a super thing that a commercial server can build out faster and be bigger, with more houses and roads on it, but the love an art project creates in the world is incomparable. — matteohorvath

Zed Editor Theme-Builder

Zed Editor Theme-Builder

221 points | 64 comments

I'm extremely glad to see something like this. I've tried to use Zed so many times, and this might sound neurotic -- but there are just so many little theming things that make a difference to me.

For example, https://imgur.com/a/ia2GCgg -- top is VSCode, bottom is Zed. Both using Svelte, and using a similar theme.

- Angle brackets are a different color

- Capitalized built-in components are a different color

- Boolean props are a different color

- Brackets are colored differently than text.

The inspector is a game changer; being able to click into these specific things in the preview they provide is super helpful. — thecatapps


Digest: Hacker News: May 08 - May 09, 2026

Published: 4 days ago | Author: System

Google broke reCAPTCHA for de-googled Android users

Google broke reCAPTCHA for de-googled Android users

Related: Google Cloud fraud defense, the next evolution of reCAPTCHA - https://news.ycombinator.com/item?id=48039362

also: Google Cloud Fraud Defence is just WEI repackaged - https://news.ycombinator.com/item?id=48063199

460 points | 154 comments

My understanding is that this new reCAPTCHA is basically just remote attestation.

Remote attestation doesn't use blind signatures (as that would be 'farmable'), so tying the device to the 'attestee' is technically possible with the collusion of Google's servers: EK (static burned-in private key) -> AIK (ephemeral identity key in the secure enclave, signed by a Google server) -> attestation (signed by the AIK). As you can see, if the Google server logs EK -> AIK conversions, an attestation can be trivially traced to your device's EK. This is also why we don't really see, and probably never will see, online services offering fake remote attestations: it would be pretty obvious that the next step of running such a service is Google becoming a customer and having all your devices blacklisted. Private farms probably won't last long either, as I'm sure Google logs everything and will correlate.

Unless something special is done with this new reCAPTCHA, not only are you locking internet services behind TPM chips, you are also surrendering anonymity to Google. Unless you acquire untraceable burners for every service, the new reCAPTCHA will be technically capable of tying all your accounts across all these services together. Much like age verification. It may appear that the service would need to cooperate to link the reCAPTCHA session to your registration, but the registration time alone will likely be sufficient (the anonymity set will be all but destroyed). — coppsilgold
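The traceability argument above reduces to a simple fact: whoever certifies the AIK sees the EK at that moment, and a log of that step links every later attestation back to the device. A toy sketch of the chain (HMAC stands in for real asymmetric enclave signatures; every name here is illustrative, not an actual attestation API):

```python
import hmac, hashlib, os

def sign(key, msg):
    # Stand-in for a hardware signature; real attestation uses asymmetric keys.
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

# Device side: burned-in EK, ephemeral AIK generated for an identity.
ek = os.urandom(32)
aik = os.urandom(32)

# Certification step: the certifying server sees the EK when it signs
# off on the AIK, so it can log the EK -> AIK conversion.
aik_cert = sign(ek, aik)
server_log = {aik_cert: ek}

# Later: the device produces an attestation for some service, signed by the AIK.
attestation = sign(aik, b"challenge-from-service")

# Tracing: given the AIK certificate presented with the attestation,
# the logged mapping recovers the device's unique EK.
traced_ek = server_log[aik_cert]
print(traced_ek == ek)   # True: the attestation links back to the device
```
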


Poland is now among the 20 largest economies

Poland is now among the 20 largest economies

616 points | 527 comments

The story is longer: Poland was the first country to make a remarkably peaceful transition from a bankrupt, failed Soviet satellite state. Shock therapy, plus NATO and EU aspirations, paved the way.

It is the story of a country that made a lot of the right decisions along the way. It managed to sustain consistently high growth: not a one-trick pony or a boom/bust cycle.

Poland should be a role model for many other countries.

Recommend a book: https://www.amazon.com/Europes-Growth-Champion-Insights-Econ...

And Noah's blog post: https://www.noahpinion.blog/p/the-polandmalaysia-model — jakozaur


Google Cloud Fraud Defence is just WEI repackaged

Google Cloud Fraud Defence is just WEI repackaged

637 points | 321 comments

I saw this coming from miles away. Computers are better at solving CAPTCHAs than people are and people can be bribed or convinced to join botnets so IP whitelisting doesn't work either. Now we have tons of fingerprinting and behaviour analysis but governments are cracking down on that. Plus, YouTube had a massive ad fraud problem with ads being played back in the background in embedded videos, so their detection clearly wasn't good enough.

There aren't many good ways to prove you're not a bot and there are even fewer that don't involve things like ID verification.

Their opt-in approach helps shift the blame to individual web stores for a while, so who knows if this will take off. But either way, in the long term, the open, human internet is either going away or getting locked behind proofs of attestation like this.

Apple built remote attestation into Safari years ago together with Cloudflare and Google is now going one step further, as Apple's approach doesn't work well against bots that can drive browsers rather than scripted automation tools.

Luckily, their current approach can be worked around because it's only targeting things like stores now and you can buy things from other stores. Once stores find out that click farms have hundreds of phones just tapping at remotely served content, uptake will probably be limited.

It'll be a few years before this is everywhere, but unless AI suddenly isn't widely available anymore, it's going to be inevitable. — jeroenhd


David Attenborough's 100th Birthday

David Attenborough's 100th Birthday

401 points | 80 comments

Top man, lives up on Richmond Hill and absolutely loves it - when asked about his travels and adventures and where he would choose to live, he replied, "I already live there"

Fairly well-known locally is that my favourite bookshop, The Open Book in Richmond, stocks signed copies of all his books. They used to be signed directly on the page, but since he got to the mid-to-late nineties in age, tons of hardbacks are too much, so Helena wanders up there to get a load of bookplates signed these days.

Apart from that, I order all my books from them when I'm in London and a subsequent chat with Madeleine usually lasts ten times as long as the book shopping.

Anyway, I digress, yes, Sir David, amazing body of works and the books are wonderful. — vr46


A web page that shows you everything the browser told it without asking

A web page that shows you everything the browser told it without asking

519 points | 262 comments

* I'm not in that city.

* It's running a kind of Chrome on a kind of Linux, at a stretch.

* Nobody can infer when I work and when I sleep. That includes me.

* The recent, high-end display is the screen of a low-end tablet I bought in a supermarket five years ago.

* But yes, browser fingerprinting is annoying.

* Since you can detect light mode, would it kill you to honor it? — card_zero


An Introduction to Meshtastic

An Introduction to Meshtastic

362 points | 136 comments

I had never heard of this before, then last week I watched a video about it and was hooked. Now I'm seeing it everywhere!

Meshtastic and Meshcore are both cool LoRa-based mesh text-messaging systems that operate in a no-license-required band. While this limits your transmit power, it doesn't prohibit encryption - the inverse of most ham radio rules!

Some cities have thriving Meshtastic and/or Meshcore communities. You can look at coverage maps to get a very general idea - in my experience, most Meshtastic nodes are NOT listed, while a good number of Meshcore nodes are.

Meshtastic treats the mesh as dynamic - clients are assumed to always be moving, so transmissions flood between nodes that are in each other's reach.

Meshcore has a static layer - repeaters that are assumed to be in fixed positions - and a dynamic layer - companions that move. With fixed and hopefully reliable connections between repeaters, routing paths between two users can be 'cached', which avoids the bandwidth overhead of flood routing.

You can get started with a low-cost ($30) transceiver board and an SMA antenna ($10) for the ISM band of your region. Stick it in a box and mount it somewhere high up, and see if you can pick up any other nodes! — Cyan488
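The flood-versus-cached-path distinction described above can be illustrated with a toy graph (purely illustrative; this is not how either firmware is implemented): flooding has every node rebroadcast once within a hop limit, while a cached route costs exactly one transmission per hop.

```python
from collections import deque

# Toy mesh: node -> neighbors within radio range.
mesh = {
    "A": ["R1"], "R1": ["A", "R2", "B"], "R2": ["R1", "C"],
    "B": ["R1"], "C": ["R2"],
}

def flood(mesh, src, dst, max_hops=3):
    """Meshtastic-style flooding: every reached node rebroadcasts once.
    Returns (whether dst was reached, total transmissions)."""
    seen, tx = {src}, 0
    frontier = deque([(src, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        tx += 1                       # this node rebroadcasts to all neighbors
        for nb in mesh[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, hops + 1))
    return dst in seen, tx

def cached_path(path):
    """Meshcore-style cached route: one transmission per hop."""
    return len(path) - 1

print(flood(mesh, "A", "C"))                  # (True, 4): B rebroadcasts uselessly
print(cached_path(["A", "R1", "R2", "C"]))    # 3: only the route itself transmits
```

Even in this tiny mesh the flood costs an extra transmission (node B rebroadcasts although it leads nowhere); the gap grows with mesh density, which is the bandwidth overhead the comment refers to.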


Cartoon Network Flash Games

Cartoon Network Flash Games

261 points | 89 comments

Doh, I did some work on some CN games back in the day -- but I don't see any of those here. Hopefully they keep adding to it! — darkmarmot

AI is breaking two vulnerability cultures

AI is breaking two vulnerability cultures

310 points | 128 comments

This has been a very long time coming, and the crack-up we're starting to see was predicted long before anyone knew what an LLM was.

The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.

It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.

The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.

I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches. — tptacek


A recent experience with ChatGPT 5.5 Pro

A recent experience with ChatGPT 5.5 Pro

https://twitter.com/wtgowers/status/2052830948685676605

https://xcancel.com/wtgowers/status/2052830948685676605

300 points | 153 comments

I am a physics professor and often use Gemini to check my papers. It is a formidable tool: it was able to find a clerical error (a missing imaginary unit in a complex mathematical expression) I was not able to find for days, and it often underlines connections between concepts and ideas that I overlooked.

However, it often makes conceptual errors that I can spot only because I have good knowledge of the topic I am discussing. For instance, in 3D Clifford algebras it repeatedly confuses exponential of bivectors and of pseudoscalars.

Good to know that ChatGPT 5.5 Pro can produce a publishable paper, but from what I have seen so far with Gemini, it seems to me that it is better to consider LLMs as very efficient students who can read papers and books in no time but still need a lot of mentoring. — ziotom78


US Government releases first batch of UAP documents and videos

US Government releases first batch of UAP documents and videos

https://apnews.com/article/trump-ufos-uap-aliens-pentagon-re...

https://www.war.gov/UFO/#release

273 points | 424 comments

Several of these look like balloons and birds.

Two of them have already leaked before. Both of those are missiles being viewed with an infrared camera. One of them shows a missile passing through the field of view rapidly with a motion blur streak behind it. The other shows a missile performing maneuvers and a camera artifact showing a star-like diffraction+aperture artifact around the bright IR light source.

None of these pieces of imagery show something doing anything particularly interesting. What happens is: military personnel record a video. They don't know what it is in the moment. It gets labeled "unknown" and put on a DoD file server, and then either they or someone else who stumbles across it clips out part of it and starts spreading rumors about this amazing video of a UAP. There are people who work for the DoD who appear to spend a great deal of their free time scrolling through internal DoD file servers looking for anything they can portray as proof of aliens, and sometimes they leak their stories and even clips to public UFO influencers like Jeremy Corbell. — krferriter


Digest: Hacker News: May 07 - May 08, 2026

Published: 5 days ago | Author: System

Cloudflare to cut about 20% workforce

Cloudflare to cut about 20% workforce

https://blog.cloudflare.com/building-for-the-future/

580 points | 344 comments

This is awkward.

Exhibit A - September 2025 - "Help build the future" - Cloudflare hires 1111 interns to "help build the future" [https://blog.cloudflare.com/cloudflare-1111-intern-program/]

Exhibit B - May 2026 - "Building for the future" - Cloudflare lays off 1100 people, about 20% of their workforce to "continue building the future" [https://blog.cloudflare.com/building-for-the-future/]

I'll finish on this quote: "The future ain't what it used to be." — Yogi Berra — AloysB


AI slop is killing online communities

AI slop is killing online communities

386 points | 376 comments

I have largely written Reddit off and no longer visit it after an experiment I did where I had an agent karma farm for me and do some covert advertising. As I went through the posts it wrote I realized that as a reader I would have NO idea that these were just written by a computer. Many many people (or other bots) had full on conversations with it and it scared me a bit.

I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.

Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake. — carlgreene


The map that keeps Burning Man honest

The map that keeps Burning Man honest

281 points | 101 comments

Last year was tough - it rained for hours 5 nights in a row and the first rain night was accompanied by 70 mile an hour winds that did a massive amount of damage to camp infrastructure throughout the city. The roads in half the city were ruined by emergency traffic that kept on running throughout the storms, and the result was a lumpy nightmare that shook things loose from cars and bikes at a much higher rate than most years. The mud absorbed and hid things and made cleanup a far more grueling process than it usually is. We endured and did our best to still find and remove everything - breaking up mud clumps and raking/sifting through the dirt at the end of the week to find all that embedded trash. There are no public trash cans, no event dumpsters, etc. I can say from having been there almost every year since 07 that this was by far the hardest year for "mooping" - the process of spotting and picking up any item that shouldn't be on the ground - but that the group mindset endured and we somehow still trended downward in terms of overall trash.

I think the main difference between this and 2023 (the previous "mud burn") was that this time we had all the rain in the first half of the event, and then had relatively great weather for the second half. In 23, it closed out with the mud and people fleeing, leading to a spike. — ruleryak


Canvas is down as ShinyHunters threatens to leak schools’ data

Canvas is down as ShinyHunters threatens to leak schools’ data

https://thetech.com/2026/05/07/canvas-breach-26

https://techcrunch.com/2026/05/07/hackers-deface-school-logi...

538 points | 337 comments

Perspective from the trenches: I teach at a university that uses Canvas. We are in our final exams period right now.

We got our first email (from Academic Affairs) notifying us that it was down at 5:17pm EDT this afternoon, with little info; followup emails were sent at 6:24 and 6:57 with more info, but mostly about how we would be compensating for it and not about what actually was going on (other than, "nationwide shutdown" and "cybersecurity attacks", no further detail). I don't get a sense that they know much more than that, not that I would expect them to.

A perhaps telling detail: they're instructing us to have students email us directly with any work that had been submitted via Canvas. That suggests that they have no particular confidence that it will come back up soon.

I personally am only slightly affected; as a CS professor a lot of my students' work is done on department machines, and submitted that way, and I do the actual exams on paper. More importantly, I've never liked or trusted Canvas's gradebook, and so although I do upload grades to Canvas so students can see them, my primary gradebook is always a spreadsheet I maintain locally.

But I have a lot of colleagues for whom this is catastrophic at a level of "the whole building burnt down with all my exams and gradebooks in it"---even many of those that teach 100% in person have shifted much or all of their assessment into Canvas (using the Canvas "quiz" feature for everything up to and including final exams), and use the Canvas gradebook as their source-of-truth record. We've been encouraged to do so by our administration ("it makes submitting grades easier"). For faculty in that situation, they have few or zero artifacts that the students have produced, the students themselves don't have the artifacts to resubmit via email because they were done in Canvas in the first place, and they have no record of student grades or even attendance (because they managed that all inside Canvas). I guess they have access to the advisory midterm grades from March, if they submitted them (most do, some don't), but that might be it.

My gut feeling on this is that this is either resolved in hours (they have airgapped backups and can be working as soon as they can spin up new servers), or weeks (they don't). Very little in-between. And if that's true and we wake up tomorrow with this unresolved, I really have no idea what a lot of professors at my university and across the country are going to do to submit grades that are fair and reasonable. In the extreme case, they may have to revert to something we did in the pandemic semester (and before that, at my school, in the semester that two major academic buildings actually did burn to the ground a week before finals): let classes that normally count for a grade just submit grades as pass-fail. Because what else can you do?

(Well, one thing you can do is not put your eggs all in one basket, and not trust "the cloud" quite so much, but that ship's already sailed. I do wonder if in the longer term, anybody learns any lessons from this....)

UPDATE: As of 11:45pm EDT, my university's canvas instance is up and running! Here's hoping it stays (but I'll be downloading some stuff just in case...) — blahedo


Dirtyfrag: Universal Linux LPE

Dirtyfrag: Universal Linux LPE

344 points | 162 comments

This is very similar in root cause and exploitation to Copy Fail.

Which illustrates pretty well something that's lost when relying heavily on LLMs to do work for you: exploration.

I find that doing vulnerability research using AI really hinders my creativity. When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby. It's like a genie - you get exactly what you asked for and nothing more.

The researcher who discovered Copy Fail relied heavily on AI after noticing something fishy. If he had had to manually wade through lots of code by himself, he would have had many more chances to spot these twin bugs.

At the same time, I'm pretty sure that with slightly less directed prompting, a frontier LLM would have found these bugs for him too.

It's a very unusual case of negative synergy, where working together hurt performance. — firer


Chrome removes claim of On-device AI not sending data to Google Servers

Chrome removes claim of On-device AI not sending data to Google Servers

427 points | 161 comments

It seems to me that adding AI to desktop apps and sending the data back to the mothership for processing is an amazing way to collect data from people who, for the most part, would be completely unaware it's even happening.

Heck, most of them think the Internet is Chrome. — CrzyLngPwd


Maybe you shouldn't install new software for a bit

Maybe you shouldn't install new software for a bit

400 points | 200 comments

This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.

But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.

Well, now we're reaching the "find out" part of the process I guess. — marcus_holmes


Grand Theft Oil Futures: Insider traders keep making a killing at our expense

Grand Theft Oil Futures: Insider traders keep making a killing at our expense

404 points | 250 comments

The worst part is that the sharp price changes being traded on aren't achieved by magic, but with guns and actual human suffering. — Havoc

Agents need control flow, not more prompts

Agents need control flow, not more prompts

289 points | 157 comments

1000% agree. I am increasingly hesitant to believe Anthropic's continual war drum of "build for the capabilities of future models, they'll get better".

We've got a QA agent that needs to run through, say, 200 markdown files of requirements in a browser session. It's a cool system that has really helped improve our team's efficiency. For the longest time we tried everything to get a prompt like the following working: "Look in this directory at the requirements files. For each requirements file, create a todo list item to determine if the application meets the requirements outlined in that file." In other words: letting the model manage the high-level control flow.

This started breaking down after ~30 files. Sometimes it would miss a file. Sometimes it would triple-test a bundle of files and take 10 minutes instead of 3. An error in one file would convince it that it needed to re-test four previous files, for no reason. It was very frustrating. We quickly discovered during testing that there was no consistency to its (Opus 4.6 and GPT 5.4, IIRC) ability to actually orchestrate the workflow. Sometimes it would work, sometimes it wouldn't. I've also tested it once or twice against Opus 4.7 and GPT 5.5, not as extensively, but it seems to have the same problems.

We ended up creating a super basic deterministic harness around the model: for each test case, trigger the model to test that case, store the results in an array, write the results to a file. This has made the system a billion times more reliable. But it's also made the agent impossible to run on any managed agent platform (Cursor Cloud Agents, Anthropic, etc.) because they're all so gigapilled on "the agent has to run everything" that they can't see how valuable these systems can be if you just add a wee bit of determinism at the right place. — 827a
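The harness described above, where deterministic code owns the loop and the model is only invoked per test case, can be sketched like this (`run_model` is a hypothetical stand-in for one scoped agent/browser session, not any vendor API):

```python
import json
from pathlib import Path

def run_model(requirement_text):
    # Hypothetical stand-in for a single scoped agent call that checks
    # one requirement in a browser session and reports pass/fail.
    return {"passed": "TODO" not in requirement_text}

def run_suite(requirements_dir, results_path):
    """Deterministic outer loop: enumerate the files, test each exactly
    once in a stable order, persist the results. The model never sees
    or manages the control flow."""
    results = []
    for path in sorted(Path(requirements_dir).glob("*.md")):
        outcome = run_model(path.read_text())
        results.append({"file": path.name, **outcome})
    Path(results_path).write_text(json.dumps(results, indent=2))
    return results
```

Because enumeration, ordering, and bookkeeping live in ordinary code, the failure modes the comment describes (skipped files, duplicate runs, spurious re-tests) are impossible by construction; the model's unreliability is confined to the per-case judgment.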


DeepSeek 4 Flash local inference engine for Metal

DeepSeek 4 Flash local inference engine for Metal

259 points | 82 comments

Heh, I made something very similar for the Qwen3 models a while back. It only runs Qwen3, supports only some quants, loads from GGUF, and has inference optimized by Claude (in a loop). The whole thing is compact (just a couple of files) and easy to reason about. I made it for my students so they could tinker with it and learn (add different decoding strategies, add abliteration, etc.). Popular frameworks are large, complex, and harder to hack on, while educational projects usually focus on something outdated like GPT-2.

Even though the project was meant to be educational, it gave me an idea I can't get out of my head: what if we started building ultra-optimized inference engines tailored to an exact GPU+model combination? GPUs are expensive and harder to get by the day. If you remove enough abstractions and code directly against the exact hardware/model, you can probably optimize things quite a lot (I hope). Maybe run an agent which tries to optimize inference in a loop (like autoresearch), empirically testing speed/quality.

The only problem with this is that once a model becomes outdated, you have to do it all again from scratch. — kgeist


Digest: Hacker News: May 06 - May 07, 2026

Published: 6 days ago | Author: System

Valve releases Steam Controller CAD files under Creative Commons license

Valve releases Steam Controller CAD files under Creative Commons license

963 points | 325 comments

I love the readme on the gitlab page [1]. It feels so.. friendly :)

> This repository contains CAD files for the external shell (surface topology) of Steam Controller and the Steam Controller Puck, under a Creative Commons license. This includes an STP model of each, an STL model of each, and an engineering drawing with critical features/keep outs for each.

> Feel free to use these to make your own Puck holders, Controller sweaters, or whatever else you want to create!

> Your Steam Controller is yours, and you have the right to do with it what you want. That said, we highly recommend you leave it to professionals. Any damage you do will not be covered by your warranty – but more importantly, you might break your Steam Controller, or even get hurt! Be careful, and have fun.

[1] https://gitlab.steamos.cloud/SteamHardware/SteamController — roer


Appearing productive in the workplace

Appearing productive in the workplace

620 points | 246 comments

> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."

Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).

So now the "productivity-gain bottleneck" is people who still care enough to review manually. — wcfrobert


Red Squares – GitHub outages as contributions

Red Squares – GitHub outages as contributions

707 points | 158 comments

Every time one of these vibe coded meme sites gets posted there’re endless comments about how it’s not actually because of load, the GitHub team is shit, their tech stack is shit, Microsoft is shit, Azure is shit, etc.

Just compare the GitHub status page for public GitHub vs the enterprise cloud pages.

Enterprise has much better numbers, and I personally can't remember the last time there was an outage that prevented me from doing work.

If the problems didn’t revolve around load, I’d expect to see the same uptime problems reflected on the enterprise offering. — u_fucking_dork


Vibe coding and agentic engineering are getting closer than I'd like

Vibe coding and agentic engineering are getting closer than I'd like

341 points | 366 comments

> I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it’s just going to do it right. It’s not going to mess that up. You have it add automated tests, you have it add documentation, you know it’s going to be good.

I feel like this is just not true. A JSON API endpoint also needs several decisions made:

- How should the endpoint be named

- What options do I offer

- How are the properties named

- How do I verify the response

- How do I handle errors

- What parts are common in the codebase and should be re-used.

- How will it potentially be changed in the future.

- How is the query running, is the query optimized.

If I know the answer to all these questions, wiring it together takes me LESS time than passing it to Claude Code.

If I don’t know the answer the fastest way to find the answer is to start writing the code.

Additionally, whilst writing it I usually realize additional edge cases, optimizations, better logging, observability and what else.

The author clearly stated the context for this quote is production code.

I don’t see any benefits in passing it to Claude Code. It’s not that I need 1000s of JSON API endpoints. — jwpapi


Higher usage limits for Claude and a compute deal with SpaceX

Higher usage limits for Claude and a compute deal with SpaceX

350 points | 285 comments

Anthropic renting out the data center Elon built for Grok is the kind of plot twist you can't make up. — arian_

Programming Still Sucks

Programming Still Sucks

290 points | 118 comments

> AI didn't take our jobs. Greed did. Same greed that moved factories to Bangladesh and keeps slaves in cobalt mines in the Congo, wearing a new mask. Tell the nephew to do something else. Anything. It won't save him either, but at least he won't have to pretend the thing destroying his life is a robot.

This hit me hard. This article is art. I think I need to sleep on this and read it again in the morning. — fooqux


Inkscape 1.4.4

Inkscape 1.4.4

279 points | 84 comments

My first contribution to Inkscape is in this release, I think. It's quite a minor feature, though, so I don't see it in the changelog: it allows the user to set their default saved file name. I was tired of drawing.svg :) — darknavi

Google Cloud fraud defense, the next evolution of reCAPTCHA

Google Cloud fraud defense, the next evolution of reCAPTCHA

277 points | 262 comments

The requirements for the mobile devices are listed here: https://support.google.com/recaptcha/answer/16609652

So it seems that you will need a modern Android device with Google Play Services installed or a modern iPhone/iPad to be allowed to browse the web in the future.

No mention of device integrity verification yet, but the writing is on the wall. — bramhaag


Ted Turner has died

Ted Turner has died

275 points | 218 comments

I remember around 2000 I read about how Ted Turner started his empire: he bought podunk local TV stations that had loose contracts with media owners, contracts that allowed them to broadcast shows as often as they wanted, with no restrictions. In those days, local TV stations were broadcast just like radio, so the assumption was that the contract only concerned the audience the TV station's antenna could reach. But the contract didn't specify this. Recognizing the loophole, he bought multiple stations and combined that content into its own cable channel(s) that played old movies and TV shows: https://en.wikipedia.org/wiki/Ted_Turner This was the basis that allowed him to branch into CNN and more.

When I learned about this, the story was very applicable to me at the time, as my startup had acquired licenses for content that was historically sold directly to libraries by a salesman who would negotiate with each library individually. He used a standard contract. When we contacted the company to license content for display on the internet, they gave us a ridiculous contract with a small one time fee and access to display the content forever. Only after reasoning through their business model and history did we understand how this occurred, which was exactly the same type of gap that Ted Turner had exploited. — lubujackson


Digest: Hacker News: May 05 - May 06, 2026

Published: 1 week ago | Author: System

.de TLD offline due to DNSSEC?

.de TLD offline due to DNSSEC?

487 points | 213 comments

https://ianix.com/pub/dnssec-outages.html — aboardRat4

Accelerating Gemma 4: faster inference with multi-token prediction drafters

Accelerating Gemma 4: faster inference with multi-token prediction drafters

413 points | 189 comments

I don't see it talked about much, but Gemma (and Gemini) use enormously fewer tokens per output than other models, while still staying within arm's reach of top benchmark performance.

It's not uncommon to see a gemma vs qwen comparison where qwen does a bit better but spent 22 minutes on the task, while gemma aligned the buttons wrong but spent only 4 minutes on the same prompt. So taken at face value, gemma is now underperforming leading open models by 5-10%, but doing it in 1/10th the time. — WarmWash


AI didn't delete your database, you did

AI didn't delete your database, you did

330 points | 170 comments

I think the perspective here is completely wrong. The problem is that people are now building our world around tooling that eschews accountability.

Over a decade ago now, I had a conversation with Gerald Sussman which had enormous influence on me: https://dustycloud.org/blog/sussman-on-ai/

> At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."

Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network talking to a propagator model to build a system that explains behavior. https://people.ucsc.edu/~lgilpin/publication/dissertation/

There has been followup work in this direction, but more important to me in this comment than the particular direction of computation is that we recognize that it is perfectly reasonable to hold AI corporations to account. After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do is hold those corporations accountable in the systems' stead.

But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do. — paroneayea


Three Inverse Laws of AI

Three Inverse Laws of AI

335 points | 239 comments

I strongly disagree with this framing. It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines, and it simply won't work in the majority of cases. Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.

Asimov's laws of robotics are flawed too, of course. There is no finite set of rules that can constrain AI systems to make them "safe". I don't have a proof, but I believe that "AI safety" is inherently impossible, a contradiction of terms. Nothing that can be described as "intelligent" can be made to be safe. — miyoji


iOS 27 is adding a 'Create a Pass' button to Apple Wallet

iOS 27 is adding a 'Create a Pass' button to Apple Wallet

371 points | 282 comments

The Wallet app UI is the peak of Apple's 'single 20y/o in sf' design.

Anyone who has multiple cards from the same bank (because, say, you have a personal account and a shared account with your partner) has to do the "pick between the two identical-looking top 20px of cards" dance every time they use Wallet to pay for something. It is mind-boggling that the current UI persists. — kilian


Computer Use is 45x more expensive than structured APIs

Computer Use is 45x more expensive than structured APIs

379 points | 215 comments

Great guidance hidden in here for making it expensive for agents to navigate your website: move elements on screen as the mouse moves, force natural mouse movement to make the UI work, randomize the button labels in the JS on every visit, force scrolling to the bottom of the screen to check for hidden extra tasks...

Hang on, that sounds like common corporate SaaS apps. — angry_octet
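The "randomly named labels" trick the comment mentions could be sketched roughly like this. This is a hypothetical helper, not taken from any real site; the `randomName` function and `btn_` prefix are made up for illustration:

```typescript
// Hypothetical sketch: regenerate DOM identifiers on every page load so that
// selectors an automated agent cached on a previous visit no longer match.
function randomName(prefix: string): string {
  // Math.random is enough here; this is obfuscation, not cryptography.
  return prefix + Math.random().toString(36).slice(2, 10);
}

// In a browser you would apply it on load, e.g.:
//   document.querySelectorAll("button").forEach(b => { b.id = randomName("btn_"); });
```

Humans never see the identifiers, so the page looks unchanged to them, while any agent keying off stable ids or labels has to re-discover the UI on each visit.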


Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement

https://apnews.com/article/meta-mark-zuckerberg-ai-publisher...

365 points | 325 comments

All those lawsuits against students who downloaded but didn't even redistribute mp3s. Less than a fair use transformation. Just the file download itself. ... Lesson learned: those students should have stolen millions instead! — glaslong

Today I've made the difficult decision to reduce the size of Coinbase by ~14%

Today I've made the difficult decision to reduce the size of Coinbase by ~14%

340 points | 517 comments

> Leaders will own much more, with as many as 15+ direct reports. [...] Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.

Oof. So not only are they giving their remaining managers more reports, but those managers will also be expected to do lots of other, non-management work.

Sure, nothing can go wrong there... Even if they didn't have non-managerial work to do, 15+ direct reports is just too many. They're not going to get to spend enough time meeting each report's needs, not a chance.

I think as layoffs emails go, it's a pretty good one (as the current top comment points out[0]), but boy, I would not want to be working at a company like what Coinbase is turning into. Non-technical teams shipping code to prod? No thanks. "AI-native pods"? No thanks. I do like the idea of one-person teams; I was at my most productive when I was in that kind of role (though I'm not sure my experience generalizes). I get that companies are still struggling to figure out how to adapt to LLMs, but... damn.

Pretty solid severance package for the folks being laid off, though.

[0] https://news.ycombinator.com/item?id=48021843 — kelnos


IBM didn't want Microsoft to use the Tab key to move between dialog fields

IBM didn't want Microsoft to use the Tab key to move between dialog fields

329 points | 194 comments

IBM was legendarily over-managed. This is second-hand, but a guy I used to work with told a story from when he interned for a summer at IBM in London during the mid-90s, doing what would now be called QA engineering. At that time everyone wore suits to work, but the culture was changing, so the interns put in a request to be allowed casual Fridays. Bear in mind that they were locked in a back room somewhere without any customer interaction, so they didn't think it was a big deal.

Months later, just before the end of the internship, they received a reply. Their manager had forwarded their request up the chain of command, and the email had the full quoted history. Their request had been bumped up 4 successive layers in the London office, then across to the US headquarters, where it continued its upward trajectory, finally alighting on the desk of a VP who, after thanking them for bringing the issue to his attention, rendered a carefully considered opinion.

The whole process had taken weeks, presumably as each person in the hierarchy debated whether they had the authority to tackle such a weighty issue.

The email had then been inexplicably bounced back DOWN the chain one link at a time, back across the Atlantic Ocean, and through the local office, down to the suit-bound interns, again weeks later, who by this stage only had days left at the internship.

The answer was no. — AndrewStephens


California farmers to destroy 420k peach trees following Del Monte bankruptcy

California farmers to destroy 420k peach trees following Del Monte bankruptcy

318 points | 372 comments

People underestimate how difficult it is to find buyers for the amount of produce we are talking about here.

Farmers are specialists at growing things, not at moving them across great distances, marketing them to dozens of small buyers, or starting up packing plants from scratch. They don't have enough trucks, people, or packaging machines to move it all around.

Maybe they can take some portion for local use. But the rest will spoil, and the rest of the land will be effectively unused, and a burden. The best option is to cut as much as possible and plant something else that actually sells.

Of course, people who have never been near agriculture will be appalled at this and call it a great injustice. — clarionbell


Digest: Hacker News: May 04 - May 05, 2026

Published: 1 week ago | Author: System

Talking to 35 Strangers at the Gym

Talking to 35 Strangers at the Gym

620 points | 315 comments

One of the things I like about this is that OP is giving people genuine compliments without any particular agenda.

It reminds me of one of my favorite parts of How to Win Friends and Influence People by Dale Carnegie, where he tells a story about complimenting someone, and a student asks what he was hoping to gain from offering the compliment. Carnegie is incensed:

> I was waiting in line to register a letter in the Post Office at Thirty-Third Street and Eighth Avenue in New York. I noticed that the registry clerk was bored with his job[...] So while he was weighing my envelope, I remarked with enthusiasm: “I certainly wish I had your head of hair.”

> He looked up, half-startled, his face beaming with smiles. “Well, it isn’t as good as it used to be,” he said modestly. I assured him that although it might have lost some of its pristine glory, nevertheless it was still magnificent. He was immensely pleased. We carried on a pleasant little conversation, and the last thing he said to me was: “Many people have admired my hair.”

> I told this story once in public; and a man asked me afterwards: “What did you want to get out of him?”

> What was I trying to get out of him!!! What was I trying to get out of him!!!

> If we are so contemptibly selfish that we can’t radiate a little happiness and pass on a bit of honest appreciation without trying to screw something out of the other person in return—if our souls are no bigger than sour crab apples, we shall meet with the failure we so richly deserve.

> Oh yes, I did want something out of that chap. I wanted something priceless. And I got it. I got the feeling that I had done something for him without his being able to do anything whatever in return for me. That is a feeling that glows and sings in your memory long after the incident is passed. — mtlynch


Removable batteries in smartphones will be mandatory in the EU starting in 2027

Removable batteries in smartphones will be mandatory in the EU starting in 2027

278 points | 279 comments

In principle, this is the kind of right sentiment but for the wrong things.

I can't remember a phone that died because of the battery since the era of Ni-Cd cells in early cell phones. I don't think I've ever discarded a phone with a Li-ion battery because of the battery. It's always physical breakage, or getting too slow to be usable because of age.

Sure, I don't go through a full charge cycle per day. Not even every other day. That's probably rare, I get that. But rather than dying batteries, I'd much prefer the EU to mandate:

- the phone should come with full keys so that I can own the machine if I want to
- or, at the very least, the hardware must become unlockable once the support period ends
- individual components should be made available for independent repairs
- repairs must not need software pairing of hardware components on unlocked devices

because of the right to own and the right to repair, which shouldn't be "rights" but nonnegotiable traits of physical property, like they used to be. — yason


Microsoft Edge stores all passwords in memory in clear text, even when unused

Microsoft Edge stores all passwords in memory in clear text, even when unused

362 points | 139 comments

This feels like a case of "It rather involved being on the other side of this airtight hatchway"[1]. If you can read arbitrary process memory, you're probably also in a position to just dump out the passwords by pretending to be the user in question.

> If an attacker gains administrative access on a terminal server, they can access the memory of all logged‑on user processes.

If an attacker has administrative access, they can also attach a debugger to every chrome process and force it to decrypt all the passwords. The only difference this really makes is in cold-boot attacks, but even then it's still not clear whether it makes the attacker's job slightly easier or allows an attack that's otherwise not possible.

[1] https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31... — gruez


Bun is being ported from Zig to Rust

Bun is being ported from Zig to Rust

409 points | 271 comments

Interesting to see this when the current top post on HN is someone worrying about Bun after it was acquired by Anthropic. The top comment there says, “Anthropic does experiments on their own codebase, the Bun team is not gonna do the same vibe coding experiments”.

Yet here we are, with what looks like a massive vibe-coding undertaking.

Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this. — stingraycharles


I am worried about Bun

I am worried about Bun

372 points | 251 comments

I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.

Now, even though their parent company engages in some shitty practices with their other software (Claude Code), it's a stretch to assume this will also translate into making Bun worse. Being worried makes sense, but I remain optimistic about Bun.

Especially given the different contexts of the two products: Claude Code is a gem of Anthropic's, experiencing extreme growth, where any change can result in billing issues.

Bun is a JS runtime and, regardless of its growth, can focus on being the best runtime possible: it impacts neither billing nor Anthropic's bottom line, so they don't have to rush out patches due to abuse, unlike CC.

It's unclear how it will pan out over the coming years; it's still very early in the acquisition to see whether anything will change, but I'm not concerned just yet. — AntonyGarand


US healthcare marketplaces shared citizenship and race data with ad tech giants

US healthcare marketplaces shared citizenship and race data with ad tech giants

392 points | 134 comments

I used a state (Colorado) healthcare marketplace website when I was going to take a break between jobs for a couple of months, and I feel very violated by the whole process. I entered a bunch of information into the website, knowing the data could be expected to be shared for quotes, but I got no quote. The information didn't just flow between systems; it was sent directly to a bunch of individuals. Instead of getting anything useful from the website, I was just told that agents would contact me, and then literally hundreds of agents were calling and texting me at all hours of the day and night for weeks. I asked one of them how to get it to stop and they said it was impossible during the government shutdown. — TallGuyShort

Incident with Issues and Webhooks – Resolved

Incident with Issues and Webhooks – Resolved

418 points | 252 comments

GitHub has published some incredible usage-rate increase numbers, which they ascribe to the rise of agentic coding. At some point, they are going to have to change rate limits, cut free-tier usage, or find some other path to reducing load. It's clear that their infrastructure can't keep up with this significant increase, and it's unlikely that they're going to just absorb the increased costs themselves.

Very curious to see what the future holds for GitHub. — AlexB138


How OpenAI delivers low-latency voice AI at scale

How OpenAI delivers low-latency voice AI at scale

216 points | 86 comments

Very grateful that OpenAI published the article and publicized their usage of Pion[0], a library I work on. If you aren't familiar with WebRTC, it's a super fun space. I work on a book, WebRTC for the Curious[1], that details how it works.

[0] https://github.com/pion/webrtc

[1] https://webrtcforthecurious.com — Sean-Der


Days without GitHub incidents

Days without GitHub incidents

348 points | 147 comments

I recently moved all my projects to a self-hosted forgejo instance and have found it quite satisfactory so far. And it's fast! If you're in the market for a github alternative, take a look - there are options. — dpe82

Y Combinator's Stake in OpenAI (0.6%?)

Y Combinator's Stake in OpenAI (0.6%?)

274 points | 37 comments

"well-known AI expert Gary Marcus" — FergusArgyll

Digest: Hacker News: May 03 - May 04, 2026

Published: 1 week ago | Author: System

Mercedes-Benz commits to bringing back physical buttons

Mercedes-Benz commits to bringing back physical buttons

244 points | 134 comments

'He also explained that "I'm a big believer in screens, because I really believe if you want to connect, you have to make the magic work behind the screen." '

I am a big believer in keeping "product people" away from UI design for dangerous machinery.

The eyes and the attention of the driver should be on the road. All the audio-visual noise from the car is just plain dangerous. I don't want my car to draw my attention to itself for anything less than a critical engine or tyre-pressure failure. I do not want beeps about anything else distracting me while I am driving.

My Volvo will, for instance, flash the same type of visual alert when the fuel level is low (a permanent "do you want to navigate to a fuel station" modal window obscuring navigation, the speedometer, and so on) as when it encounters a serious engine malfunction. It will steal a bit of my attention when it pops up. One of these days, someone will have an accident because of this moronic design; it's statistically certain.

Same with the wiper fluid level being low. I need to click a button to hide the message.

It will on occasion beep very loudly when it thinks I am not braking hard enough. The map in the Google Android car navi rotates when I am just trying to pan. When I want to select an alternative route, I need to touch a very small area on the screen very precisely, and more often than not, instead of selecting the alternative route it will rotate the map.

It is clear to me that the people designing car UIs are either staying away from those cars or are just incompetent. (Or, I guess, both.) — aenis


DeepClaude – Claude Code agent loop with DeepSeek V4 Pro

DeepClaude – Claude Code agent loop with DeepSeek V4 Pro

372 points | 139 comments

    #!/bin/sh
    # Point the Claude Code CLI at DeepSeek's Anthropic-compatible endpoint.
    export ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic
    export ANTHROPIC_AUTH_TOKEN=sk-secret
    export ANTHROPIC_MODEL=deepseek-v4-flash
    export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
    exec claude "$@"
— aftbit
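One caveat about wrappers ending in `exec claude $@`: the `$@` should be quoted as `"$@"`, or arguments containing spaces get broken apart by word splitting. A minimal demonstration (plain POSIX sh, no Claude required; `count_args` is a made-up helper for illustration):

```shell
#!/bin/sh
# count_args prints how many positional arguments it received.
count_args() { echo "$#"; }

set -- "one arg with spaces" "second"
count_args "$@"   # quoted: arguments pass through intact, prints 2
count_args $@     # unquoted: word splitting breaks them apart, prints 5
```

With a quoted `"$@"`, a prompt like `claude -p "fix the build"` reaches the CLI as one argument instead of three.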

New statue in London, attributed to Banksy, of a suited man, blinded by a flag

New statue in London, attributed to Banksy, of a suited man, blinded by a flag

237 points | 237 comments

The point is not just that he's blinded by the flag: he's boldly, confidently marching into the void. "Wrapped in the flag" is a great saying. — ggm

BYOMesh – New LoRa mesh radio offers 100x the bandwidth

BYOMesh – New LoRa mesh radio offers 100x the bandwidth

221 points | 74 comments

The "100x bandwidth" claim needs to be substantiated.

There are some significant regulatory issues with the current popular mesh network protocols in the USA: namely, neither MeshCore nor Meshtastic is compliant with the actual FCC regulations. Getting 100x bandwidth because you're breaking the rules isn't the same as getting 100x bandwidth legally.

Here is the issue discussing this in the MeshCore repository: https://github.com/meshcore-dev/MeshCore/issues/945 — AlphaWeaver


Agentic Coding Is a Trap

Agentic Coding Is a Trap

325 points | 232 comments

Interestingly, I’ve learned more about the languages, systems, and tools I use in the last few years of working with agentic coding than I did in 35 years of artisanal programming. I am still vastly superior to the agentic tools at making decisions about systems, techniques, and approaches, but they are like a really, really well-read intern who knows a great deal of detail about errata but has very little experience. They enthusiastically make mistakes but take feedback, at least up front, even if they often forget because they don’t totally understand and haven’t internalized it.

The claim that you should know everything about everything you work on is an intensely naive one. If you’ve worked on a team of more than one, there’s a lot of stuff you don’t totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you’re lucky if you even understand the parts everyone considers you an expert in.

I often get the impression folks making these claims are either very junior themselves or work basically alone or on some project for 20 years. No one who works in a team or larger org can claim they know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question and it will be able to answer it. And after reading other people’s code for most of my adult life, I absolutely can read the LLM's. The fact that a machine wrote crappy code rather than a human bothers me not in the least, and at least the machine will take my feedback and act on it. — fnordpiglet


A desktop made for one

A desktop made for one

213 points | 80 comments

This is really exciting.

Some of the folks who make things will go on to make things that suit not just their preferences but also those of a small audience.

Some of those audiences will go on to grow and grow and disrupt the big players.

The capital-intensive part of software construction is melting away and being converted to opex (pay-as-you-go token costs and your time), and that will blast open the possibility space and lead to a massive new commons.

If the thing was so cheap to create why not open source it!

And if you like someone else’s open source thing but don’t want to take it wholesale why not give it to your agent and say “put the ideas from this onto my thing”!

It’s a new way of thinking about code too. — cadamsdotcom


Why TUIs Are Back

Why TUIs Are Back

239 points | 259 comments

I think part of it is also that we're able to still LARP as full developers of complex systems while vibe coding by seeing an interface that makes us look like l33t h4xx0rs even though we're just pressing continue 15 times — schmorptron

Let's Buy Spirit Air

Let's Buy Spirit Air

302 points | 284 comments

Fundamental problem: flights don't make money. Airlines actually make all of their money through loyalty programs and credit card payments. They basically should have been turned into regulated utilities long ago, but loyalty program revenue saved them.

Unless this initiative turns into a credit card company (which nobody likes or wants to do), it won't go anywhere.

Private equity will likely sell the company for parts. There are no operational improvements for cash flow that they can make.

Useful watch (skip to 2:20): https://youtu.be/ggUduBmvQ_4?si=cyysP7aH_CIEDZRq — rapatel0


Metal Gear Solid 2's source code has been leaked on 4chan

Metal Gear Solid 2's source code has been leaked on 4chan

215 points | 86 comments

Maybe with the source code, I'd be able to figure out what the hell happened in the last ~2 hours of the game. — tombert

Southwest Headquarters Tour

Southwest Headquarters Tour

After years of flying Southwest, I recently had the opportunity to tour the headquarters in Dallas. I particularly enjoyed seeing the full-motion 737 simulators, Network Operations Center, and TechOps maintenance hangar up close.

228 points | 71 comments

I adore behind-the-scenes tours. I get that there's a lot of work that goes into making them happen, but when you drop into a place where people work, you'll learn so much about real-life problems that never make it to the Internet.

The greatest tour I ever had was at the Smokejumper base in remote WA. At any time when they're open, you're allowed to drop in for a tour, and whoever is there that day is obliged to give you one. Even at the height of fire season.

We got to see them pack parachutes, repair gear, coordinate parcel drops - everything. Our guide was a three-year jumper veteran on summer break from his master's degree in linguistics. It was incredible.

Any org that's proud of what they do should aspire to have public tours. — legitster