Digest: Hacker News: May 12 - May 13, 2026
Published: 9 hours ago | Author: System
Bambu Lab is abusing the open source social contract
477 points | 168 comments
Full disclosure: I've never owned a Bambu because I've never loved the idea of a "closed" ecosystem 3D printer. However, I have used them, and I am very familiar with the 3D printing space beyond Bambu.
For anyone considering alternatives: you should know that almost all other 3D printers expect you to know a little more about how they actually work than Bambus do. Bambus are as close as you can get to a "just works" experience, but modern alternatives from other vendors are nowhere near as hard as they used to be.
The closest "easy" alternative is probably Prusa, but you'll pay significantly more for a Prusa machine than you would for a Bambu. They're an excellent company, and the complete opposite of Bambu when it comes to openness. If money is no object, Prusa is highly recommended.
Beyond Prusa, there are a lot of other options. This list is a good one: https://auroratechchannel.com/#section2
I personally run an old Elegoo Neptune 4 Pro, but my needs are quite low. If I were buying today, a Snapmaker U1 or a Creality K2 Plus is probably where I'd end up. — kn100
Googlebook
555 points | 890 comments
Gross. This is just more proof that corporations simply don't know how to market AI. Everything is an ad for an ad at this point. The very first thing they show this new machine doing is helping people shop for clothes using AI.
No one is doing that; these people don't exist, no matter how hard corporate America wishes they did. This is why AI doesn't sell. This is why companies like Microsoft and Dell are pulling back on their AI claims, and why Apple has nearly wiped it off their site altogether. Seriously, go check out apple.com: not a single mention of Apple Intelligence.
At this point I'm convinced that marketing has been completely taken over by shareholder shills, marketing to customers they wish they had instead of the real customers that exist. — Jzush
Why senior developers fail to communicate their expertise
339 points | 162 comments
Because the most important parts of the expertise come from their internal "world model" and are inseparable from it.
An average, unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the speaker meant; the only difficulty could come from not knowing the words or from ambiguity. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred well via words; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model that all your knowledge adds up to cannot be. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields surprisingly correct insights surprisingly often. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer. — hamstergene
EU to crack down on TikTok, Instagram's 'addictive design' targeting kids
378 points | 320 comments
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0. — conception
Rendering the Sky, Sunsets, and Planets
392 points | 34 comments
I saw this a while ago, so it might not be totally related, but Sebastian Lague did a video on atmospheres for his planet-generation experiment which was also very entertaining to watch [1]. There's something particularly entertaining about developing visuals and watching them become a reality. I hope to be able to experiment in this field at some point.
[1] https://www.youtube.com/watch?v=DxfEbulyFcY — etra0
Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model
Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M-parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.
We were always frustrated by the little effort made towards building agentic models that run on budget phones, so we conducted investigations that led to an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
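The "retrieval-and-assembly" framing can be illustrated with a deliberately minimal sketch that uses no model at all (the tool schemas and the keyword matcher here are invented for illustration, not taken from Needle):

```python
import json
import re

# Hypothetical tool schemas, invented for illustration.
TOOLS = {
    "set_timer":    {"keywords": {"timer", "remind"}},
    "send_message": {"keywords": {"text", "message"}},
}

def call_tool(query: str) -> str:
    """Match the query to a tool name, extract argument values, emit JSON."""
    words = set(query.lower().split())
    # Step 1: retrieval - pick the tool whose keywords overlap the query most.
    name = max(TOOLS, key=lambda t: len(TOOLS[t]["keywords"] & words))
    # Step 2: assembly - pull argument values out of the query text.
    if name == "set_timer":
        m = re.search(r"(\d+\s*(?:seconds?|minutes?|hours?))", query)
        args = {"duration": m.group(1) if m else None}
    else:
        m = re.search(r"(?:text|message)\s+(\w+)\s+(.*)", query, re.I)
        args = {"recipient": m.group(1), "body": m.group(2)} if m else {}
    # Step 3: emit the structured call.
    return json.dumps({"name": name, "arguments": args})

print(call_tool("set a timer for 10 minutes"))
```

A real model replaces the keyword matching and regexes with learned attention, but the shape of the task (look up, copy, format) is the same, which is the argument for why it needs so little capacity.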
The result is Simple Attention Networks: the entire model is just attention and gating, with no MLPs anywhere. Needle is an experimental run for single-shot function calling on consumer devices (phones, watches, glasses...).
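Not taken from the actual Needle code, but the "attention and gating, no MLPs" idea can be sketched roughly like this (NumPy, single head, toy dimensions, all names invented):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_gated_block(x, Wq, Wk, Wv, Wg, Wo):
    """One block: self-attention whose output is element-wise gated,
    with no feed-forward (MLP) sublayer at all."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (seq, seq)
    attn = scores @ v                                 # (seq, d)
    gate = 1 / (1 + np.exp(-(x @ Wg)))                # sigmoid gate from input
    return x + gate * (attn @ Wo)                     # gated residual update

rng = np.random.default_rng(0)
d, seq = 16, 8                                        # toy sizes, not Needle's
x = rng.normal(size=(seq, d))
params = [rng.normal(scale=0.1, size=(d, d)) for _ in range(5)]
y = attention_gated_block(x, *params)
print(y.shape)  # (8, 16)
```

The gate plays the mixing role an MLP would normally play, so the block's parameters are almost entirely attention projections, which is where the capacity savings come from at this scale.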
Training:
- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)
You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle
The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...
We found that the "no FFN" finding generalizes beyond function calling to any task where the model has access to external structured knowledge (RAG, tool use). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.
While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, LFM2.5-350M on single-shot function calling, those models have more scope/capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.
This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544
Everything is MIT licensed. Weights: https://huggingface.co/Cactus-Compute/needle GitHub: https://github.com/cactus-compute/needle
240 points | 86 comments
Hmm, this might make it feasible to build something like a command-line program where you can optionally just specify the arguments in natural language. I know people will object to including an extra 14 MB and the computation for "parsing", and it could be pretty bad if everyone started doing that. But it's really interesting to me that this may now be possible. You can include a fine-tuned model that understands how to use your program.
E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom` — ilaksh
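The commenter's idea could be prototyped without any model at all by stubbing out the function-calling step: a hypothetical `toolcli` takes the model's JSON tool call and maps it onto its own argparse subcommands. Everything below, including the stub and the command names, is invented for illustration:

```python
import argparse
import json

def fake_model(utterance: str) -> str:
    """Stand-in for a small function-calling model: returns a JSON tool call.
    A real setup would run an on-device model such as Needle here."""
    if "add" in utterance:
        words = utterance.split()
        return json.dumps({"name": "gadd",
                           "arguments": {"group": words[-2], "user": words[1]}})
    return json.dumps({"name": "help", "arguments": {}})

def run(utterance: str) -> str:
    call = json.loads(fake_model(utterance))
    parser = argparse.ArgumentParser(prog="toolcli")
    sub = parser.add_subparsers(dest="cmd")
    gadd = sub.add_parser("gadd")
    gadd.add_argument("group")
    gadd.add_argument("user")
    sub.add_parser("help")
    # Flatten the model's JSON tool call into ordinary argv for argparse.
    argv = [call["name"]] + [call["arguments"][k]
                             for k in ("group", "user") if k in call["arguments"]]
    ns = parser.parse_args(argv)
    if ns.cmd == "gadd":
        return f"added {ns.user} to {ns.group}"
    return "usage: toolcli ..."

print(run("add tom to teamfutz group"))  # added tom to teamfutz
```

The model never executes anything itself; it only emits a structured call that the program validates through its normal argument parser, which keeps the natural-language path as safe as the flag-based one.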
Restore full BambuNetwork support for Bambu Lab printers
352 points | 144 comments
This looks to be a clone of the prior state of the repository that caused all the Bambu drama earlier this week. I did a ton of research because I didn't understand what people wanted here, and this is what's going on:
Right now, Bambu have adjusted their system into two modalities:
* "default" or "Cloud" mode, where you get an app, remote monitoring, but you have to use Bambu Studio or Bambu Connect to send prints. They implemented this by adding cloud auth to their "internal API;" the client application has to get a token from Bambu's servers, even if the request it eventually makes is a "local" one.
* LAN / Developer mode, where the device displays a token and you put it into your app. This disables all of the remote monitoring but in exchange, clients can send prints locally.
What users want is to "have their cake and eat it too": they want the local token authentication _and_ the cloud authentication enabled at the same time. This isn't actually possible, so this plugin approximates it by emulating the interface to the cloud authentication in order to make the "Bambu Network" cloud RPC calls from a local slicer. (One of these calls is a local_print call, so ostensibly this allows you to send prints without running them through the cloud, although with all of the online functionality still enabled and required, this seems like a pretty brave thing to trust.)
Personally, I find the Bambu reaction distasteful, and there's an argument that the offline mode only exists due to similar outrage, but I don't see the current system as particularly bad and find the appetite to restore "untrustworthy" cloud functionality a bit amusing. — bri3d
The Future of Obsidian Plugins
281 points | 116 comments
Obsidian CEO here. We've been working for nearly a year to launch this new Community site and review system. I'm very excited about this first version, but there are many more improvements to come.
I've tried to be exhaustive with the blog post, FAQs, and next steps on our roadmap, but I am sure I forgot some things, so feel free to ask!
This has been an incredibly challenging project for a number of reasons. We're only seven people but we have thousands of plugin developers and millions of users. There are many competing priorities to balance.
We wanted to make sure the new system would be easy to adopt, backwards compatible, and not completely break people's workflows, while still being a major improvement over the old approach and allowing us to gradually continue enhancing the security and discoverability of plugins.
Consider it a work in progress. We're listening to everyone's ideas and gripes, and will keep iterating :) — kepano
Operation: Epic Furious
323 points | 111 comments
It's great, except the war is obviously for Israel, not oil; we had more access to oil before the war. — an0malous
How to make your text look futuristic (2016)
306 points | 36 comments
Does the Back to the Future logo really count? Raiders of the Lost Ark has a very similar style but does not evoke "future". Yes, there are subtle differences. My point is, if you divorced them from their connection to their content, I think it would be hard to point to one as "future" and the other as "not future". — socalgal2