Iran’s traditional naval fleet has been almost completely destroyed by US-Israeli raids. But Iran’s military has put a fleet of small vessels on the water that is choking every passageway.
Teen trusted ChatGPT to help him “safely” experiment with drugs, logs show.
Early-stage venture firm A* Capital just took the wraps off its $450 million Fund III.
Google I/O is just days away, and that means it’s time for another Android Show event to unpack all the Android ecosystem highlights Google has in store. The highlight of this year’s Android Show was Googlebooks, a new line of laptops running Android. They can cast apps from Android phones, pull files directly from a […]
Sadi Thermal Machines is Drew Baglino's second startup since leaving Tesla in 2024.
It is, once again, Gemini season. Google is announcing a host of new Gemini features during its pre-I/O Android showcase, many of which aim to use your phone for you. You'll find Gemini in more places, like Chrome on Android, in your autofill suggestions, and all up in your apps - if you want. […]
OpenAI CEO Sam Altman says Elon Musk did "huge damage" to the culture of the AI startup. During testimony as part of Musk's lawsuit against OpenAI, Altman said Musk required OpenAI president Greg Brockman and former chief scientist Ilya Sutskever to rank researchers by their accomplishments and "take a chainsaw through a bunch." Altman conceded […]
OpenAI's CEO recalls a "particularly hair-raising" conversation with the SpaceX founder.
Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.
We had long been frustrated by how little effort goes into agentic models that run on budget phones, so we investigated and landed on an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
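To make the "retrieval-and-assembly" framing concrete, here is a toy Python sketch of the task itself - match a query to a tool, extract arguments, emit JSON. This is purely our illustration of the idea (the tool names and regexes are made up), not Needle's actual pipeline, which does this neurally:

```python
import json
import re

# Hypothetical tool registry: tool name -> expected argument names.
TOOLS = {
    "set_timer": ["minutes"],
    "send_message": ["contact", "text"],
}

def call_tool(query, tools):
    # Retrieval: match the query to the first tool whose name shares a keyword.
    name = next(t for t in tools if any(w in query.lower() for w in t.split("_")))
    # Assembly: extract argument values from the query.
    args = {}
    if name == "set_timer":
        m = re.search(r"(\d+)\s*minute", query)
        if m:
            args["minutes"] = int(m.group(1))
    # Emit the structured JSON call.
    return json.dumps({"name": name, "arguments": args})

print(call_tool("Set a timer for 10 minutes", TOOLS))
# {"name": "set_timer", "arguments": {"minutes": 10}}
```

The point is that nothing here requires multi-step reasoning - only matching and extraction - which is why a small cross-attention model can cover it.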
The result is Simple Attention Networks: the entire model is just attention and gating, no MLPs anywhere. Needle is an experimental run at single-shot function calling for consumer devices (phones, watches, glasses...).
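As a rough sketch of what "just attention and gating, no MLPs" can look like, here is a minimal NumPy block: cross-attention retrieves from the context, and an elementwise sigmoid gate takes the place of the usual FFN. This is our own toy illustration of the principle, not Needle's actual architecture or code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention_block(x, ctx, Wq, Wk, Wv, Wg, Wo):
    """One block: cross-attention from query tokens x over context tokens ctx,
    modulated by an elementwise sigmoid gate. No MLP/FFN anywhere."""
    q, k, v = x @ Wq, ctx @ Wk, ctx @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = softmax(scores) @ v        # retrieval: mix context values
    gate = sigmoid(x @ Wg)            # gating stands in for the FFN
    return x + (gate * attn) @ Wo     # residual connection

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))          # 4 query tokens
ctx = rng.normal(size=(6, d))        # 6 context (tool-description) tokens
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(5)]
y = gated_attention_block(x, ctx, *Ws)
print(y.shape)  # (4, 8)
```

Stacking blocks like this keeps every parameter in attention or gating, which is where the retrieval-and-assembly work actually happens.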
Training:
- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)
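A single synthesized sample might look something like the following - an illustrative schema we are sketching here for readers, not the actual dataset format:

```json
{
  "tools": [
    {
      "name": "set_timer",
      "parameters": {"minutes": {"type": "integer"}}
    }
  ],
  "query": "Set a timer for 10 minutes",
  "target": {"name": "set_timer", "arguments": {"minutes": 10}}
}
```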
You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle
The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...
We found that the "no FFN" finding generalizes beyond function calling to any task where the model has access to external structured knowledge (RAG, tool use, structured retrieval). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.
While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, and LFM2.5-350M on single-shot function calling, those models have broader scope and more capacity, and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.
This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544
Everything is MIT licensed. Weights: https://huggingface.co/Cactus-Compute/needle
GitHub: https://github.com/cactus-compute/needle
Comments URL: https://news.ycombinator.com/item?id=48111896
Points: 31
# Comments: 5