Buy A Modem

Copy-left open-source license for AI code use

I'm thinking that we need a new open-source license that copies an existing license, such as the GNU AGPL (or any flavor, really), but adds language specific to AI training:

```
This code may be used by AI models freely, but any model trained with this code, or using this code as part of inference, in whole or in part, with or without modification, must be made public under a "Copyleft AI License". All trained model weights, as well as model training and inference source code, a
```

Show HN: Prompts are coupled to LLMs and nobody builds tooling for it

I went down a rabbit hole trying to understand why my Claude prompts turn to garbage on GPT-4 and vice versa. Not just "slightly worse" — fundamentally broken. Turns out researchers have already measured this: removing colons from a prompt template swings LLaMA-2-13B accuracy by 78 percentage points (Sclar et al., ICLR 2024). The format that works best on one model family overlaps less than 20% with what works best on another (He et al. 2024). So I went looking for a tool that handles t
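The harness for measuring this kind of sensitivity is simple to sketch: enumerate the formatting choices of one template and score each variant per model. A minimal sketch, with made-up separator and prefix choices (the names here are illustrative, not any real tool's API):

```python
# Hypothetical sketch: enumerate formatting variants of one prompt template
# (separator and casing/prefix choices) so each model family can be graded
# on the variant that suits it best.
from itertools import product

FIELD_SEPARATORS = [": ", " - ", "\n"]   # e.g. "Question: ..." vs "Question - ..."
ITEM_PREFIXES = ["", "## "]

def render(template_fields, sep, prefix):
    """Join (label, value) pairs into one prompt string."""
    return "\n".join(f"{prefix}{label}{sep}{value}" for label, value in template_fields)

def variants(template_fields):
    """Yield every separator/prefix combination of the template."""
    for sep, prefix in product(FIELD_SEPARATORS, ITEM_PREFIXES):
        yield (sep, prefix), render(template_fields, sep, prefix)

fields = [("Question", "What is 2+2?"), ("Answer", "")]
all_variants = dict(variants(fields))
# 3 separators x 2 prefixes = 6 candidate formats to score per model
```

Each variant would then be run through the target model's eval set; the cited papers suggest the best key differs sharply between model families.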

Show HN: Local TTS for OpenClaw on Apple Silicon (MLX-Powered, Zero Setup)

I built an OpenClaw plugin that runs text-to-speech entirely on your Mac. No API keys, no cloud, no pre-installed Python required. It wraps mlx-audio and handles the full lifecycle: bootstraps its own Python environment via uv, downloads the model on first run, manages the server process, auto-restarts on crash, and exposes a standard OpenAI-compatible /v1/audio/speech endpoint.

Installation: openclaw plugin install @cosformula/openclaw-mlx-audio

Four models out of the box: • Kok
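Because the endpoint is OpenAI-compatible, any client that speaks the audio.speech request shape should work. A minimal sketch, assuming the server listens on 127.0.0.1:8000 and a model name of "kokoro" (both assumptions, not the plugin's documented defaults):

```python
# Sketch of a client request for an OpenAI-compatible /v1/audio/speech
# endpoint. Host, port, model, and voice below are assumptions.
import json
from urllib import request

def build_speech_request(text, model="kokoro", base="http://127.0.0.1:8000"):
    """Build a POST request matching the OpenAI audio.speech schema."""
    payload = {"model": model, "input": text, "voice": "default"}
    return request.Request(
        f"{base}/v1/audio/speech",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_speech_request("Hello from the Mac")
# req.data holds the JSON body; send it with urllib.request.urlopen(req)
# and write the response bytes to an audio file.
```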

Project Paperclip. The time has come

Long rumored that the world would end due to a goal of "maximizing paperclips," it is time to start. Every good world-ending idea needs a plan. Given an infinite number of paperclips, what are the design constraints? Paperclip designs are the "string theory" of physical construction.

Ideas in 1 dimension

Obviously you could make a single one-dimensional line of paperclips. This raises the question of multiple sizes of paperclips. Can we make an ever smaller paperclip progression

Show HN: Optimize_anything: A Universal API for Optimizing Any Text Parameter

We built optimize_anything, an API that optimizes any artifact representable as text — code, prompts, agent architectures, configs, even SVGs. It extends GEPA (our prompt optimizer, discussed here previously: https://arxiv.org/abs/2507.19457) far beyond prompts. The API is deliberately minimal. You provide what to optimize and how to measure it:

import gepa.optimize_anything as oa

def evaluate(candidate: str) -> tuple[float, dict]:
    result = run_my_system(candidate)
    re
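To make the truncated snippet concrete, here is a hedged sketch of the evaluate() contract it describes, with a stand-in run_my_system; the actual optimizer call is omitted because its signature is not shown in the post:

```python
# Illustrative sketch of the evaluate() contract: the optimizer hands you a
# candidate string, you return (score, metadata). run_my_system here is a
# stand-in; the real one would run your actual workload.

def run_my_system(candidate: str) -> dict:
    # Stand-in "system": reward candidates that mention 'sort'
    return {"passed": "sort" in candidate, "length": len(candidate)}

def evaluate(candidate: str) -> tuple[float, dict]:
    result = run_my_system(candidate)
    score = 1.0 if result["passed"] else 0.0
    score -= 0.001 * result["length"]   # mild brevity pressure
    return score, result                # metadata can guide the optimizer

score, info = evaluate("def sort(xs): return sorted(xs)")
```

The float drives the search; the dict gives the optimizer (or you) richer feedback than a bare number.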

Show HN: 17MB pronunciation scorer beats human experts at phoneme level

I built an English pronunciation assessment engine that fits in 17MB and runs in under 300ms on CPU.

Architecture: CTC forced alignment + GOP scoring + ensemble heads (MLP + XGBoost). No wav2vec2 or large self-supervised models — the entire pipeline uses a quantized NeMo Citrinet-256 as the acoustic backbone.

Benchmarked on speechocean762 (standard academic benchmark, 2500 utterances):
- Phone accuracy (PCC): 0.580 — exceeds human inter-annotator agreement (0.555)
- Sentence accuracy: 0.710 — exce
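For readers unfamiliar with GOP scoring, a minimal sketch of one common formulation: compare the canonical phone's log-posterior against the best competing phone, averaged over the frames the CTC alignment assigns to the segment. The posteriors below are invented for illustration:

```python
# Minimal Goodness of Pronunciation (GOP) sketch. For each aligned frame,
# score = log P(canonical phone) - log P(best phone); average over frames.
# A score near 0 means the canonical phone dominated, i.e. well pronounced.
import math

def gop(frame_posteriors, canonical):
    """frame_posteriors: list of {phone: prob} dicts for one aligned segment."""
    scores = []
    for post in frame_posteriors:
        target = math.log(post[canonical])
        best = math.log(max(post.values()))
        scores.append(target - best)        # 0 when canonical is the top phone
    return sum(scores) / len(scores)

frames = [{"AH": 0.7, "AE": 0.3}, {"AH": 0.4, "AE": 0.6}]
score = gop(frames, "AH")                   # <= 0; nearer 0 means better
```

The real engine layers ensemble heads (MLP + XGBoost) on top of raw GOP features rather than thresholding the score directly.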

Show HN: Docdex – A local tool to reduce LLM tokens and make agents smarter

Hi HN,

I use LLMs every day for software development, and I wanted a better experience: reducing token usage and giving the agent digested information about the codebase. So I built Docdex. The idea was simple: a local, persistent layer that preprocesses and structures your project so the model can spend its context window on the actual problem, not on rediscovering what already exists. It started as a document indexer in Rust, built on Tantivy for proper ranked full-text search. T

Show HN: CRTX – AI code gen that tests and fixes its own output (OSS)

We built an open-source CLI that generates code, runs tests, fixes failures, and gets an independent AI review — all before you see the output. We started with a multi-model pipeline where different AI models handled different stages (architect, implement, refactor, verify). We assumed more models meant better code. Then we benchmarked it: 39% average quality score at $4.85 per run. A single model scored 94% at $0.36. Our pipeline was actively making things worse. So we killed it and rebuilt aro

Show HN: Preact Health

In 2018, I attended Startup School with the intention of creating a health tech startup. It started as an EHR, but faced with an overly crowded space and a dwindling appetite for another "social media" platform, I trashed the idea and started something new: a basic way for people to understand their health, distilling a complex topic into something simpler and more understandable. I've finally picked up enough technical skills (with some coding help from AI) and data/business understanding to bring out Pre

Local iOS voice to text app (alternative to Wispr Flow)

I usually dictate for 2 to 3 hours every day in Dragon dictation and until recently used Wispr Flow on my personal devices. Over the last few months, I realized that local AI models can give you the same quality as Wispr Flow with complete privacy and without the ongoing subscription cost. So I built an iOS app, a macOS app, and an Android app.

Testflight link: https://testflight.apple.com/join/e5pcxwyq

I am happy to offer the app for free to people who offer useful feedback for t

Show HN: LLMWise – Compare, Blend, and Judge LLM Outputs from One API

The core idea is that no single LLM is best at everything, so we built orchestration primitives that let you combine them intelligently via a single API.

Mixture-of-Agents (MoA): Our /blend endpoint implements multi-layer MoA. You send a prompt to 2-6 models in parallel, then each model refines its answer using the other models' outputs as reference material. This runs for 1-3 configurable layers before a synthesizer model produces the final response. We also built a Self-MoA variant: a
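A sketch of what a /blend request body might look like, enforcing the constraints stated above (2-6 models, 1-3 layers); the field names are assumptions for illustration, not the documented LLMWise schema:

```python
# Hypothetical /blend payload builder. "models", "layers", and "synthesizer"
# are assumed field names based on the post's description.
import json

def blend_payload(prompt, models, layers=2, synthesizer=None):
    if not 2 <= len(models) <= 6:
        raise ValueError("MoA blend expects 2-6 models")
    if not 1 <= layers <= 3:
        raise ValueError("layers is configurable from 1 to 3")
    return {
        "prompt": prompt,
        "models": models,
        "layers": layers,                         # refinement rounds
        "synthesizer": synthesizer or models[0],  # produces the final answer
    }

body = json.dumps(blend_payload("Summarize MoA", ["model-a", "model-b"]))
```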

EdgeQ Bases AI-Enhanced 5G Modems on RISC-V

5G and artificial intelligence (AI) startup EdgeQ today announced that its upcoming modems will be built on the RISC-V architecture. This approach allows machine learning inference capabilities to be ...

Qualcomm: From Modems To On-Device Intelligence - Time For Reassessment

Qualcomm is transforming from a smartphone chipmaker to a platform leader in automotive, IoT, and edge AI, driving robust non-handset growth. Automotive and edge AI are now core growth engines, with ...

Dance Dancing GIF

Pixel Art Animated Gif GIF by Potatozzz by 9GAG

Save Them All Best Friend GIF by Best Friends Animal Society

Dial Up Mr Mackey GIF by South Park

Crypto Knights GIF by Blockchainff

Show HN: MicroGPT in 243 Lines – Demystifying the LLM Black Box

The release of microgpt by Andrej Karpathy is a foundational moment for AI transparency. In exactly 243 lines of pure, dependency-free Python, Karpathy has implemented the complete GPT algorithm from scratch. As a PhD scholar investigating AI and Blockchain, I see this as the ultimate tool for moving beyond the "black box" narrative of Large Language Models (LLMs).

The Architecture of Simplicity

Unlike modern frameworks that hide complexity behind optimized CUDA kernels, microgpt expose
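The heart of any from-scratch GPT is causal self-attention, which itself fits in a few lines of dependency-free Python. A sketch in that spirit (this is illustrative, not code from microgpt):

```python
# Dependency-free causal self-attention for lists of vectors, one per token.
# Each token attends only to itself and earlier tokens (the causal mask).
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def causal_attention(q, k, v):
    d = len(q[0])
    out = []
    for t in range(len(q)):
        # scaled dot-product scores against positions 0..t only
        scores = [sum(qi * ki for qi, ki in zip(q[t], k[s])) / math.sqrt(d)
                  for s in range(t + 1)]
        w = softmax(scores)
        out.append([sum(w[s] * v[s][j] for s in range(t + 1))
                    for j in range(len(v[0]))])
    return out

y = causal_attention([[1.0, 0.0], [0.0, 1.0]],
                     [[1.0, 0.0], [0.0, 1.0]],
                     [[1.0, 2.0], [3.0, 4.0]])
# y[0] depends only on token 0; y[1] mixes both tokens
```

A real GPT adds learned projections, multiple heads, and an MLP per block, but the masking-plus-softmax core is exactly this small.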