Hey HN! We're Nithin and Nikhil, twin brothers building BrowserOS (YC S24), an open-source, privacy-first alternative to the AI browsers from the big labs.

On BrowserOS, we provide first-class support for bringing your own LLMs, either local models or via API keys, and the agent runs entirely on the client side, so your data stays on your machine!

Today we're launching filesystem access: just like Claude Cowork, our browser agent can read files, write files, and run shell commands! But hon…
The best model kept changing, and I wanted to experiment with each new model as soon as possible.
Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who have spent the last two years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B-parameter models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly…
I built Perspectives because I got tired of ChatGPT agreeing with everything I said.

Ask any LLM to "consider multiple perspectives" and you get hedged consensus. The model acknowledges that trade-offs exist, then settles on a moderate position that offends nobody. Useful for summaries. Useless for decision making.

Perspectives forces disagreement. Eight personas with fundamentally incompatible frameworks debate your question through a structured protocol, then vote using Single Transferable Vote.
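The post doesn't show the voting code, but for a single winner STV reduces to instant-runoff: repeatedly drop the persona with the fewest first-choice votes and transfer ballots to the next surviving preference. A minimal sketch (the persona names are made up for illustration):

```python
from collections import Counter

def irv_winner(ballots):
    """Single-winner STV (instant-runoff). Each ballot is a ranked
    list of persona names; eliminate the weakest persona until one
    holds a strict majority of the remaining first choices."""
    active = {c for b in ballots for c in b}
    while True:
        tally = Counter()
        for b in ballots:
            for choice in b:           # first surviving preference counts
                if choice in active:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(active) == 1:
            return leader
        # eliminate the persona with the fewest first-choice votes
        loser = min(active, key=lambda c: tally.get(c, 0))
        active.remove(loser)

ballots = [
    ["stoic", "utilitarian", "contrarian"],
    ["utilitarian", "stoic", "contrarian"],
    ["contrarian", "utilitarian", "stoic"],
    ["utilitarian", "contrarian", "stoic"],
    ["stoic", "contrarian", "utilitarian"],
]
print(irv_winner(ballots))  # "contrarian" is eliminated; its ballot transfers
```

Transfers are what make this different from plurality voting: the contrarian voter's second choice decides the outcome.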
Hi HN! We're Kamran, Raaid, Laith, and Omeed from Constellation Space. We built an AI system that predicts satellite link failures before they happen. Here's a video walkthrough: https://www.youtube.com/watch?v=069V9fADAtM

Between us, we've spent years working on satellite operations at SpaceX, Blue Origin, and NASA. At SpaceX, we managed constellation health for Starlink. At Blue, we worked on next-gen test infra for New Glenn. At NASA, we dealt with deep space com…
I ran a systematic comparison of AI content detection and humanization tools after a client terminated a contract over an AI detection flag (87% AI-generated on content I'd manually edited).

*Methodology:*
- 31 tools tested over 90 days
- 200+ content samples (technical docs, marketing copy, blog posts, academic-style)
- Measured detection accuracy against known AI/human content
- Measured humanization "bypass rate" against Originality.ai (industry standard)
- Controlled for c…
I recently tried building a small agent for coaching centers.

The idea was simple: a teacher uploads a syllabus or notes, and the agent generates a test paper from that material. The hard requirement was reliability: no invented questions, no drifting outside the syllabus.

Instead of trying to "fix" hallucinations with better prompts, I constrained the agent's job very narrowly. I defined:
- a fixed knowledge base (only the uploaded syllabus)
- explicit tools the agent was allowed to use
- a structured outp…
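The post is cut off before the structured-output details, but one common way to enforce "no drifting outside the syllabus" is to require each generated question to cite a verbatim excerpt and reject any question whose excerpt isn't actually in the source. A minimal sketch under that assumption (the `Question` shape is hypothetical, not from the post):

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    source_excerpt: str  # verbatim excerpt the question is grounded in

def validate(questions, syllabus):
    """Keep only questions whose cited excerpt appears verbatim in the
    uploaded syllabus; everything else is treated as hallucinated."""
    return [q for q in questions if q.source_excerpt in syllabus]

syllabus = "Photosynthesis converts light energy into chemical energy."
drafts = [
    Question("What does photosynthesis convert?", "converts light energy"),
    Question("Define osmosis.", "osmosis is the diffusion of water"),
]
accepted = validate(drafts, syllabus)
print([q.text for q in accepted])  # the osmosis question is rejected
```

The check is deliberately dumb: exact substring matching is easy to audit, which matters more here than recall.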
Tenmo: a lightweight tensor library and neural network framework written in pure Mojo.

https://github.com/ratulb/tenmo

Tenmo focuses on:
- SIMD optimization
- explicit memory layout
- zero-copy views
- a minimal but practical autograd system

Status: Tenmo evolves alongside Mojo itself. APIs may change. Not production-ready yet.

Performance, MNIST (4-layer MLP, 105K params, 15 epochs):

| Platform | Device      | Avg Epoch | Total | Test Acc |
|----------|-------------|-----------|-------|----------|
| Tenmo    | CPU (Mojo)  | 11.4s     | 171s  | 97.44%   |
| PyTorch  | CPU         | 14.5s     | 218s  | 98.26%   |
I spent the last few months frustrated with RAG hallucinations and the cost of vector databases ($2k/mo for Pinecone).

I realized that cosine similarity is often a weak proxy for semantic truth, so I built a memory protocol based on Wasserstein-2 distance (optimal transport) instead.

The core idea: treat memory as a geometry problem. By measuring the transport cost between the "stored state" and the "retrieved state," we can mathematically enforce coherence. If the transpo…
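The post doesn't say how the states are represented, but if you model each state as a 1-D Gaussian, Wasserstein-2 has a well-known closed form, which makes the coherence check cheap. A sketch under that assumption (the threshold value is illustrative, not from the post):

```python
import math

def w2_gaussian_1d(mu1, s1, mu2, s2):
    """Closed-form Wasserstein-2 distance between two 1-D Gaussians:
    W2^2 = (mu1 - mu2)^2 + (s1 - s2)^2."""
    return math.sqrt((mu1 - mu2) ** 2 + (s1 - s2) ** 2)

def coherent(stored, retrieved, threshold=0.5):
    """Accept a retrieval only if the transport cost between the
    stored state and the retrieved state stays under the threshold."""
    return w2_gaussian_1d(*stored, *retrieved) <= threshold

print(coherent((0.0, 1.0), (0.1, 1.0)))   # small drift: accepted
print(coherent((0.0, 1.0), (3.0, 1.0)))   # large drift: rejected
```

Unlike cosine similarity, W2 also penalizes a mismatch in spread (the variance term), not just a mismatch in direction.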
I'm working on a project that needs to dynamically generate simple icons and diagrams. I've tried GPT-4 and Claude; they can output SVG code, but the results are hit or miss, especially for anything beyond basic shapes.

Has anyone found a reliable workflow for this? I'm wondering if there are specialized models, better prompting techniques, or if I should just use a traditional graphics library and skip the LLM route entirely. What's actually working in production for you?
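For the "traditional graphics library" branch of the question, simple icons don't even need a third-party library: SVG is just XML, so the stdlib is enough. A minimal sketch (the helper name and defaults are made up):

```python
import xml.etree.ElementTree as ET

def circle_icon(size=24, radius=10, color="#333"):
    """Build a minimal SVG icon deterministically with the stdlib,
    instead of asking an LLM to emit SVG markup."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(size), height=str(size),
                     viewBox=f"0 0 {size} {size}")
    ET.SubElement(svg, "circle", cx=str(size // 2), cy=str(size // 2),
                  r=str(radius), fill=color)
    return ET.tostring(svg, encoding="unicode")

print(circle_icon())
```

For anything parameterizable (badges, status dots, simple charts), this kind of template is reproducible in a way LLM output isn't; the LLM can still pick the parameters.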
We ran a 500-cycle benchmark to test long-horizon coherence, reasoning stability, and identity persistence in large language models.

The experiment used the Sigma Runtime, a model-agnostic control layer that adds long-term memory, structural coherence tracking, and adaptive equilibrium regulation to standard LLMs. It enables stable reasoning and personality continuity across hundreds of interactions without context resets.

Protocol overview:
- 500 reasoning cycles divided into 10 blocks of 50 quest…
Hi HN! I recently completed a PhD and am exploring a startup or lab-to-product path around next-generation memory technologies and neuromorphic devices (compute-in-memory, memristive devices, analog/mixed-signal accelerators), with a strong emphasis on theory-driven materials discovery and device physics (modeling → candidate materials → prototype path).

Is anyone here looking for a cofounder in this space, or interested in exploring one of these directions together?

Technical cofounder: device physics, m…
The goal is to make you pick one:
1. Am I deeply delusional?
2. Am I hyper-aware?

Please pick one before you remove this post.

There is no self-running feedback loop; that's why they hallucinate. These LLMs only know one direction, and that is feed-forward. Thinking does not work this way, and prompts are not how humans function. Emulating thinking also only works up to a certain limit; the original network does not know where the right answer is or where to stop thinking. These models do not…