The Era of Large AI Models Is Over
Size doesn’t matter
A Beginner’s Guide To Using Generative AI
Exploring Random Forests from Machine Learning
We all know how popular AI is becoming, and now is the time to ride the wave, since there is much more still to come…
Read on to learn how HandMol, a new free app, achieves immersive, collaborative visualization and modeling of molecular structures by…
Remember the roaring ’90s when dial-up modems whined and Artificial Neural Networks (ANNs) dominated the AI scene? Fast forward to today…
Remember the symphony of angry modems known as dial-up internet? Today, it’s a (nostalgic) memory, gathering dust in the corners of our…
The internet has come a long way since the early days of dial-up modems and AOL chat rooms. We’ve moved from the static pages of Web 1.0…
The electronics industry lives with a constant contradiction: developing the latest generation of products (GPUs, 5G modems, satellites…) by…
Rand McNally maps in your car, the corded landline phone on your kitchen wall, dial-up modems, brick-sized car phones, 36-exposure camera…
If you’re just getting started in Internet marketing now, I envy you. Today there are so many exciting tools and resources that don’t cost…
Who doesn’t love faster, better-performing internet? Especially when we’re running high-performance hardware?
A potential list of legally binding but to-be-determined “best practices” for internet companies could include almost anything.
This is another one of my automate-my-life projects: I'm constantly asking the same question to different AIs, since there's always the hope of getting a better answer somewhere else. Maybe ChatGPT's answer is too short, so I ask Perplexity. But I realize that's hallucinated, so I try Gemini. That answer sounds right, but I cross-reference with Claude just to make sure. This doesn't really apply to math/coding (where o1 or Gemini can probably one-shot an excellent…
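A minimal sketch of what that fan-out can look like, assuming each provider exposes an OpenAI-compatible chat-completions endpoint; the provider names, base URL, and model names below are placeholders, not the author's actual setup:

```python
# Fan the same prompt out to several providers in parallel and collect answers.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

PROVIDERS = {
    "openai":    OpenAI(),  # reads OPENAI_API_KEY from the environment
    "other-llm": OpenAI(base_url="https://api.example.com/v1", api_key="..."),  # placeholder
}
MODELS = {"openai": "gpt-4o", "other-llm": "some-model"}  # illustrative model names

def ask(name: str, prompt: str) -> tuple[str, str]:
    # Send one prompt to one provider and return (provider, answer).
    resp = PROVIDERS[name].chat.completions.create(
        model=MODELS[name],
        messages=[{"role": "user", "content": prompt}],
    )
    return name, resp.choices[0].message.content

def ask_everyone(prompt: str) -> dict[str, str]:
    # Query all providers concurrently so the slowest one sets the wall time.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(lambda n: ask(n, prompt), PROVIDERS))

if __name__ == "__main__":
    for name, answer in ask_everyone("Explain the ReAct pattern briefly.").items():
        print(f"--- {name} ---\n{answer}\n")
```

The answers come back side by side, which makes the cross-referencing step a matter of reading one output instead of four browser tabs.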
If we want to give an LLM access to external tools, the current best method seems to be some form of function calling (tool use, in OpenAI's case). This essentially puts a list of callable functions into the LLM's context and prompts it to return a structured output whenever it needs to call one of them. The backend code then catches these cases, executes the call, and feeds the result back to the LLM. This is great for validating that LLMs can understand how to use different tools…
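For concreteness, here is a minimal sketch of that loop using the OpenAI Python SDK's tools interface; the get_weather function, its schema, and the model name are illustrative assumptions rather than anything from the post:

```python
import json
from openai import OpenAI

client = OpenAI()

# The function schema the model sees in its context.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    # Hypothetical local implementation the backend dispatches to.
    return f"Sunny and 22 C in {city}"

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:  # the model returned a structured request to call a function
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)          # backend executes the call
        messages.append({                     # feed the result back to the model
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```

The structured tool_calls output is what lets the backend stay a dumb dispatcher: it never has to parse free-form text to figure out which function the model wants.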
Created a tool that lets you use LLMs to automate tasks across mobile (Android) and computer. Currently, it relies on screenshots and LLMs' ability to extract screen UI elements effectively. This is still a work in progress, and I'm attempting to make it work with local models via Ollama (the code is in place, with some issues). As of now, Gemini and GPT-4o work best for finding UI elements and planning the task. Some examples that work as of now: 1. Use Gmail and ask <friend>@example.com…
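A simplified sketch of the screenshot-driven loop described above (not the project's actual code): grab an Android screenshot over adb, ask a vision-capable model for the next action as JSON, and execute a tap. The model name and prompt format are assumptions.

```python
import base64, json, subprocess
from openai import OpenAI

client = OpenAI()

def screenshot_b64() -> str:
    # Capture the current Android screen as PNG bytes via adb.
    png = subprocess.run(["adb", "exec-out", "screencap", "-p"],
                         capture_output=True, check=True).stdout
    return base64.b64encode(png).decode()

def next_action(goal: str, image_b64: str) -> dict:
    # Ask a vision model to look at the screenshot and pick the next step.
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Goal: {goal}. Reply with JSON like "
                         '{"action": "tap", "x": 0, "y": 0} or {"action": "done"}.'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)

goal = "Open Gmail and start a new email"
for _ in range(10):                       # bounded observe -> decide -> act loop
    action = next_action(goal, screenshot_b64())
    if action.get("action") == "done":
        break
    subprocess.run(["adb", "shell", "input", "tap",
                    str(action["x"]), str(action["y"])], check=True)
```

Bounding the loop and forcing JSON output keeps a misbehaving model from tapping around the device indefinitely.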
Hi HN, I wrote a tool to scratch my own itch and decided to make it available to others. It's called ChatKeeper[0], and it syncs your ChatGPT export files to local Markdown files. This allows for easy and permanent local storage, searchability, and integration with note-taking applications like Obsidian (which I use). If you sync again after continuing conversations, ChatKeeper will find your conversation files, even if you've moved or renamed them, and update them in place, so you can…
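For a sense of what such a sync involves, here is a minimal sketch (not ChatKeeper itself) that assumes the conversations.json layout found in a recent ChatGPT data export: a list of conversations, each with a "title" and a "mapping" of message nodes. The field names reflect that export format and may change.

```python
import json, pathlib, re

def slugify(title: str) -> str:
    # Turn a conversation title into a safe filename.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") or "untitled"

def conversation_to_markdown(conv: dict) -> str:
    # Collect message nodes, order them by timestamp, and render as Markdown.
    nodes = [n["message"] for n in conv["mapping"].values() if n.get("message")]
    nodes.sort(key=lambda m: m.get("create_time") or 0)
    lines = [f"# {conv.get('title', 'Untitled')}", ""]
    for msg in nodes:
        role = msg["author"]["role"]
        parts = msg.get("content", {}).get("parts", [])
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if role in ("user", "assistant") and text:
            lines += [f"**{role}:**", "", text, ""]
    return "\n".join(lines)

export = json.loads(pathlib.Path("conversations.json").read_text(encoding="utf-8"))
out_dir = pathlib.Path("chatgpt-markdown")
out_dir.mkdir(exist_ok=True)
for conv in export:
    path = out_dir / f"{slugify(conv.get('title', 'untitled'))}.md"
    path.write_text(conversation_to_markdown(conv), encoding="utf-8")
```

The update-in-place behavior ChatKeeper describes would additionally need a stable conversation ID stored in each file (e.g. in front matter) so renamed or moved notes can be matched back to their source conversation.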
Hello HN, I've recently been trying out the latest and greatest software development agents, including Windsurf and Cursor. Before using those, I had been using aider for everyday practical SWE tasks, but aider is not very agentic: it's best at one-off, scoped programming tasks. I did a quick test by putting aider in a ReAct loop using create_react_agent from langchain. To my surprise, it ended up working very well: it's able to solve programming tasks that Windsurf has…
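A quick sketch of that setup, assuming langgraph's prebuilt ReAct agent, ChatOpenAI as the planner, and aider driven through its CLI; the exact aider flags may differ between versions, and the task prompt is made up:

```python
import subprocess
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def run_aider(instruction: str) -> str:
    """Hand a scoped coding task to aider and return its output."""
    result = subprocess.run(
        ["aider", "--yes", "--message", instruction],  # flag names may vary by aider version
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout[-4000:]  # keep only the tail so the agent's context stays small

# The ReAct agent plans, calls aider for each scoped edit, and reads the output
# before deciding on the next step.
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools=[run_aider])

state = agent.invoke({"messages": [
    ("user", "Add input validation to utils.py, then run the tests and fix any failures.")
]})
print(state["messages"][-1].content)
```

The division of labor is what makes this work: aider stays good at single scoped edits, while the outer ReAct loop supplies the multi-step planning it lacks.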
Hi all, I've been working on a low-code/no-code platform that lets you spin up deep learning workloads (LLMs, Hugging Face models, etc.), interconnect a bunch of them, and deploy them as APIs. The idea came up early in September, when I was experimenting with combining a Hugging Face BERT model with an LLM at work and realized it would be cool if I could do that instantly (especially since it was a prototype). At the time, I was considering a…
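For illustration, here is a hand-rolled sketch of the kind of chain such a platform might wire up visually (not the platform's own code): a Hugging Face sentiment classifier feeding an LLM, exposed as a single FastAPI endpoint. The model names and endpoint path are illustrative.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline
from openai import OpenAI

app = FastAPI()
classifier = pipeline("sentiment-analysis")   # BERT-family model from Hugging Face
llm = OpenAI()

class AnalyzeRequest(BaseModel):
    text: str

@app.post("/analyze")
def analyze(req: AnalyzeRequest):
    # Step 1: classify the incoming text with the BERT pipeline.
    label = classifier(req.text)[0]
    # Step 2: pass the classification to an LLM to draft a response.
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"The message below was classified as {label['label']} "
                       f"({label['score']:.2f}). Draft a suitable response.\n\n{req.text}",
        }],
    )
    return {"classification": label, "response": reply.choices[0].message.content}
```

Even this two-node chain needs model loading, request schemas, and deployment plumbing, which is exactly the glue code a visual platform would generate for you.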