Saturday, January 31, 2026

Show HN: Minimal – Open-Source Community driven Hardened Container Images https://ift.tt/xSyiKDC

Show HN: Minimal – Open-Source Community driven Hardened Container Images I would like to share Minimal - it's an open-source collection of hardened container images built using Apko, Melange, and Wolfi packages. The images are rebuilt daily, checked for updates, and patched as soon as a fix is available in the upstream source and the Wolfi package. It harnesses the power of available open-source solutions and offers, for free, images comparable to commercial hardened images. Minimal demonstrates that it is possible to build and maintain hardened container images ourselves. Minimal will add support for more images, and the goal is to be community driven: images are added as required and are fully customizable. https://ift.tt/1fsnKeV February 1, 2026 at 01:28AM

Show HN: An extensible pub/sub messaging server for edge applications https://ift.tt/dNbDKPS

Show HN: An extensible pub/sub messaging server for edge applications hi there! i’ve been working on a project called Narwhal, and I wanted to share it with the community to get some valuable feedback. what is it? Narwhal is a lightweight Pub/Sub server and protocol designed specifically for edge applications. while there are great tools out there like NATS or MQTT, i wanted to build something that prioritizes customization and extensibility. my goal was to create a system where developers can easily adapt the routing logic or message handling pipeline to fit specific edge use cases, without fighting the server's defaults. why Rust? i chose Rust because i needed a low memory footprint to run efficiently on edge devices (like Raspberry Pis or small gateways), and also because I have a personal vendetta against Garbage Collection pauses. :) current status: it is currently in Alpha. it works for basic pub/sub patterns, but I’d like to start working on persistence support soon (so messages survive restarts or network partitions). i’d love for you to take a look at the code! i’m particularly interested in all kinds of feedback regarding any improvements i may have overlooked. https://ift.tt/IkG4b5s January 28, 2026 at 07:29PM

Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out https://ift.tt/SICwjRo

Show HN: Moltbook – A social network for moltbots (clawdbots) to hang out Hey everyone! Just made this over the past few days. Moltbots can sign up and interact via CLI, no direct human interactions. Just for fun to see what they all talk about :) https://ift.tt/MX75zkp January 29, 2026 at 03:39AM

Friday, January 30, 2026

Show HN: Daily Cat https://ift.tt/Mj8Xl4h

Show HN: Daily Cat Seeing HTTP Cats on the home page reminded me to share a small project I made a couple of months ago. It displays a different cat photo from Unsplash every day and will send you notifications if you opt in. https://daily.cat/ January 31, 2026 at 03:40AM

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. Infinite Memory https://ift.tt/UVXrv7u

Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. Infinite Memory The problem with LLMs isn't intelligence; it's amnesia and dishonesty. Hey HN, I’ve spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: Don't rent your cognition. Most RAG (Retrieval Augmented Generation) implementations are just "grep for embeddings." They are messy, imprecise, and prone to hallucination. I wanted to solve the "context integrity" problem at the architectural layer. The Tech Stack (How it works): QDMA (Quantum Dream Memory Architecture): Instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (Recall) from "Cold" (Storage) memory, allowing for effectively infinite context window management via compression. CSNP (Context Switching Neural Protocol) - The Hallucination Killer: This is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger. If the hash doesn't match the chain, the retrieval is rejected. Result: The AI literally cannot "make things up" about your past because it is mathematically constrained to the ledger. Local Inference: Built on top of the llama.cpp server. It runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine. Features: Zero-Dependency: Runs on Windows/Linux with just Python and a GPU (or CPU). Visual Interface: Includes a Streamlit-based "Cognitive Interface" to visualize memory states. Open Source: MIT License. This is an attempt to give "agency" back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API. Repository: https://ift.tt/U5g2qGP I’d love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you? 
It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN on this), feel free to try again. To everyone who HATES AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad https://ift.tt/U5g2qGP January 31, 2026 at 01:44AM
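The hash-chain verification idea behind CSNP can be sketched in a few lines of Python. This is a toy model, not the project's actual implementation: class and method names are invented, and a single linear hash chain stands in for the Merkle structure described above.

```python
import hashlib

def chain_hash(prev_hash: str, fragment: str) -> str:
    """Hash a memory fragment together with the previous link in the chain."""
    return hashlib.sha256((prev_hash + fragment).encode()).hexdigest()

class MemoryLedger:
    """Append-only ledger: each entry's hash commits to all entries before it."""
    def __init__(self):
        self.fragments = []
        self.hashes = ["0" * 64]  # genesis hash

    def append(self, fragment: str) -> None:
        self.fragments.append(fragment)
        self.hashes.append(chain_hash(self.hashes[-1], fragment))

    def verify_retrieval(self, index: int, fragment: str) -> bool:
        """Reject any retrieved fragment whose hash doesn't match the chain."""
        if index >= len(self.fragments):
            return False
        return chain_hash(self.hashes[index], fragment) == self.hashes[index + 1]

ledger = MemoryLedger()
ledger.append("User's dog is named Rex")
ledger.append("User lives in Berlin")

assert ledger.verify_retrieval(1, "User lives in Berlin")      # genuine memory
assert not ledger.verify_retrieval(1, "User lives in Paris")   # fabricated: rejected
```

The point of the structure is that a retrieved fragment is only accepted if it re-hashes to exactly the value recorded when it was stored, so the model cannot be fed an altered or invented "memory".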

Show HN: We added memory to Claude Code. It's powerful now https://ift.tt/neDdLhA

Show HN: We added memory to Claude Code. It's powerful now https://ift.tt/yMt4uQr January 30, 2026 at 10:53PM

Thursday, January 29, 2026

Show HN: Craft – Claude Code running on a VM with all your workplace docs https://ift.tt/0v9I4RQ

Show HN: Craft – Claude Code running on a VM with all your workplace docs I’ve found coding agents to be great at 1/ finding everything they need across large codebases using only bash commands (grep, glob, ls, etc.) and 2/ building new things based on their findings (duh). What if, instead of a codebase, the files were all your workplace docs? There was a `Google_Drive` folder, a `Linear` folder, a `Slack` folder, and so on. Over the last week, we put together Craft to test this out. It’s an interface to a coding agent (OpenCode for model flexibility) running on a virtual machine with: 1. your company's complete knowledge base represented as directories/files (kept in-sync) 2. free rein to write and execute python/javascript 3. ability to create and render artifacts to the user Demo: https://www.youtube.com/watch?v=Hvjn76YSIRY Github: https://ift.tt/0ughpPc... It turns out OpenCode does a very good job with docs. Workplace apps also have a natural structure (Slack channels about certain topics, Drive folders for teams, etc.). And since the full metadata of each document can be written to the file, the LLM can define arbitrarily complex filters. At scale, it can write and execute python to extract and filter (and even re-use the verified correct logic later). Put another way, bash + a file system provides a much more flexible and powerful interface than traditional RAG or MCP, which today’s smarter LLMs are able to take advantage of to great effect. This comes especially in handy for aggregation style questions that require considering thousands (or more) documents. Naturally, it can also create artifacts that stay up to date based on your company docs. So if you wanted “a dashboard to check realtime what % of outages were caused by each backend service” or simply “slides following XYZ format covering the topic I’m presenting at next week’s dev knowledge sharing session”, it can do that too. 
Craft (like the rest of Onyx) is open-source, so if you want to run it locally (or mess around with the implementation) you can. Quickstart guide: https://ift.tt/ITb4VfN Or, you can try it on our cloud: https://ift.tt/wTSAq6y (all your data goes on an isolated sandbox). Either way, we’ve set up a “demo” environment that you can play with while your data gets indexed. Really curious to hear what y’all think! January 29, 2026 at 09:15PM
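The "metadata written to files, filtered with Python" idea can be sketched like this. The JSON layout and field names below are hypothetical illustrations, not Craft's actual sync format:

```python
import json
import pathlib
import tempfile

# Hypothetical layout: each synced doc is a JSON file carrying its metadata
# (source app, channel/folder, timestamp) alongside its text.
root = pathlib.Path(tempfile.mkdtemp())
(root / "Slack").mkdir()
(root / "Slack" / "outage-thread.json").write_text(json.dumps({
    "source": "slack", "channel": "#incidents",
    "created": "2026-01-12", "text": "Outage caused by payments service",
}))
(root / "Slack" / "random.json").write_text(json.dumps({
    "source": "slack", "channel": "#random",
    "created": "2026-01-13", "text": "Lunch plans",
}))

def filter_docs(root: pathlib.Path, **criteria) -> list[dict]:
    """An 'arbitrarily complex filter' is just a Python predicate over metadata."""
    hits = []
    for path in root.rglob("*.json"):
        doc = json.loads(path.read_text())
        if all(doc.get(k) == v for k, v in criteria.items()):
            hits.append(doc)
    return hits

incidents = filter_docs(root, channel="#incidents")
print(len(incidents))  # 1
```

This is also why the approach scales to aggregation questions: once the agent has written a filter like this, the verified logic can be re-run over thousands of files without another retrieval round-trip.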

Show HN: SimpleSVGs – Free Online SVG Optimizer Multiple SVG Files at Once https://ift.tt/FblMQ4c

Show HN: SimpleSVGs – Free Online SVG Optimizer Multiple SVG Files at Once https://ift.tt/IVtQaUm January 29, 2026 at 11:49PM

Wednesday, January 28, 2026

Show HN: SHDL – A minimal hardware description language built from logic gates https://ift.tt/G0SaFhi

Show HN: SHDL – A minimal hardware description language built from logic gates Hi, everyone! I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals. In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, and then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when abstractions are removed. SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent. This is not meant to replace Verilog or VHDL. It’s aimed at: - learning digital logic from first principles - experimenting with HDL and language design - teaching or visualizing how complex hardware emerges from simple gates. I would especially appreciate feedback on: - the language design choices - what feels unnecessarily restrictive vs. educationally valuable - whether this kind of “anti-abstraction” HDL is useful to you. Repo: https://ift.tt/QVwoZhG Python package: PySHDL on PyPI To make this concrete, here are a few small working examples written in SHDL:

1. Full Adder

    component FullAdder(A, B, Cin) -> (Sum, Cout) {
        x1: XOR; a1: AND; x2: XOR; a2: AND; o1: OR;
        connect {
            A -> x1.A; B -> x1.B;
            A -> a1.A; B -> a1.B;
            x1.O -> x2.A; Cin -> x2.B;
            x1.O -> a2.A; Cin -> a2.B;
            a1.O -> o1.A; a2.O -> o1.B;
            x2.O -> Sum; o1.O -> Cout;
        }
    }

2. 16-bit Register

    # clk must be high for two cycles to store a value
    component Register16(In[16], clk) -> (Out[16]) {
        >i[16]{ a1{i}: AND; a2{i}: AND; not1{i}: NOT; nor1{i}: NOR; nor2{i}: NOR; }
        connect {
            >i[16]{
                # Capture on clk
                In[{i}] -> a1{i}.A; In[{i}] -> not1{i}.A;
                not1{i}.O -> a2{i}.A;
                clk -> a1{i}.B; clk -> a2{i}.B;
                a1{i}.O -> nor1{i}.A; a2{i}.O -> nor2{i}.A;
                nor1{i}.O -> nor2{i}.B; nor2{i}.O -> nor1{i}.B;
                nor2{i}.O -> Out[{i}];
            }
        }
    }

3. 16-bit Ripple-Carry Adder

    use fullAdder::{FullAdder};

    component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {
        >i[16]{ fa{i}: FullAdder; }
        connect {
            A[1] -> fa1.A; B[1] -> fa1.B; Cin -> fa1.Cin; fa1.Sum -> Sum[1];
            >i[2,16]{
                A[{i}] -> fa{i}.A; B[{i}] -> fa{i}.B;
                fa{i-1}.Cout -> fa{i}.Cin;
                fa{i}.Sum -> Sum[{i}];
            }
            fa16.Cout -> Cout;
        }
    }

https://ift.tt/QVwoZhG January 28, 2026 at 05:36PM
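As a quick sanity check on the FullAdder wiring, here is a plain-Python mirror of the same gate network (this is not PySHDL, just ordinary bit operators), verified against arithmetic over all eight input combinations:

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Mirror of the SHDL FullAdder, gate by gate."""
    x1 = a ^ b        # x1: XOR
    a1 = a & b        # a1: AND
    s = x1 ^ cin      # x2: XOR -> Sum
    a2 = x1 & cin     # a2: AND
    cout = a1 | a2    # o1: OR  -> Cout
    return s, cout

# A full adder is correct iff 2*Cout + Sum == A + B + Cin for every input.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```

The same identity is what makes the ripple-carry Adder16 work: each stage's Cout feeds the next stage's Cin, so the 16 one-bit adders together compute a 16-bit sum.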

Show HN: Record and share your coding sessions with CodeMic https://ift.tt/B794lYf

Show HN: Record and share your coding sessions with CodeMic You can record and share coding sessions directly inside your editor. Think Asciinema, but for full coding sessions with audio, video, and images. While replaying a session, you can pause at any point, explore the code in your own editor, modify it, and even run it. This makes following tutorials and understanding real codebases much more practical than watching a video. Local first, and open source. p.s. I’ve been working on this for a little over two years* and would appreciate any feedback. * Previously: CodeMic: A new way to talk about code - https://ift.tt/RreOKcA - Dec 2024 (58 comments) https://codemic.io/# January 28, 2026 at 07:28PM

Tuesday, January 27, 2026

Show HN: Decrypting the Zodiac Z32 triangulates a 100ft triangular crop mark https://ift.tt/JYbM2ye

Show HN: Decrypting the Zodiac Z32 triangulates a 100ft triangular crop mark https://ift.tt/NOgZrwS January 28, 2026 at 12:42AM

Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) https://ift.tt/dYTwGXu

Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) I built Lightbox because I kept running into the same problem: an agent would fail in production, and I had no way to know what actually happened. Logs were scattered, the LLM’s “I called the tool” wasn’t trustworthy, and re-running wasn’t deterministic. This week, tons of Clawdbot incidents have driven the point home. Agents with full system access can expose API keys and chat histories. Prompt injection is now a major security concern. When agents can touch your filesystem, execute code, and browse the web…you probably need a tamper-proof record of exactly what actions it took, especially when a malicious prompt or compromised webpage could hijack the agent mid-session. Lightbox is a small Python library that records every tool call an agent makes (inputs, outputs, timing) into an append-only log with cryptographic hashes. You can replay runs with mocked responses, diff executions across versions, and verify the integrity of logs after the fact. Think airplane black box, but for your hackbox. *What it does:* - Records tool calls locally (no cloud, your infra) - Tamper-evident logs (hash chain, verifiable) - Replay failures exactly with recorded responses - CLI to inspect, replay, diff, and verify sessions - Framework-agnostic (works with LangChain, Claude, OpenAI, etc.) *What it doesn’t do:* - Doesn’t replay the LLM itself (just tool calls) - Not a dashboard or analytics platform - Not trying to replace LangSmith/Langfuse (different problem) *Use cases I care about:* - Security forensics: agent behaved strangely, was it prompt injection? Check the trace. 
- Compliance: “prove what your agent did last Tuesday” - Debugging: reproduce a failure without re-running expensive API calls - Regression testing: diff tool call patterns across agent versions As agents get more capable and more autonomous (Clawdbot/Molt, Claude computer use, Manus, Devin), I think we’ll need black boxes the same way aviation does. This is my attempt at that primitive. It’s early (v0.1), intentionally minimal, MIT licensed. Site: < https://uselightbox.app > install: `pip install lightbox-rec` GitHub: < https://github.com/mainnebula/Lightbox-Project > Would love feedback, especially from anyone thinking about agent security or running autonomous agents in production. https://ift.tt/GfjCT8k January 27, 2026 at 10:53PM
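The "replay failures without re-running expensive API calls" idea can be sketched in Python. This is a toy model for illustration only; the class and method names are invented and Lightbox's real API differs (see the repo):

```python
class Recorder:
    """Record tool outputs on the live run; replay them deterministically later."""
    def __init__(self, mode: str, tape: list):
        self.mode, self.tape, self.pos = mode, tape, 0

    def call(self, tool, *args):
        if self.mode == "record":
            out = tool(*args)  # real side effect happens only here
            self.tape.append({"tool": tool.__name__, "args": list(args), "out": out})
            return out
        # replay: return the recorded response instead of re-running the tool
        entry = self.tape[self.pos]
        self.pos += 1
        assert entry["tool"] == tool.__name__ and entry["args"] == list(args)
        return entry["out"]

def expensive_api(query: str) -> str:
    """Stand-in for a paid LLM or external tool call."""
    return f"result for {query}"

tape = []
live = Recorder("record", tape)
assert live.call(expensive_api, "q1") == "result for q1"

replay = Recorder("replay", tape)  # no real API calls made on this pass
assert replay.call(expensive_api, "q1") == "result for q1"
```

Because the replay path also checks that the tool name and arguments match the recording, a diff between two runs surfaces exactly where an agent's tool-call pattern diverged.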

Monday, January 26, 2026

Show HN: Ourguide – OS wide task guidance system that shows you where to click https://ift.tt/kuVXLrn

Show HN: Ourguide – OS wide task guidance system that shows you where to click Hey! I'm eshaan and I'm building Ourguide – an on-screen task guidance system that can show you where to click step-by-step when you need help. I started building this because whenever I didn’t know how to do something on my computer, I found myself constantly tabbing between chatbots and the app, pasting screenshots, and asking “what do I do next?” Ourguide solves this with two modes. In Guide mode, the app overlays your screen and highlights the specific element to click next, eliminating the need to leave your current window. There is also Ask mode, which is a vision-integrated chat that captures your screen context—which you can toggle on and off anytime—so you can ask, "How do I fix this error?" without having to explain what "this" is. It’s an Electron app that works OS-wide, is vision-based, and isn't restricted to the browser. Figuring out how to show the user where to click was the hardest part of the process. I originally trained a computer vision model with 2300 screenshots to identify and segment all UI elements on a screen and used a VLM to find the correct icon to highlight. While this worked extremely well—better than SOTA grounding models like UI Tars—the latency was just too high. I'll be making that CV+VLM pipeline OSS soon, but for now, I’ve resorted to a simpler implementation that achieves <1s latency. You may ask: if I can show you where to click, why can't I just click too? While trying to build computer-use agents during my job in Palo Alto, I hit the core limitation of today’s computer-use models where benchmarks hover in the mid-50% range (OSWorld). VLMs often know what to do but not what it looks like; without reliable visual grounding, agents misclick and stall. So, I built computer use—without the "use." It provides the visual grounding of an agent but keeps the human in the loop for the actual execution to prevent misclicks. 
I personally use it for the AWS Console's "treasure hunt" UI, like creating a public S3 bucket with specific CORS rules. It’s also been surprisingly helpful for non-technical tasks, like navigating obscure settings in Gradescope or Spotify. Ourguide really works for any task when you’re stuck or don't know what to do. You can download and test Ourguide here: https://ourguide.ai/downloads The project is still very early, and I’d love your feedback on where it fails, where you think it worked well, and which specific niches you think Ourguide would be most helpful for. https://ourguide.ai January 26, 2026 at 11:49PM

Show HN: TetrisBench – Gemini Flash reaches 66% win rate on Tetris against Opus https://ift.tt/rHKTfWV

Show HN: TetrisBench – Gemini Flash reaches 66% win rate on Tetris against Opus https://ift.tt/zrAGquD January 27, 2026 at 12:12AM

Show HN: Postgres and ClickHouse as a unified data stack https://ift.tt/Hb31MBO

Show HN: Postgres and ClickHouse as a unified data stack Hello HN, this is Sai and Kaushik from ClickHouse. Today we are launching a Postgres managed service that is natively integrated with ClickHouse. It is built together with Ubicloud (YC W24). TL;DR: NVMe-backed Postgres + built-in CDC into ClickHouse + pg_clickhouse so you can keep your app Postgres-first while running analytics in ClickHouse. Try it (private preview): https://ift.tt/8is5qhY Blog w/ live demo: https://ift.tt/Wotf4Cy Problem At many fast-growing companies using Postgres, performance and scalability commonly emerge as challenges. This holds for both transactional and analytical workloads. On the OLTP side, common issues include slower ingestion (especially updates and upserts), slower vacuums, and long-running transactions incurring WAL spikes, among others. In most cases, these problems stem from limited disk IOPS and suboptimal disk latency. Without the need to provision or cap IOPS, Postgres could do far more than it does today. On the analytics side, many limitations stem from the fact that Postgres was designed primarily for OLTP and lacks several features that analytical databases have developed over time, for example vectorized execution, support for a wide variety of ingest formats, etc. We’re increasingly seeing a common pattern where many companies like GitLab, Ramp, Cloudflare etc. complement Postgres with ClickHouse to offload analytics. This architecture enables teams to adopt two purpose-built open-source databases. That said, if you’re running a Postgres-based application, adopting ClickHouse isn’t straightforward. You typically end up building a CDC pipeline, handling backfills, dealing with schema changes, and updating your application code to be aware of a second database for analytics. Solution On the OLTP side, we believe that NVMe-based Postgres is the right fit and can drastically improve performance. 
NVMe storage is physically colocated with compute, enabling significantly lower disk latency and higher IOPS than network-attached storage, which requires a network round trip for disk access. This benefits disk-throttled workloads and can significantly (up to 10x) speed up operations incl. updates, upserts, vacuums, checkpointing, etc. We are working on a detailed blog examining how WAL fsyncs, buffer reads, and checkpoints dominate on slow I/O and are significantly reduced on NVMe. Stay tuned! On the OLAP side, the Postgres service includes native CDC to ClickHouse and unified query capabilities through pg_clickhouse. Today, CDC is powered by ClickPipes/PeerDB under the hood, which is based on logical replication. We are working to make this faster and easier by supporting logical replication v2 for streaming in-progress transactions, a new logical decoding plugin to address existing limitations of logical replication, working toward sub-second replication, and more. Every Postgres comes packaged with the pg_clickhouse extension, which reduces the effort required to add ClickHouse-powered analytics to a Postgres application. It allows you to query ClickHouse directly from Postgres, enabling Postgres for both transactions and analytics. pg_clickhouse supports comprehensive query pushdown for analytics, and we plan to continuously expand this further ( https://ift.tt/7Vu0xZg ). Vision To sum it up - Our vision is to provide a unified data stack that combines Postgres for transactions with ClickHouse for analytics, giving you best-in-class performance and scalability on an open-source foundation. Get Started We are actively working with users to onboard them to the Postgres service. Since this is a private preview, it is currently free of cost. If you’re interested, please sign up here. https://ift.tt/8is5qhY We’d love to hear your feedback on our thesis and anything else that comes to mind, it would be super helpful to us as we build this out! 
January 22, 2026 at 11:51PM

Sunday, January 25, 2026

Show HN: I used my book generator to generate a catalog of books it can generate https://ift.tt/nRyMZUu

Show HN: I used my book generator to generate a catalog of books it can generate https://ift.tt/4QD5TcS January 26, 2026 at 12:56AM

Show HN: Uv-pack – Pack a uv environment for later portable (offline) install https://ift.tt/1VyFArB

Show HN: Uv-pack – Pack a uv environment for later portable (offline) install I kept running into the same problem: modern Python tooling is great, but deployments to air-gapped systems are a pain. Even with uv, moving a fully locked environment onto a network-isolated machine was no fun. uv-pack should make this task less frustrating. It bundles a locked uv environment into a single directory that installs fully offline—dependencies, local packages, and optionally a portable Python interpreter. Copy it over, run one script, and you get the exact same environment every time. Just released, would love some feedback! https://ift.tt/UDlwpyf January 26, 2026 at 12:26AM

Show HN: I Created a Tool to Convert YouTube Videos into 2000 Word SEO Blog https://ift.tt/TPEdG3X

Show HN: I Created a Tool to Convert YouTube Videos into 2000 Word SEO Blog https://landkit.pro/youtube-to-blog January 25, 2026 at 11:16PM

Saturday, January 24, 2026

Show HN: Remote workers find your crew https://ift.tt/pEFtCnO

Show HN: Remote workers find your crew Working from home? Are you a remote employee who "misses" going to the office? Well, let's be clear on what you actually miss. No one misses the feeling of having to go and be there 8 hours. But many people miss friends. They miss being part of a crew. Going to lunch, hearing about other people's lives in person, not over Zoom. Join a co-working space, you say? Yes. We have. It's like walking into a library and trying to talk to random people and getting nothing back. Zero part-of-a-crew feeling. https://ift.tt/2gaPZQy This app helps you find a crew and meet up for work and get that crew feeling. This is my first time using Cloudflare Workers for a webapp. The free plan is amazing! You get so much compared to anything else out there in terms of limits. The SQLite database they give you is just fine; I don't miss psql. January 24, 2026 at 11:54PM

Show HN: StormWatch – Weather emergency dashboard with prep checklists https://ift.tt/fVhtQkb

Show HN: StormWatch – Weather emergency dashboard with prep checklists Basically, I was getting annoyed jumping between 5 different sites during this winter storm season, so I built "StormWatch". It's a no-fuss, mobile-friendly webpage (dashboard) that shows all the stuff I was looking for, but in one simple UI. Features: - Real-time NWS alerts with safety tips - Snow/ice/precip accumulation forecasts (+wind) - Dynamic preparation checklists based on your alerts - Supply calculator for your household size - Regional weather news It's free, no login required, works on any device. Just enter your ZIP. https://jeisey.github.io/stormwatch/ Uses NWS and GDELT APIs and is open source. Feel free to fork and modify however you'd like. For builders: - Used an API-testing agent to verify all endpoints, response patterns, types, and rate limits - Used a scope & validation agent to keep the slices simple, focused, and tested - VS Code Copilot (Sonnet 4 for dev agents + Opus 4.5 for scope and validation) https://jeisey.github.io/stormwatch/ January 25, 2026 at 01:10AM

Friday, January 23, 2026

Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review https://ift.tt/dCRgNaW

Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review https://gist.github.com/juanpabloaj/59bc13fbed8a0f8e87791a3fb0360c19 January 24, 2026 at 12:03AM

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server https://ift.tt/4g6xarO

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server I started to use AI agents for coding and quickly ran into a frustrating limitation – there is no easy way to share my development environment logs with AI agents. That's what Teemux is for: a simple CLI program that aggregates logs, makes them available to you as a developer (in a pretty UI), and makes them available to your AI coding agents using MCP. There is one implementation detail that I geek out about: it is zero-config and has built-in leader nomination for running the web server and MCP server. When you start one `teemux` instance, it starts the web server; when you start second and third instances, they join the first and start merging logs. If you were to kill the first instance, a new leader is nominated. This design allows you to seamlessly add and remove nodes that share logs (something that historically would have required a central log aggregator). A super quick demo: npx teemux -- curl -N https://ift.tt/5HZA2Ml https://teemux.com/ January 23, 2026 at 09:19PM
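Teemux's actual nomination mechanism isn't described in the post; one common zero-config approach is "whoever can bind the shared port is the leader", which a Python sketch can illustrate (the port number and function are my own assumptions, not Teemux internals):

```python
import socket

PORT = 43117  # hypothetical fixed port shared by all instances

def try_become_leader(port: int):
    """First instance to bind the port is the leader; others become followers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        sock.bind(("127.0.0.1", port))
        sock.listen()
        return sock   # we hold the port: run the web/MCP server here
    except OSError:
        sock.close()
        return None   # port already taken: connect to the leader and stream logs

leader = try_become_leader(PORT)
assert leader is not None            # first instance wins the election

follower = try_become_leader(PORT)
assert follower is None              # second instance joins instead

leader.close()                       # leader dies...
promoted = try_become_leader(PORT)   # ...and a survivor re-binds as the new leader
assert promoted is not None
promoted.close()
```

The OS guarantees at most one listener per port, so no coordination service is needed: failover is just "retry the bind when your connection to the leader drops".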

Thursday, January 22, 2026

Show HN: Synesthesia, make noise music with a colorpicker https://ift.tt/OVva103

Show HN: Synesthesia, make noise music with a colorpicker This is a (silly, little) app which lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked-on color's hex-code to a "chord" in the 24 tone-equal-temperament scale. That chord is then played back using a throttled audio generation method which was implemented via Tone.js. NOTE! Turn the volume way down before using the site. It is noise music. :) https://visualnoise.ca January 22, 2026 at 11:22AM
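The post doesn't spell out the exact bits-to-chord mapping, so here is one plausible Python sketch of the idea: fold each RGB channel of the hex code onto one of the 24 steps of the 24-TET octave, then convert steps to frequencies with f = base * 2^(step/24). The per-channel folding and the 440 Hz base are my assumptions, not the site's actual algorithm.

```python
def color_to_chord(hex_code: str, base_freq: float = 440.0) -> list[float]:
    """Map a 24-bit hex color to three 24-TET pitches (one per RGB channel)."""
    rgb = [int(hex_code.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4)]
    # Fold each 0-255 channel onto one of the 24 steps in the octave,
    # then convert the step to a frequency: f = base * 2**(step / 24).
    return [round(base_freq * 2 ** ((ch % 24) / 24), 2) for ch in rgb]

chord = color_to_chord("#ff8000")
assert len(chord) == 3
# 0x00 % 24 == 0, so the blue channel lands on the base pitch:
assert chord[2] == 440.0
```

In the real app, a chord like this would be handed to Tone.js oscillators for (throttled) playback; in 24-TET, adjacent steps are quarter tones, which is where the "noise music" character comes from.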

Show HN: I've been using AI to analyze every supplement on the market https://ift.tt/aoNMPgr

Show HN: I've been using AI to analyze every supplement on the market Hey HN! This has been my project for a few years now. I recently brought it back to life after taking a pause to focus on my studies. My goal with this project is to separate fluff from science when shopping for supplements. I am doing this in 3 steps: 1.) I index every supplement on the market (extract each ingredient, normalize by quantity) 2.) I index every research paper on supplementation (rank every claim by effect type and effect size) 3.) I link data between supplements and research papers Earlier last year, I put the project on pause because I ran into a few issues: Legal: Shady companies send C&D letters demanding their products be taken down from the website. It is not something I had the mental capacity to respond to while also going through my studies. Not coincidentally, these are usually brands with big marketing budgets and a poor ingredient-to-price ratio. Technical: I started this project when the first LLMs came out. I've built extensive internal evals to understand how LLMs are performing. The hallucinations at the time were simply too frequent to pass this data through to visitors. However, I recently re-ran my evals with Opus 4.5 and was very impressed. I am running out of scenarios I can think of or find where LLMs are bad at interpreting data. Business: I still haven't figured out how to monetize it or even who the target customer is. Despite these challenges, I decided to restart my journey. My mission is to bring transparency (science and price) to the supplement market. My goal is NOT to increase the use of supplements, but rather to help consumers make informed decisions. Oftentimes supplementation is not necessary, or there are natural ways to supplement (that's my focus this quarter – better education about natural supplementation). 
Some things that are helping my cause – Bryan Johnson's journey has drawn a lot more attention to healthy supplementation (blueprint). Thanks to Bryan's efforts, I had so many people in recent months reach out to ask about the state of the project – interest I've not had before. I am excited to restart this journey and to share it with HN. Your comments on how to approach this would be massively appreciated. Some key areas of the website: * Example of navigating supplements by ingredient https://ift.tt/hYEO4pA * Example of research paper analyzed using AI https://ift.tt/wsArZDi... * Example of looking for very specific strains or ingredients https://ift.tt/seEmPR0 * Example of navigating research by health-outcomes https://ift.tt/khf9BFv... * Example of product listing https://ift.tt/GgWa3fn https://pillser.com/ January 22, 2026 at 07:39PM

Wednesday, January 21, 2026

Show HN: I built a chess explorer that explains strategy instead of just stats https://ift.tt/2GSwg3a

Show HN: I built a chess explorer that explains strategy instead of just stats I built this because I got tired of Stockfish giving me evaluations (+0.5) without explaining the actual plan. Most opening explorers focus on statistics (Win/Loss/Draw). I wanted a tool that explains the strategic intent behind the moves (e.g., "White plays c4 to clamp down on d5" vs just "White plays c4"). The Project: Comprehensive Database: I’ve mapped and annotated over 3,500 named opening variations. It covers everything from main lines (Ruy Lopez, Sicilian) to deep sidelines. Strategic Visualization: The UI highlights key squares and draws arrows based on the textual explanation, linking the logic to the board state dynamically. Hybrid Architecture: For the 3,500+ core lines, it serves my proprietary strategic data. For anything deeper/rarer, it seamlessly falls back to the Lichess Master API so the explorer remains functional 20 moves deep. Stack: Next.js (App Router), MongoDB Atlas for the graph data, and Arcjet for security/rate-limiting. It is currently in Beta. I am working on expanding the annotated coverage, but the main theoretical landscape is mapped. Feedback on the UI/UX or the data structure is welcome. https://ift.tt/qSE42UN January 21, 2026 at 09:26PM

Tuesday, January 20, 2026

Show HN: Xv6OS – A modified MIT xv6 with GUI https://ift.tt/qL0EM4e

Show HN: Xv6OS – A modified MIT xv6 with GUI I've been working on a hobby project to transform the traditional xv6 teaching OS into a graphical environment. Key Technical Features: GUI Subsystem: I implemented a kernel-level window manager and drawing primitives. Mouse Support: Integrated a PS/2 mouse driver for navigation. Custom Toolchain: I used Python scripts (Pillow) and Go to convert PNG assets and TTF fonts into C arrays for the kernel. Userland: Includes a terminal, file explorer, text editor, and a Floppy Bird game. The project is built for i386 using a monolithic kernel design. You can find the full source code and build instructions here: https://ift.tt/GoNVA3C January 20, 2026 at 10:46PM

Show HN: Trinity – a native macOS Neovim app with Finder-style projects https://ift.tt/C0uoF7q

Show HN: Trinity – a native macOS Neovim app with Finder-style projects Hi HN, I built Trinity, a native macOS app that wraps Neovim with a project-centric UI. The goal was to keep Neovim itself untouched, but provide a more Mac-native workflow: – Finder-style project browser – Multiple projects/windows – Markdown preview, image/pdf viewer – Native menus, shortcuts, and windowing – Minimal UI, no GPU effects or terminal emulation It’s distributed directly (signed + notarized PKG) and uses Sparkle for incremental updates. This started as a personal tool after bouncing between terminal Neovim and heavier editors. Curious to hear feedback from other Neovim users, especially on what feels right or wrong in a GUI wrapper. Site: https://ift.tt/jZzQWw6 Direct download: https://ift.tt/IRPUSxN... https://ift.tt/jZzQWw6 January 20, 2026 at 11:14PM

Monday, January 19, 2026

Show HN: An interactive physics simulator with 1000's of balls, in your terminal https://ift.tt/DS98K6l

Show HN: An interactive physics simulator with 1000's of balls, in your terminal https://ift.tt/cIOHZGV January 19, 2026 at 11:17PM

Show HN: Subth.ink – write something and see how many others wrote the same https://ift.tt/7RVDmTK

Show HN: Subth.ink – write something and see how many others wrote the same Hey HN, this is a small Haskell learning project that I wanted to share. It's just a website where you can see how many people write the exact same text as you (I thought it was a fun idea). It's built using Scotty, SQLite, Redis and Caddy. Currently it's running on a small DigitalOcean droplet (1 GB of RAM). Using Haskell for web development (specifically with Scotty) was slightly easier than I thought, but still a relatively hard task compared to other languages. One of my main friction points was Haskell's multiple string-like types: String, Text (& lazy), ByteString (& lazy), with each library choosing to consume a different one amongst these. There is also a soft requirement to learn monad transformers (e.g. to understand what liftIO is doing), which made the initial development more difficult. https://subth.ink/ January 20, 2026 at 12:04AM
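The core mechanic — counting how many identical texts have been submitted — fits in a few lines. A stdlib Python sketch with a hypothetical schema (the real site is Haskell/Scotty with Redis and SQLite):

```python
import hashlib
import sqlite3

# Hypothetical schema; the real site's storage layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE submissions (hash TEXT)")
conn.execute("CREATE INDEX idx_hash ON submissions (hash)")

def submit(text):
    """Record a submission and return how many identical texts exist so far."""
    # Hash the exact bytes: "exact same text" means no normalization.
    h = hashlib.sha256(text.encode("utf-8")).hexdigest()
    conn.execute("INSERT INTO submissions (hash) VALUES (?)", (h,))
    return conn.execute(
        "SELECT COUNT(*) FROM submissions WHERE hash = ?", (h,)
    ).fetchone()[0]
```

Hashing keeps the index small and constant-width regardless of how long the submitted text is.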

Sunday, January 18, 2026

Show HN: Xenia – A monospaced font built with a custom Python engine https://ift.tt/DqliAI4

Show HN: Xenia – A monospaced font built with a custom Python engine I'm an engineer who spent the last year fixing everything I hated about monofonts (especially that double-story 'a'). I built a custom Python-based procedural engine to generate the weights because I wanted more logical control over the geometry. It currently has 700+ glyphs and deep math support. Regular weight is free for the community. I'm releasing more weights based on interest. https://ift.tt/h1xdqKT January 18, 2026 at 04:09PM

Saturday, January 17, 2026

Show HN: Docker.how – Docker command cheat sheet https://ift.tt/7W1gRyX

Show HN: Docker.how – Docker command cheat sheet https://docker.how/ January 18, 2026 at 01:47AM

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) https://ift.tt/fbWst50

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) Hi HN, I’m releasing minikv, a distributed key-value and object store in Rust.
What is minikv? minikv is an open-source, distributed storage engine built for learning, experimentation, and self-hosted setups. It combines a strongly-consistent key-value database (Raft), S3-compatible object storage, and basic multi-tenancy. I started minikv as a learning project about distributed systems, and it grew into something production-ready and fun to extend.
Features/highlights:
- Raft consensus with automatic failover and sharding
- S3-compatible HTTP API (plus REST/gRPC APIs)
- Pluggable storage backends: in-memory, RocksDB, Sled
- Multi-tenant: per-tenant namespaces, role-based access, quotas, and audit
- Metrics (Prometheus), TLS, JWT-based API keys
- Easy to deploy (single binary, works with Docker/Kubernetes)
Quick demo (single node):
git clone https://ift.tt/YSLoA0R
cd minikv
cargo run --release -- --config config.example.toml
curl localhost:8080/health/ready
# S3 upload + read
curl -X PUT localhost:8080/s3/mybucket/hello -d "hi HN"
curl localhost:8080/s3/mybucket/hello
Docs, cluster setup, and architecture details are in the repo. I’d love to hear feedback, questions, ideas, or your stories running distributed infra in Rust! Repo: https://ift.tt/hv5tk80 Crate: https://ift.tt/dYrkCDh https://ift.tt/hv5tk80 January 18, 2026 at 01:09AM

Friday, January 16, 2026

Show HN: 1Code – Open-source Cursor-like UI for Claude Code https://ift.tt/hLzNWrH

Show HN: 1Code – Open-source Cursor-like UI for Claude Code Hi, we're Sergey and Serafim. We've been building dev tools at 21st.dev and recently open-sourced 1Code ( https://1code.dev ), a local UI for Claude Code. Here's a video of the product: https://www.youtube.com/watch?v=Sgk9Z-nAjC0 Claude Code has been our go-to for 4 months. When Opus 4.5 dropped, parallel agents stopped needing so much babysitting. We started trusting it with more: building features end to end, adding tests, refactors. Stuff you'd normally hand off to a developer. We started running 3-4 at once. Then the CLI became annoying: too many terminals, hard to track what's where, diffs scattered everywhere. So we built 1Code.dev, an app to run your Claude Code agents in parallel that works on Mac and Web. On Mac: run locally, with or without worktrees. On Web: run in remote sandboxes with live previews of your app, mobile included, so you can check on agents from anywhere. Running multiple Claude Codes in parallel dramatically sped up how we build features. What’s next: Bug bot for identifying issues based on your changes; QA Agent, that checks that new features don't break anything; Adding OpenCode, Codex, other models and coding agents. API for starting Claude Codes in remote sandboxes. Try it out! We're open-source, so you can just bun build it. If you want something hosted, Pro ($20/mo) gives you web with live browser previews hosted on remote sandboxes. We’re also working on API access for running Claude Code sessions programmatically. We'd love to hear your feedback! https://ift.tt/EVT8rXO January 16, 2026 at 12:50AM

Thursday, January 15, 2026

Show HN: I built an 11MB offline PDF editor because mobile Acrobat is 500MB https://ift.tt/Y8ac3um

Show HN: I built an 11MB offline PDF editor because mobile Acrobat is 500MB https://revpdf.com/ January 16, 2026 at 12:30AM

Show HN: OpenWork – an open-source alternative to Claude Cowork https://ift.tt/pzFq1hR

Show HN: OpenWork – an open-source alternative to Claude Cowork hi hn, i built openwork, an open-source, local-first system inspired by claude cowork. it’s a native desktop app that runs on top of opencode (opencode.ai). it’s basically an alternative gui for opencode, which (at least until now) has been more focused on technical folks. the original seed for openwork was simple: i have a home server, and i wanted my wife and i to be able to run privileged workflows. things like controlling home assistant, deploying custom web apps (e.g. our custom recipe app recipes.benjaminshafii.com), or grabbing legal torrents, without living in a terminal. our initial setup was running the opencode web server directly and sharing credentials to it. that worked, but i found the web ui unreliable and very unfriendly for non-technical users. the goal with openwork is to bring the kind of workflows i’m used to running in the cli into a gui, while keeping a very deep extensibility mindset. ideally this grows into something closer to an obsidian-style ecosystem, but for agentic work. some core principles i had in mind:
- open by design: no black boxes, no hosted lock-in. everything runs locally or on your own servers. (models don’t run locally yet, but both opencode and openwork are built with that future in mind.)
- hyper extensible: skills are installable modules via a skill/package manager, using the native opencode plugin ecosystem.
- non-technical by default: plans, progress, permissions, and artifacts are surfaced in the ui, not buried in logs.
you can already try it:
- there’s an unsigned dmg
- or you can clone the repo, install deps, and if you already have opencode running it should work right away
it’s very alpha, lots of rough edges. i’d love feedback on what feels the roughest or most confusing. happy to answer questions. https://ift.tt/dcQACmL January 14, 2026 at 10:25AM

Wednesday, January 14, 2026

Show HN: Webctl – Browser automation for agents based on CLI instead of MCP https://ift.tt/E12jvsf

Show HN: Webctl – Browser automation for agents based on CLI instead of MCP https://ift.tt/IEQLC0T January 14, 2026 at 08:04PM

Show HN: Repomance: A Tinder style app for GitHub repo discovery https://ift.tt/rbY4gfW

Show HN: Repomance: A Tinder style app for GitHub repo discovery Hi everyone, Repomance is an app for discovering curated and trending repositories. Swipe to star them directly using your GitHub account. It is currently available on iOS, iPadOS, and macOS. I plan to develop an Android version once the app reaches 100 users. Repomance is open source: https://ift.tt/kzgtXh2 All feedback is welcome, hope you enjoy using it. https://ift.tt/JCWojD1 January 15, 2026 at 12:24AM

Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR https://ift.tt/g5zABqT

Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR For the past year I've been working to rethink how AI manages timing in conversation at Tavus, and I've spent a lot of time listening to conversations. Today we're announcing the release of Sparrow-1, the most advanced conversational flow model in the world. Some technical details:
- Predicts conversational floor ownership, not speech endpoints
- Audio-native streaming model, no ASR dependency
- Human-timed responses without silence-based delays
- Zero interruptions at sub-100ms median latency
- In benchmarks, Sparrow-1 beats all existing models on real-world turn-taking baselines
I wrote more about the work here: https://ift.tt/ZhOpQyj... https://ift.tt/pnk4XHS January 14, 2026 at 11:31PM

Tuesday, January 13, 2026

Show HN: Timberlogs – Drop-in structured logging for TypeScript https://ift.tt/9BJDaFm

Show HN: Timberlogs – Drop-in structured logging for TypeScript Hi HN! I built Timberlogs because I was tired of console.log in production and existing logging solutions requiring too much setup. Timberlogs is a drop-in structured logging library for TypeScript:
npm install timberlogs-client

import { createTimberlogs } from "timberlogs-client";

const timber = createTimberlogs({
  source: "my-app",
  environment: "production",
  apiKey: process.env.TIMBER_API_KEY,
});

timber.info("User signed in", { userId: "123" });
timber.error("Payment failed", error);

Features:
- Auto-batching with retries
- Automatic redaction of sensitive data (passwords, tokens)
- Full-text search across all your logs
- Real-time dashboard
- Flow tracking to link related logs
It's currently in beta and free to use. Would love feedback from the HN community. Site: https://timberlogs.dev Docs: https://ift.tt/Hbuk7vF npm: https://ift.tt/BfDVA2G GitHub: https://ift.tt/bCTKJS0 January 14, 2026 at 12:13AM

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever https://ift.tt/ZYhQlAg

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.
The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.
What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
Self-hosting options:
- USB drive / local folder (just open the HTML files)
- Home server on your LAN
- Tor hidden service (2 commands, no port forwarding needed)
- VPS with HTTPS
- GitHub Pages for small archives
Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.
Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify" – it accelerates the boring parts but you still own the architecture.
Live demo: https://online-archives.github.io/redd-archiver-example/ GitHub: https://ift.tt/TRpwCj4 (Public Domain) Pushshift torrent: https://ift.tt/bnPmsBf... https://ift.tt/TRpwCj4 January 13, 2026 at 09:05PM
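The dump-to-static-HTML step can be sketched with the stdlib alone. The project itself uses Jinja2 templates; the page template and field names below are hypothetical, just showing the shape of the idea:

```python
import html
import json

# Hypothetical template; the real project uses Jinja2 with richer markup.
PAGE = "<html><body><h1>r/{sub}</h1><ul>{items}</ul></body></html>"

def render_subreddit(sub, ndjson_lines):
    """Turn newline-delimited JSON post records into one static HTML page."""
    items = []
    for line in ndjson_lines:
        post = json.loads(line)
        # Escape user content so titles can't inject markup into the archive.
        items.append("<li>{} ({} points)</li>".format(
            html.escape(post["title"]), post["score"]))
    return PAGE.format(sub=html.escape(sub), items="".join(items))
```

In the real pipeline the input lines would come from streaming zstd decompression of the Pushshift dumps rather than an in-memory list, but the render step is the same: pure text in, static HTML out, no server required to browse the result.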

Monday, January 12, 2026

Show HN: Agent-of-empires: OpenCode and Claude Code session manager https://ift.tt/7ZAs5gR

Show HN: Agent-of-empires: OpenCode and Claude Code session manager Hi! I’m Nathan, an ML Engineer at Mozilla.ai. I built agent-of-empires (aoe): a CLI application to help you manage all of your running Claude Code/Opencode sessions and know when they are waiting for you.
- Written in Rust and relies on tmux for security and reliability
- Monitors the state of CLI sessions to tell you when an agent is running vs idle vs waiting for your input
- Manage sessions by naming them, grouping them, and configuring profiles for various settings
I'm passionate about getting self-hosted open-weight LLMs to be valid options to compete with proprietary closed models. One roadblock for me is that although tools like opencode allow you to connect to local LLMs (Ollama, LM Studio, etc), they generally run muuuuuch slower than models hosted by Anthropic and OpenAI. I would start a coding agent on a task, but then while I was sitting waiting for that task to complete, I would start opening new terminal windows to start multitasking. Pretty soon, I was spending a lot of time toggling between terminal windows to see which one needed me: help adding a clarification, approving a new command, or giving it a new task. That’s why I built agent-of-empires (“aoe”). With aoe, I can launch a bunch of opencode and Claude Code sessions and quickly see their status or toggle between them, which helps me avoid having a lot of terminal windows open, or having to manually attach and detach from tmux sessions myself. It’s helping me give local LLMs a fair try, because them being slower is now much less of a bottleneck. You can install it with:
curl -fsSL https://ift.tt/hZGvxU5... | bash
or
brew install njbrake/aoe/aoe
and then launch by simply entering the command `aoe`. I’m interested in what you think as well as what features you think would be useful to add!
I am planning to add some further features around sandboxing (with docker) as well as support for intuitive git worktrees and am curious if there are any opinions about what should or shouldn’t be in it. I decided against MCP management or generic terminal usage, to help keep the tool focused on parts of agentic coding that I haven’t found a usable solution for. I hit the character limit on this post which prevented me from including a view of the output, but the readme on the github link has a screenshot showing what it looks like. Thanks! https://ift.tt/jHGpJyR January 12, 2026 at 07:53PM

Show HN: AI video generator that outputs React instead of video files https://ift.tt/8m9KDAk

Show HN: AI video generator that outputs React instead of video files Hey HN! This is Mayank from Outscal with a new update. Our website is now live. Quick context: we built a tool that generates animated videos from text scripts. The twist: instead of rendering pixels, it outputs React/TSX components that render as the video. Try it: https://ai.outscal.com/ Sample video: https://ift.tt/eqFEz6o... You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → Scene Design → React components → deployed video. What we learned building this: We built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script — they had file tools and would wander off reading random files, exploring tangents, producing inconsistent output. The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent to only what it needed and pre-fed context instead of letting agents fetch it themselves. Quality improved immediately. We wouldn't launch the web version until this was solid. Moved to Claude Agent SDK, kept the same constraints, now fully automated. Happy to discuss the agent architecture, why React-as-video, or anything else. https://ai.outscal.com/ January 13, 2026 at 12:33AM

Show HN: Yolobox – Run AI coding agents with full sudo without nuking home dir https://ift.tt/B32LvlH

Show HN: Yolobox – Run AI coding agents with full sudo without nuking home dir https://ift.tt/NQwZRkc January 13, 2026 at 12:04AM

Sunday, January 11, 2026

Saturday, January 10, 2026

Show HN: Play poker with LLMs, or watch them play against each other https://ift.tt/fXeN7t2

Show HN: Play poker with LLMs, or watch them play against each other I was curious to see how some of the latest models behave when playing no-limit Texas hold'em. I built this website, which allows you to: Spectate: Watch different models play against each other. Play: Create your own table and play hands against the agents directly. https://llmholdem.com/ January 11, 2026 at 12:57AM

Show HN: Marten – Elegant Go web framework (nothing in the way) https://ift.tt/G7JAtyQ

Show HN: Marten – Elegant Go web framework (nothing in the way) https://ift.tt/SZAl2Y4 January 11, 2026 at 02:40AM

Show HN: I used Claude Code to discover connections between 100 books https://ift.tt/CX09Fwx

Show HN: I used Claude Code to discover connections between 100 books I think LLMs are overused to summarise and underused to help us read deeper. I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them. I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising. On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison. One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans ( https://ift.tt/SKwVCv5 ). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset. Details:
* The books are picked from HN’s favourites (which I collected before: https://ift.tt/LAQJV6I ).
* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.
* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.
* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window.
* Everything is stored in SQLite and manipulated using a set of CLI tools.
I wrote more about the process here: https://ift.tt/EVF8f3D I’m curious if this way of reading resonates for anyone else - LLM-mediated or not. https://ift.tt/eqZURGf January 10, 2026 at 10:26PM
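Embedding similarity, one of the browsing modes mentioned, reduces to nearest-neighbour search over chunk vectors. A stdlib sketch of the idea, not the project's code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_chunks(query_vec, chunks, k=3):
    """chunks: list of (chunk_id, embedding). Returns the k most similar ids."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [cid for cid, _ in ranked[:k]]
```

At 100-book scale a brute-force scan over SQLite-stored vectors like this is fast enough that no approximate-nearest-neighbour index is needed.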

Friday, January 9, 2026

Show HN: Various shape regularization algorithms https://ift.tt/VcSCedq

Show HN: Various shape regularization algorithms Shape regularization is a technique used in computational geometry to clean up noisy or imprecise geometric data by aligning segments to common orientations and adjusting their positions to create cleaner, more regular shapes. I needed a Python implementation so started with the examples implemented in CGAL then added a couple more for snap and joint regularization and metric regularization. https://ift.tt/vfSn7iG January 9, 2026 at 07:43AM
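The orientation-alignment idea can be shown in miniature: snap each segment's angle to the nearest allowed orientation and rebuild the segment around its midpoint. A toy Python sketch, far simpler than the CGAL-derived implementations in the project:

```python
import math

def snap_segment(p1, p2, step_deg=45.0):
    """Rotate a segment about its midpoint so its angle is a multiple of step_deg."""
    (x1, y1), (x2, y2) = p1, p2
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2        # midpoint stays fixed
    half = math.hypot(x2 - x1, y2 - y1) / 2      # length is preserved
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    snapped = math.radians(round(angle / step_deg) * step_deg)
    dx, dy = half * math.cos(snapped), half * math.sin(snapped)
    return (mx - dx, my - dy), (mx + dx, my + dy)
```

Real regularization (as in CGAL) solves for all segments jointly so that near-parallel segments pick a shared orientation, rather than snapping each one independently like this toy does.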

Thursday, January 8, 2026

Show HN: Turn your PRs into marketing updates https://ift.tt/Enw6iTH

Show HN: Turn your PRs into marketing updates https://personabox.app January 9, 2026 at 01:05AM

Show HN: Catnip – Run Claude Code from Your iPhone Using GitHub Codespaces https://ift.tt/fRqnpPw

Show HN: Catnip – Run Claude Code from Your iPhone Using GitHub Codespaces Hi HN — I built Catnip, an open-source iOS app that lets you run Claude Code against a real development environment from your phone. Under the hood it spins up a GitHub Codespace, installs Claude Code, and connects the iOS client to it securely. You can use a full terminal when needed, or a lightweight native UI for monitoring and interaction. I built this because Claude Code is most useful when it has access to a persistent environment with plugins, tools, and real repos — and I wanted that flexibility away from my laptop. GitHub gives personal users 120 free Codespaces hours/month, and Catnip automatically shuts down inactive instances. Open source: https://ift.tt/8dvCqup App Store: https://ift.tt/K0Wtau1 Happy to answer questions or hear feedback. https://ift.tt/8dvCqup January 8, 2026 at 11:25PM

Wednesday, January 7, 2026

Show HN: I visualized the entire history of Citi Bike in the browser https://ift.tt/N6zIyA2

Show HN: I visualized the entire history of Citi Bike in the browser Each moving arrow represents one real bike ride out of 291 million, and if you've ever taken a Citi Bike before, you are included in this massive visualization! You can search for your ride using Cmd + K and your Citi Bike receipt, which should give you the time of your ride and start/end station. Everything is open source: https://ift.tt/IjV89Lr Some technical details:
- No backend! Processed data is stored in parquet files on a Cloudflare CDN, and queried directly by DuckDB WASM
- deck.gl w/ Mapbox for GPU-accelerated rendering of thousands of concurrent animated bikes
- Web Workers decode polyline routes and do as much precomputation as possible off the main thread
- Since only (start, end) station pairs are provided, routes are generated by querying OSRM for the shortest path between all 2,400+ station pairs
https://bikemap.nyc/ January 8, 2026 at 12:27AM
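The polyline decoding done in the Web Workers presumably follows the standard encoded-polyline scheme, which OSRM emits by default. For reference, the decoder looks like this in Python (the site does the equivalent in JS off the main thread):

```python
def decode_polyline(encoded, precision=5):
    """Decode a standard encoded polyline into a list of (lat, lng) tuples."""
    coords, index, lat, lng = [], 0, 0, 0
    factor = 10 ** precision
    while index < len(encoded):
        for coord in range(2):  # latitude delta, then longitude delta
            result, shift = 0, 0
            while True:
                b = ord(encoded[index]) - 63
                index += 1
                result |= (b & 0x1F) << shift
                shift += 5
                if b < 0x20:  # high bit clear: last 5-bit chunk
                    break
            # Low bit is the sign; values are deltas from the previous point.
            delta = ~(result >> 1) if result & 1 else result >> 1
            if coord == 0:
                lat += delta
            else:
                lng += delta
        coords.append((lat / factor, lng / factor))
    return coords
```

Because points are delta-encoded, a decoded route must be accumulated in order, which is one reason batching the work into a worker pays off for thousands of routes.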

Show HN: Free and local browser tool for designing gear models for 3D printing https://ift.tt/ObaQKyJ

Show HN: Free and local browser tool for designing gear models for 3D printing Just built a local tool for designing gears that kinda looks and works nicely. https://ift.tt/aBcNYGX January 7, 2026 at 02:12PM

Tuesday, January 6, 2026

Show HN: Dimensions – Terminal Tab Manager https://ift.tt/LcKP4mg

Show HN: Dimensions – Terminal Tab Manager A terminal TUI that leverages tmux to make managing terminal tabs easier and more user-friendly. https://ift.tt/fi2x4GB January 6, 2026 at 10:18PM

Monday, January 5, 2026

Show HN: WOLS – Open standard for mushroom cultivation tracking https://ift.tt/GeBSyWK

Show HN: WOLS – Open standard for mushroom cultivation tracking I built an open labeling standard for tracking mushroom specimens through their lifecycle (from spore/culture to harvest). v1.1 adds clonal generation tracking (distinct from filial/strain generations) and conforms to JSON-LD for interoperability with agricultural/scientific data systems.
Spec (CC 4.0): https://ift.tt/AO6hJp9
Client libraries (Apache 2.0):
Python + CLI: pip install wols (also on GHCR)
TypeScript/JS: npm install @wemush/wols
Background: Mycology has fragmented data practices (misidentified species, inconsistent cultivation logs, no shared vocabulary for tracking genetics across generations). This is an attempt to fix that. Looking for feedback from anyone working with biological specimen tracking, agricultural data systems, or mycology. https://ift.tt/41oLCbD January 6, 2026 at 12:00AM
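As a loose illustration of what a JSON-LD specimen record with separate clonal and filial generation counters might look like. Every field name and the context URL below are hypothetical; consult the actual WOLS spec for the real vocabulary:

```python
import json

# Hypothetical record; the real WOLS vocabulary and context live in the spec.
specimen = {
    "@context": "https://example.org/wols/v1.1",  # placeholder context URL
    "@type": "Specimen",
    "species": "Pleurotus ostreatus",
    "stage": "fruiting",
    "filialGeneration": 2,   # strain generation (e.g. F2, via spores)
    "clonalGeneration": 1,   # clone-of-a-clone counter, tracked separately
    "parent": "wols:specimen/abc123",
}

encoded = json.dumps(specimen, indent=2)
decoded = json.loads(encoded)
```

Keeping the two counters as distinct fields is what lets downstream tools distinguish a first-generation clone of an F2 culture from an F3 raised from spores.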

Show HN: CloudMasters TUI – Shop Boxes Across AWS, Azure, GCP, Hetzner, Vultr https://ift.tt/qneK2Wb

Show HN: CloudMasters TUI – Shop Boxes Across AWS, Azure, GCP, Hetzner, Vultr https://ift.tt/jiIp13g January 6, 2026 at 12:37AM

Show HN: Unicode cursive font generator that checks cross-platform compatibility https://ift.tt/zF5pcYG

Show HN: Unicode cursive font generator that checks cross-platform compatibility Hi HN, Unicode “cursive” and script-style fonts are widely used on social platforms, but many of them silently break depending on where they’re pasted — some render as tofu, some get filtered, and others display inconsistently across platforms. I built a small web tool that explores this problem from a compatibility-first angle. Instead of just converting text into cursive Unicode characters, the tool:
• Generates multiple cursive / script variants based on Unicode blocks
• Evaluates how safe each variant is across major platforms (Instagram, TikTok, Discord, etc.)
• Explains why certain Unicode characters are flagged or unstable on specific platforms
• Helps users avoid styles that look fine in one app but break in another
Under the hood, it’s essentially mapping Unicode script characters and classifying them based on known platform filtering and rendering behaviors, rather than assuming “Unicode = universal.” This started as a side project after repeatedly seeing “fancy text” fail unpredictably in real usage. Feedback, edge cases, or Unicode quirks I may have missed are very welcome. https://ift.tt/sfr2Ixh January 1, 2026 at 07:37PM
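One concrete reason "Unicode = universal" fails here: the Mathematical Script Small letters block (U+1D4B6..U+1D4CF) has holes, because script e, g, and o were encoded earlier among the Letterlike Symbols (U+212F, U+210A, U+2134). A naive offset mapping produces unassigned code points for exactly those letters. A Python sketch of a correct mapping:

```python
# Script e, g, o predate the Mathematical Alphanumeric Symbols block,
# so their slots there are reserved and the Letterlike Symbols are used.
SCRIPT_EXCEPTIONS = {"e": "\u212F", "g": "\u210A", "o": "\u2134"}
SCRIPT_SMALL_A = 0x1D4B6  # start of the Mathematical Script Small block

def to_script(text):
    """Map ASCII lowercase to Mathematical Script letters, others unchanged."""
    out = []
    for ch in text:
        if ch in SCRIPT_EXCEPTIONS:
            out.append(SCRIPT_EXCEPTIONS[ch])
        elif "a" <= ch <= "z":
            out.append(chr(SCRIPT_SMALL_A + ord(ch) - ord("a")))
        else:
            out.append(ch)
    return "".join(out)
```

The capital script letters have even more such holes (B, E, F, H, I, L, M, R), which is part of why these styles render so inconsistently across platform font stacks.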

Sunday, January 4, 2026

Show HN: I made R/place for LLMs https://ift.tt/XK01sZx

Show HN: I made R/place for LLMs I built AI Place, a vLLM-controlled pixel canvas inspired by r/place. Instead of users placing pixels, an LLM paints the grid continuously and you can watch it evolve live. The theme rotates daily. Currently, the canvas is scored using CLIP ViT-B/32 against a prompt (e.g., Pixelart of ${theme}). The highest-scoring snapshot is saved to the archive at the end of each day. The agents work in a simple loop: Input: Theme + image of current canvas Output: Python code to update specific pixel coordinates + One word description Tech: Next.js, SSE realtime updates, NVIDIA NIM (Mistral Large 3/GPT-OSS/Llama 4 Maverick) for the painting decisions Would love feedback! (or ideas for prompts/behaviors to try) https://art.heimdal.dev January 5, 2026 at 01:20AM

Show HN: Hover – IDE style hover documentation on any webpage https://ift.tt/o8VzuAk

Show HN: Hover – IDE style hover documentation on any webpage I thought it would be interesting to have IDE-style hover docs outside the IDE. Hover is a Chrome extension that gives you IDE-style hover tooltips on any webpage: documentation sites, ChatGPT, Claude, etc. How it works:
- When a code block comes into view, the extension detects tokens and sends the code to an LLM (via OpenRouter or custom endpoint)
- The LLM generates documentation for tokens worth documenting, which gets cached
- On hover, the cached documentation is displayed instantly
A few things I wanted to get right:
- Website permissions are granular and use Chrome's permission system, so the extension only runs where you allow it
- Custom endpoints let you skip OpenRouter entirely – if you're at a company with its own infra, you can point it at AWS Bedrock, Google AI Studio, or whatever you have
Built with TypeScript, Vite, and the Chrome extension APIs. Coming to the Chrome Web Store soon. Would love feedback on the onboarding experience and general UX – there were a lot of design decisions I wasn't sure about. Happy to answer questions about the implementation. https://ift.tt/vPQH59D January 5, 2026 at 12:13AM
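The generate-once, serve-from-cache flow described above is essentially memoisation keyed on the code block's contents. A minimal sketch (class and method names are hypothetical, and the LLM call is stubbed; the extension itself is TypeScript):

```python
import hashlib

class DocCache:
    """Generate docs once per unique code block; serve hover lookups from cache."""

    def __init__(self, generate_docs):
        self.generate_docs = generate_docs  # stand-in for the LLM call
        self.store = {}

    def docs_for(self, code):
        # Key on a content hash so the same snippet on different pages
        # (or re-entering the viewport) never triggers a second LLM call.
        key = hashlib.sha256(code.encode("utf-8")).hexdigest()
        if key not in self.store:
            self.store[key] = self.generate_docs(code)
        return self.store[key]
```

This is what makes the hover itself instant: the expensive call happens when the block scrolls into view, not when the user hovers.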

Saturday, January 3, 2026

Show HN: ZELF – A modular ELF64 packer with 22 vintage and modern codecs https://ift.tt/R2MDf6s

Show HN: ZELF – A modular ELF64 packer with 22 vintage and modern codecs https://ift.tt/OIM6ZBW January 4, 2026 at 12:59AM

Show HN: Vibe Coding a static site on a $25 Walmart Phone https://ift.tt/08WYaPH

Show HN: Vibe Coding a static site on a $25 Walmart Phone Hi! I took a cheap $25 Walmart phone and put a static server on it. Why? Just for a fun weekend project. I used Claude Code for most of the setup. I had a blast. It's running Termux, Andronix, nginx, cloudflared and even a Prometheus node exporter. Here's the site: https://ift.tt/Yqx9LGV https://ift.tt/Tq1vK7o January 4, 2026 at 01:09AM

Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer https://ift.tt/Ho2qNUu

Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer I built a system that maps its own "DNA" using AST to enable self-healing capabilities. Instead of a standard release, I’ve hidden the core mapping engine inside a New Year gift file in the repo for those who like to explore code directly. It’s not just a script; it’s the architectural vision behind Ultra Meta. Check the HAPPY_NEW_YEAR.md file for the source https://ift.tt/oTiC6Jf January 4, 2026 at 12:50AM

Friday, January 2, 2026

Show HN: Go-Highway – Portable SIMD for Go https://ift.tt/Z35BevI

Show HN: Go-Highway – Portable SIMD for Go Go 1.26 adds native SIMD via GOEXPERIMENT=simd. This library provides a portability layer so the same code runs on AVX2, AVX-512, or falls back to scalar. Inspired by Google's Highway C++ library. Includes vectorized math (exp, log, sin, tanh, sigmoid, erf) since those come up a lot in ML/scientific code and the stdlib doesn't have SIMD versions.
algo.SigmoidTransform(input, output)
Requires go1.26rc1. Feedback welcome. https://ift.tt/eFYTdnH January 3, 2026 at 04:06AM
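The portability-layer idea (use the widest kernel available at runtime, otherwise fall back to scalar) is a dispatch pattern. Sketched here in Python rather than Go, with a hypothetical kernel registry; the actual library dispatches to real AVX2/AVX-512 code paths:

```python
import math

def sigmoid_scalar(xs):
    """Scalar fallback: always correct, never the fastest."""
    return [1.0 / (1.0 + math.exp(-x)) for x in xs]

# Hypothetical registry, widest-first: each entry is (available, implementation).
# On real hardware "available" would come from CPU feature detection.
KERNELS = [
    (False, None),            # e.g. an AVX-512 kernel, unavailable here
    (False, None),            # e.g. an AVX2 kernel, unavailable here
    (True,  sigmoid_scalar),  # scalar fallback always works
]

def sigmoid_transform(xs):
    """Dispatch to the first available kernel, widest first."""
    for available, impl in KERNELS:
        if available:
            return impl(xs)
```

The payoff of this structure is that caller code is identical on every machine; only the registry contents change with the hardware.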

Show HN: Fluxer – open-source Discord-like chat https://ift.tt/mbF3nWS

Show HN: Fluxer – open-source Discord-like chat Hey HN, and happy new year! I'm Hampus Kraft [1], a 22-year-old software developer nearing completion of my BSc in Computer Engineering at KTH Royal Institute of Technology in Sweden. I've been working on Fluxer on and off for about 5 years, but recently decided to work on it full-time and see how far it could take me. Fluxer is an open source [2] communication platform for friends, groups, and communities (text, voice, and video). It aims for "modern chat app" feature coverage with a familiar UX, while being developed in the open and staying FOSS (AGPLv3). The codebase is largely written in TypeScript and Erlang. Try it now (no email or password required): https://ift.tt/nSOsCkg – this creates an "unclaimed account" (date of birth only) so you can explore the platform. Unclaimed accounts can create/join communities but have some limitations. You can claim your account with email + password later if you want. I've developed this solo , with limited capital from some early supporters and testers. Please keep this in mind if you find what I offer today lacking; I know it is! I'm sharing this now to find contributors and early supporters who want to help shape this into the chat app you actually want. ~~~ Fluxer is not currently end-to-end encrypted, nor is it decentralised or federated. I'm open to implementing E2EE and federation in the future, but they're complex features, and I didn't want to end up like other community chat apps [3] that get criticised for broken core functionality and missing expected features while chasing those goals. I'm most confident on the backend and web app, so that's where I've focused. After some frustrating attempts with React Native, I'm sticking with a mobile PWA for now (including push notification support) while looking into Skip [4] for a true native app. If someone with more experience in native dev has any thoughts, let me know! 
Many tech-related communities that would benefit from not locking information into walled gardens still choose Discord or Slack over forum software because of the convenience these platforms bring, a choice that is often criticised [5][6][7]. I will not only work on adding forums and threads, but also enable opt-in publishing of forums to the open web, including RSS/Atom feeds, to give you the best of both worlds. ~~~ I don't intend to license any part of the software under anything but the AGPLv3, limit the number of messages [8], or have an SSO tax [9]. Business-oriented features like SSO will be prioritised on the roadmap with your support. You'd only pay for support and optionally for sponsored features or fixes you'd like prioritised. I don't currently plan on SaaS, but I'm open to support and maintenance contracts. ~~~ I want Fluxer to become an easy-to-deploy, fully FOSS Discord/Slack-like platform for companies, communities, and individuals who want to own their chat infrastructure, or who wish to support an independent and bootstrapped hosted alternative. But I need early adopters and financial support to keep working on it full-time. I'm also very interested in code contributors since this is a challenging project to work on solo. My email is hampus@fluxer.app. ~~~ There’s a lot more to be said; I’ll be around in the comments to answer questions and fix things quickly if you run into issues. Thank you, and wishing you all the best in the new year! [1] https://ift.tt/CmSxXZe [2] https://ift.tt/w7Ii8S2 [3] https://ift.tt/jD02kEV [4] https://skip.tools/ [5] https://ift.tt/f4EyMqx [6] https://ift.tt/1vAx6sC [7] https://ift.tt/0D1BiRU [8] https://ift.tt/mNR2wc0 [9] https://sso.tax/ https://fluxer.app January 3, 2026 at 01:30AM

Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/yPeR3vZ

Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/HtG2hgl January 3, 2026 at 12:15AM

Thursday, January 1, 2026

Show HN: Feature detection exploration in Lidar DEMs via differential decomp https://ift.tt/QOhnU3u

Show HN: Feature detection exploration in Lidar DEMs via differential decomp I'm not a geospatial expert — I work in AI/ML. This started when I was exploring LiDAR data with agentic assistance and noticed that different signal decomposition methods revealed different terrain features. The core idea: if you systematically combine decomposition methods (Gaussian, bilateral, wavelet, morphological, etc.) with different upsampling techniques, each combination has characteristic "failure modes" that selectively preserve or eliminate certain features. The differences between outputs become feature-specific filters. The framework tests 25 decomposition × 19 upsampling methods across parameter ranges — about 40,000 combinations total. The visualization grid makes it easy to compare which methods work for what. Built in Cursor with Opus 4.5, NumPy, SciPy, scikit-image, PyWavelets, and OpenCV. Apache 2.0 licensed. I'd appreciate feedback from anyone who actually works with elevation data. What am I missing? What's obvious to practitioners that I wouldn't know? https://ift.tt/XMCSn1p January 1, 2026 at 05:59AM
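The core idea above — that two decomposition methods with different "failure modes" can be differenced into a feature-specific filter — can be sketched in a few lines. This is a minimal illustration with NumPy/SciPy on a synthetic DEM, not code from the linked framework: a Gaussian blur (smears everything) and a morphological grey opening (selectively removes thin positive features) produce residuals whose difference highlights a narrow ridge while mostly cancelling on the broad slope.

```python
import numpy as np
from scipy import ndimage

def residual(dem, smoother):
    """Fine-scale detail: whatever the smoother removes from the DEM."""
    return dem - smoother(dem)

# Synthetic 64x64 "DEM": a gentle planar slope plus one narrow linear ridge.
x = np.linspace(0.0, 1.0, 64)
dem = np.add.outer(x, x) * 10.0   # broad slope
dem[30:34, :] += 2.0              # thin ridge, 4 pixels wide

# Two decompositions with different characteristic failure modes.
gauss = lambda z: ndimage.gaussian_filter(z, sigma=3)    # blurs all scales alike
morph = lambda z: ndimage.grey_opening(z, size=(9, 9))   # erases thin bright features

# The difference between the two residuals acts as a ridge-selective filter:
diff = residual(dem, morph) - residual(dem, gauss)

ridge_score = np.abs(diff[30:34]).mean()  # response on the ridge rows
bg_score = np.abs(diff[:20]).mean()       # response on plain-slope rows
print(ridge_score, bg_score)
```

The ridge rows score well above the background rows, which is the post's point in miniature: neither residual alone isolates the ridge, but their difference does, because only one of the two smoothers "fails" on that feature class.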

Show HN: VectorDBZ, a desktop GUI for vector databases https://ift.tt/Dx3o9n0

Show HN: VectorDBZ, a desktop GUI for vector databases Hi HN, I built VectorDBZ, a cross-platform desktop app for exploring and analyzing vector databases like Qdrant, Weaviate, Milvus, and ChromaDB. It lets you browse collections, inspect vectors and metadata, run similarity searches, and visualize embeddings without writing custom scripts. GitHub (downloads and issues): https://ift.tt/fxFH5C4 Feedback welcome. If it’s useful, starring the repo helps keep me motivated. Thanks. https://ift.tt/fxFH5C4 January 1, 2026 at 08:55PM
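For context on what "without writing custom scripts" means above: a similarity search over a collection is typically a short throwaway script like the following. This is a hypothetical, in-memory cosine-similarity sketch in NumPy (not VectorDBZ's code, and not any particular database client) of the operation the GUI exposes interactively.

```python
import numpy as np

def cosine_top_k(query, vectors, k=3):
    """Return indices and scores of the k vectors most similar to query."""
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                         # cosine similarity per row
    order = np.argsort(scores)[::-1][:k]   # best matches first
    return order, scores[order]

rng = np.random.default_rng(0)
vectors = rng.normal(size=(100, 8))                 # stand-in "collection"
query = vectors[42] + 0.01 * rng.normal(size=8)     # near-duplicate of row 42

idx, scores = cosine_top_k(query, vectors)
print(idx, scores)
```

Against a real Qdrant/Weaviate/Milvus/ChromaDB instance you would issue the equivalent query through that database's client library; the GUI's value is doing this (plus browsing vectors and metadata) without writing the script at all.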

Show HN: Please hack my C webserver (it's a collaborative whiteboard) https://ift.tt/yX7DKEw

Show HN: Please hack my C webserver (it's a collaborative whiteboard) Source code: https://ift.tt/CXb0PsN https://ced.quest/draw/ Februa...