Sunday, November 30, 2025

Show HN: Memory Lane – bootstrap your naive Claude instances with their history https://ift.tt/nqVpQYC

Show HN: Memory Lane – bootstrap your naive Claude instances with their history https://ift.tt/rWQt41y December 1, 2025 at 02:34AM

Show HN: I Built Tinyfocus – A Minimal Tool to Help Solo Founders Focus https://ift.tt/qmkCIgf

Show HN: I Built Tinyfocus – A Minimal Tool to Help Solo Founders Focus Hi HN, I just launched Tinyfocus, a small productivity tool designed specifically for solo founders and builders. The goal is simple: help you focus on what matters and get more done in less time. Here’s what Tinyfocus does: Lets you track your top tasks and prioritize efficiently. Provides micro dashboards to keep your daily focus in check. Lightweight, no distractions, no fluff. I built it entirely by myself, iterating in public, and I wanted to share it with the community to get feedback. It’s been crazy seeing how a simple tool can make such a difference in daily focus, especially when you’re juggling multiple projects as a solo founder. Check it out here: tinyfoc.us I’d love to hear your thoughts – any feedback, feature ideas, or bugs you notice. Thanks! https://ift.tt/VPiq79Q November 30, 2025 at 11:35PM

Show HN: Unmarker.it – Client-Side Tool to Disrupt Invisible AI Watermarks https://ift.tt/klBLYZy

Show HN: Unmarker.it – Client-Side Tool to Disrupt Invisible AI Watermarks I built a browser-only tool that disrupts invisible AI watermarks using Canvas, geometry, noise, and JPEG recompression. No backend, no uploads, no tracking. Pipeline:
- Shake – random rotation (±0.5°) plus a slight zoom
- Stir – low-amplitude RGB noise via getImageData
- Crush – JPEG recompression at ~0.85 quality
Tested against SynthID (Google Gemini's AI watermarking), the watermark went undetected in all tests. Pipeline improvements? What would you add/change? GitHub: https://ift.tt/1zSylDX https://ift.tt/PcXzWvl December 1, 2025 at 12:07AM
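
For reference, a rough Python/Pillow approximation of the three stages described above (the real tool runs client-side with Canvas and getImageData; this is only a sketch of the idea, not the author's code):

import io
import random
import numpy as np
from PIL import Image

def disrupt_watermark(path: str, out_path: str) -> None:
    img = Image.open(path).convert("RGB")

    # Shake: small random rotation (±0.5°) plus a slight zoom-and-crop
    angle = random.uniform(-0.5, 0.5)
    img = img.rotate(angle, resample=Image.BICUBIC, expand=False)
    w, h = img.size
    crop = max(1, int(min(w, h) * 0.005))  # ~0.5% zoom
    img = img.crop((crop, crop, w - crop, h - crop)).resize((w, h), Image.BICUBIC)

    # Stir: low-amplitude RGB noise (the browser version uses getImageData)
    arr = np.asarray(img).astype(np.int16)
    noise = np.random.randint(-2, 3, arr.shape, dtype=np.int16)
    img = Image.fromarray(np.clip(arr + noise, 0, 255).astype(np.uint8))

    # Crush: JPEG recompression at roughly 0.85 quality
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    Image.open(io.BytesIO(buf.getvalue())).save(out_path)

disrupt_watermark("input.png", "output.jpg")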

Saturday, November 29, 2025

Show HN: I made a free log anonymizer in the browser https://ift.tt/6AVkH97

Show HN: I made a free log anonymizer in the browser https://ift.tt/cAqey8p November 30, 2025 at 04:05AM

Show HN: No Environment Setups Anymore https://ift.tt/irejCRM

Show HN: No Environment Setups Anymore Hi everyone, for the last 7 months I have been studying all the attempts made to eliminate codebase environment setup. Here's my product, which is a leap in the same direction and will help you run any codebase on the relevant machine. Check it out at gitarsenal.dev/; we were also ranked 6th on Product Hunt. https://ift.tt/VutdpMg November 30, 2025 at 01:27AM

Show HN: Zero-power photonic language model–code https://ift.tt/nCV1DgR

Show HN: Zero-power photonic language model–code The model uses a 1024-dimensional complex Hilbert space with 32 layers of programmable Mach–Zehnder meshes (Reck architecture) and derives token probabilities directly via the Born rule. Despite using only unitary operations and no attention mechanism, a 1024×32 model achieves coherent TinyStories generation after < 1.8 hours of training on a single consumer GPU. This is Part 1 - the next step is physical implementation with $50 of optics from AliExpress. https://zenodo.org/records/17764289 November 30, 2025 at 12:15AM
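
A conceptual numpy sketch of the Born-rule readout described above: apply a stack of unitary layers to a state vector and square the amplitudes to get token probabilities. It deliberately ignores the Mach–Zehnder mesh parameterization and training, and only illustrates why purely unitary layers still produce a valid probability distribution (dimensions taken from the post, everything else assumed):

import numpy as np

DIM, LAYERS = 1024, 32
rng = np.random.default_rng(0)

def random_unitary(n: int) -> np.ndarray:
    # QR decomposition of a random complex matrix yields a unitary matrix
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    d = np.diag(r)
    return q * (d / np.abs(d))

layers = [random_unitary(DIM) for _ in range(LAYERS)]

state = np.zeros(DIM, dtype=complex)
state[0] = 1.0                       # placeholder for an encoded context state
for U in layers:
    state = U @ state

probs = np.abs(state) ** 2           # Born rule: p(token) = |amplitude|^2
assert np.isclose(probs.sum(), 1.0)  # unitarity preserves normalization
next_token = int(np.argmax(probs))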

Friday, November 28, 2025

Show HN: TaskHub – Update https://ift.tt/xANCd0B

Show HN: TaskHub – Update https://ift.tt/gLvcn4I November 29, 2025 at 01:09AM

Show HN: Local-first RAG for PDF user manuals, datasheets https://ift.tt/0MWp6ov

Show HN: Local-first RAG for PDF user manuals, datasheets I work on embedded firmware for my day job, and I've found LLMs to be useful for answering questions about technical errata. But they tend to be bad at answering highly specific questions without using some kind of search tool (if they decide to use one at all), and some user manuals are far too large to fit into a context window. I built askdocs-mcp as a way to give agents a more direct route to searching through a project's source-of-truth documents. My design constraints were that it run 100% locally, as some manuals are under NDA, that it start up fast, and that it let me experiment with different embedding and language models. It was built with ollama in mind, but if you can't run models locally, it will work with any OpenAI-compatible endpoint. Features:
- Incrementally builds and caches the set of docs. Initial startup can take a while as PDFs are chunked and run through an embedding model, but after that, startup is near instant.
- Uses the filesystem as the database - you only need `ollama` running somewhere so the tool can access an embedding model and a natural-language model.
- Provides an `ask_docs` tool for getting natural-language answers about what the documentation says, annotated with the page numbers the information came from. Those can be passed to the `get_doc_page` tool to retrieve the full page if the agent needs additional context.
Because I'm providing the exact set of documents that apply to my project, I see fewer hallucinations and less rabbit-hole chasing. The agent isn't relying (as much) on its latent space to answer questions, and it avoids using a web search tool, which might find subtly different part numbers or protocol versions. It saves precious context as well, because the parent agent gets a concise version of what it's looking for instead of doing the "searching" itself by loading large chunks of the document into its context. I'm sure there are improvements that can be made, e.g. to document chunking or the "system prompt" the tool gives to the language model - I'd love to hear your feedback, especially if you find this useful. Thanks! https://ift.tt/VTWQB4Y November 29, 2025 at 12:17AM
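
A minimal sketch of the local RAG loop this describes (chunk the PDF text, embed chunks with a local model via ollama, cosine-match against the question, answer from the best chunks). Model names, chunk sizes, and the exact client calls are assumptions, and this is not the askdocs-mcp implementation:

import numpy as np
import ollama  # pip install ollama; assumes an ollama server is reachable

def embed(text: str) -> np.ndarray:
    return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

def chunk(text: str, size: int = 1200, overlap: int = 200) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def ask_docs(question: str, pages: list[str], top_k: int = 4) -> str:
    chunks = [c for page in pages for c in chunk(page)]
    vectors = np.stack([embed(c) for c in chunks])   # cache these to disk in practice
    q = embed(question)
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(chunks[i] for i in np.argsort(scores)[-top_k:])
    reply = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user",
                   "content": f"Answer from these excerpts only:\n{context}\n\nQ: {question}"}],
    )
    return reply["message"]["content"]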

Show HN: Design a commercial bakery in an afternoon, not for $10k https://ift.tt/mDwe3gv

Show HN: Design a commercial bakery in an afternoon, not for $10k Hi HN, I'm Rafael Mauricio, the founder of RF Modern Bakery Design. For the last decade, I've worked with hundreds of talented bakers. The same frustrating pattern kept emerging: they had the culinary skills to build a successful business, but were completely blocked by the monumental task of designing their commercial kitchen. A brilliant baker shouldn't have to also become a construction manager, HVAC expert, and workflow engineer. The traditional process is a black hole of time and money—taking 3-6 months and $10,000+ in consulting fees just to get a viable floor plan. Most independent operators can't afford this. We built RF Modern Bakery Design to bridge that gap. The Product: it's a dual-sided service.
- Custom Bakery Design: the time-tested, professional service for creating full, build-ready bakery concepts.
- Online Bakery Design Courses: this is the core of our "Show HN." We've productized our decade of expertise into video courses that teach the principles of efficient layout, equipment selection, and workflow optimization. It's like having a senior designer guide you through the entire process, empowering you to design your own space or intelligently manage a contractor.
The Tech Stack: we keep it simple and focused on delivery: a static site that lets us pour 100% of our energy into creating high-quality, actionable lessons and resources. We're launching this to solve the "barrier to entry" problem in the food service industry. It's for aspiring bakery owners, culinary graduates, and even existing owners planning a renovation who need a clear, professional path to a functional and profitable layout without the prohibitive upfront cost. We'd love for you to check it out and are eager for any feedback. Landing Page: https://ift.tt/RZCcHs0 Happy to answer any questions about the business model, the design principles we teach, the build process, or the bakery industry in general. https://ift.tt/RZCcHs0 November 29, 2025 at 12:31AM

Show HN: Pulse 2.0 – Live co-listening rooms where anyone can be a DJ https://ift.tt/6l4JjQa

Show HN: Pulse 2.0 – Live co-listening rooms where anyone can be a DJ I wanted to listen to music with friends who live far away. Not "watch a YouTube video together" - actually share what I'm hearing in real-time, like we're in the same room. Pulse is what came out of that. Anyone can host a live audio stream from their browser tab or system audio. Listeners join, music recognition identifies tracks automatically, and there's chat with 7TV emotes. No account required - you get an anonymous code and you're in. We're running demo rooms that stream NTS Radio and SomaFM 24/7 (indie project, not affiliated - we backlink to the original stations). There's also a "Money For Nothing 24/7" room if you want to loop that Dire Straits instrumental forever. Think of it as co-listening infrastructure. Bedroom DJs, listening parties, or just sharing your current vibe. https://473999.net/pulse November 29, 2025 at 12:09AM

Thursday, November 27, 2025

Show HN: FounderPace – A leaderboard for founders who run https://ift.tt/uB9K6tU

Show HN: FounderPace – A leaderboard for founders who run https://ift.tt/swIF0Mq November 28, 2025 at 05:18AM

Show HN: I built a free astro and tailwind static site for GitHub pages https://ift.tt/IubC8M6

Show HN: I built a free astro and tailwind static site for GitHub pages Using my GitHub Pro+ and VS Code setup, this is a demonstration of how good a site I can build essentially 100% for free, with free hosting (if coded manually, without a $50 subscription). I went completely overboard on purpose; it's probably 99% useless for a real production deployment, but it might be useful for mini blogs. I don't even use the new GitHub Spark or whatever - it's too slow compared to 1k+ line edits every couple of minutes. I'm obviously working on a ton of other things I won't make public yet, but will in the future. https://tariqdude.github.io/Github-Pages-Project-v1/ November 28, 2025 at 03:47AM

Show HN: Whole-home VPN router with hardware kill switch (OpenWrt and WireGuard) https://ift.tt/HyxLlFo

Show HN: Whole-home VPN router with hardware kill switch (OpenWrt and WireGuard) With internet censorship and surveillance on the rise - e.g. the UK Online Safety Bill (July 2025) and Australia's social media legislation (Dec 2025) introducing mandatory age verification (read: an initial step on the pathway to social credit) - I wanted a privacy-first solution that protects browsing history from ISPs and third-party verification services, but not one that requires you to be an Einstein to deploy. This stack turns a Raspberry Pi (or any OpenWrt-compatible device) into a network-wide VPN gateway. Key features:
- Hardware kill switch: VPN down = no internet (not a software rule that can leak)
- AmneziaWG obfuscation for DPI-resistant connections
- Optional AdGuard Home for DNS filtering
- Works for all devices, including smart TVs and IoT gear that can't run VPN apps
Not a techie? The README is optimized for AI-assisted deployment. Feed it to your LLM of choice (Claude, GPT, etc.) and it can walk you through the entire setup for your specific hardware. Mullvad-focused but works with any WireGuard provider. MIT license. Docker deploy in testing (coming soon). https://ift.tt/tqjlk1s November 28, 2025 at 04:20AM

Show HN: No Black Friday – A directory of fair-price brands https://ift.tt/xSvBXKQ

Show HN: No Black Friday – A directory of fair-price brands The idea came from noticing how many brands inflate prices only to discount them later. Some companies refuse to do that, and I wanted a place to highlight them. If you know a company that doesn’t participate in Black Friday or similar discount events, please add it or share it here. I’d love to grow the list with help from the community. Manuel https://ift.tt/fL5C74H November 28, 2025 at 02:50AM

Wednesday, November 26, 2025

Show HN: Infinite scroll AI logo generator built with Nano Banana https://ift.tt/8Ei7LFy

Show HN: Infinite scroll AI logo generator built with Nano Banana https://ift.tt/hgOd830 November 27, 2025 at 01:04AM

Show HN: Yolodex – real-time customer enrichment API https://ift.tt/HQW4qBT

Show HN: Yolodex – real-time customer enrichment API hey hn, i’ve been working on an api to make it easy to know who your customers are, i would love your feedback.
what it does: send an email address and the api returns a json profile built from public data - things like name, country, age, occupation, company, social handles and interests. it’s a single endpoint (you can hit this endpoint without auth to get a demo of what it looks like):
curl https://ift.tt/edrfE3x \
  --request POST \
  --header 'Content-Type: application/json' \
  --data '{"email": "john.smith@example.com"}'
everyone gets 100 free, and pricing is per _enriched profile_: 1 email ~ $0.03, but if i don’t find anything i won’t charge you.
why i built it / what’s different: i once built open source intelligence tooling to investigate financial crime, but for a recent project i needed to find out more about some customers. i tried apollo, clearbit, lusha, clay, etc. but i found: 1. outdated data - the data was out-of-date and misleading, emails didn’t work, etc. 2. dubious data - i found lots of data, like personal mobile numbers, that i’m pretty sure no one shared publicly or knowingly opted into being sold on 3. aggressive pricing - monthly/annual commitments, large gaps between plans, pay the same for empty profiles 4. painful setup - hard to find the right api, set it up, test it out, etc. i used knowledge from criminal investigations to build an api that uses some of the same research patterns and entity resolution to find standardized information about people that is: 1. real-time 2. public info only (osint) 3. transparent, simple pricing 4. 1 min to set up.
what i’d love feedback on: * speed: are responses fast enough? would you trade off speed for better data coverage? * coverage: which fields would you use (or what others do you need)? * pricing: is the pricing model sane? * use-cases: what do you need this type of data for (i.e. example use cases)? * accuracy: any examples where i got it badly wrong? happy to answer technical questions in the thread and give more free credits to help anyone test. https://api.yolodex.ai November 24, 2025 at 07:32PM
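
A Python equivalent of the curl call above; the real path sits behind the shortened link, so the URL below is only a guess based on the api.yolodex.ai domain mentioned in the post:

import requests

resp = requests.post(
    "https://api.yolodex.ai/enrich",   # assumed path - check the docs for the real endpoint
    headers={"Content-Type": "application/json"},
    json={"email": "john.smith@example.com"},
    timeout=30,
)
resp.raise_for_status()
profile = resp.json()  # name, country, age, occupation, company, social handles, interests
print(profile)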

Show HN: Safe-NPM – only install packages that are +90 days old https://ift.tt/oG9In2x

Show HN: Safe-NPM – only install packages that are +90 days old This past quarter has been awash with sophisticated npm supply chain attacks like Shai-Hulud ( https://ift.tt/CKMe0h6... ) and the Chalk/debug compromise ( https://www.wiz.io/blog/widespread-npm-supply-chain-attack-b... ). This CLI helps protect users from recently compromised packages by only downloading packages that have been public for a while (the default is 90 days or older).
Install: npm install -g @dendronhq/safe-npm
Usage: safe-npm install react@^18 lodash
How it works:
- Queries the npm registry for all versions matching your semver range
- Filters out anything published in the last 90 days
- Installs the newest "aged" version
Limitations:
- Won't protect against packages that are malicious from day one
- Doesn't control transitive dependencies (yet - looking into overrides)
- Delays access to legitimate new features
This is meant as an 80/20 measure against recently compromised npm packages and is not a silver bullet. Please give it a try and let me know if you have feedback. https://ift.tt/C8WFPtb November 24, 2025 at 03:44AM
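
The core "aged version" check is easy to reproduce: the npm registry's packument at registry.npmjs.org/<name> includes a "time" map of version to publish timestamp. A Python sketch of that filter (the real tool is an npm CLI and also resolves your semver range, which is omitted here):

from datetime import datetime, timedelta, timezone
import requests

def latest_aged_version(package: str, min_age_days: int = 90) -> str | None:
    data = requests.get(f"https://registry.npmjs.org/{package}", timeout=30).json()
    cutoff = datetime.now(timezone.utc) - timedelta(days=min_age_days)
    aged = []
    for version, published in data.get("time", {}).items():
        if version in ("created", "modified"):
            continue
        ts = datetime.fromisoformat(published.replace("Z", "+00:00"))
        if ts <= cutoff:
            aged.append(version)
    # "time" is ordered by publish date, so the last entry is the newest aged version
    return aged[-1] if aged else None

print(latest_aged_version("lodash"))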

Show HN: Fixing Google Nano Banana Pixel Art with Rust https://ift.tt/P63Zp7z

Show HN: Fixing Google Nano Banana Pixel Art with Rust https://ift.tt/Dc2S9yj November 26, 2025 at 09:16PM

Tuesday, November 25, 2025

Show HN: Secure private diffchecker with merge support https://ift.tt/srqSQnL

Show HN: Secure private diffchecker with merge support Built a minimal diff checker with a merge feature. 1. Supports 25K+ lines. 2. Character-level instant diff. 3. Diff merge feature. 4. Shareable links. 5. 100% secure - all diff computation happens in the browser. No other website offers a high-quality diff checker with merge support in a browser-only implementation. Please review the website in detail and share feedback. https://diffchecker.dev November 26, 2025 at 12:30AM
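
For anyone curious what a character-level diff boils down to, here is a tiny illustration with Python's difflib (the site itself computes diffs in the browser; this just shows the opcode idea):

from difflib import SequenceMatcher

a, b = "const total = 41;", "const total = 42; // fixed"
for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
    print(f"{op:8} a[{i1}:{i2}]={a[i1:i2]!r:14} b[{j1}:{j2}]={b[j1:j2]!r}")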

Show HN: Superglue – OSS integration tool that understands your legacy systems https://ift.tt/7VQF5zM

Show HN: Superglue – OSS integration tool that understands your legacy systems If you've ever worked in a large company, you've probably encountered "shadow infrastructure": scripts nobody understands or custom connectors written once and never touched again. This glue layer isn't documented, isn't owned by anyone, and tends to break when systems are upgraded or someone leaves. It's also the part everybody dreads working on, because it's hard to understand, painful to work with, and full of unknown unknowns. We built superglue so that engineers stop wasting time on deciphering legacy APIs and documentation. superglue ingests existing glue code, SQL, configs, docs, OpenAPI specs and reverse-engineers what the system is actually doing. It then maps dependencies and regenerates everything as clean javascript code that can run directly or be exposed via MCP or SDK. It also monitors API changes and schema drift, and automatically repairs integrations when upstream systems change. In short: It turns legacy integrations into code you can easily understand, test, and update. So that engineers can do more exciting feature work, and companies can migrate and upgrade systems faster. Think of it as: a context engine + code generator + integration runtime for legacy glue. What we'd love feedback on - How do you deal with "nobody knows what this script does" situations? - What would you want to know about your legacy systems? OSS/community version: https://ift.tt/VcjTFBU More info: https://superglue.ai Happy to go deeper on the technical details. https://superglue.ai November 25, 2025 at 09:58PM

Monday, November 24, 2025

Show HN: I built an interactive HN Simulator https://ift.tt/jb4xIDs

Show HN: I built an interactive HN Simulator Hey HN! Just for fun, I built an interactive Hacker News Simulator. You can submit text posts and links, just like the real HN. But on HN Simulator, all of the comments are generated by LLMs and appear instantly. The best way to use it (IMHO) is to submit a text post or a curl-able URL here: https://news.ysimulator.run/submit . You don't need an account to post. When you do that, various prompts will be built from a library of commenter archetypes, moods, and shapes. The AI commenters will actually respond to your text post and/or submitted link. I really wanted it to feel real, and I think the project mostly delivers on that. When I was developing it, I kept getting confused about which tab was the "real" HN and which was the simulator, and accidentally submitted some junk to HN. (Sorry dang and team – I did clean up after myself). The app itself is built with Node + Express + Postgres, and all of the inference runs on Replicate. Speaking of Replicate, they generously loaded me up with some free credits for the inference – so shoutout to the team there. The most technically interesting part of the app is how the comments work. You can read more about it here, as well as explore all of the available archetypes, moods, and shapes that get combined into prompts: https://news.ysimulator.run/comments.html I hope you all have as much fun playing with it as I did making it! https://news.ysimulator.run/news November 24, 2025 at 11:22PM
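
A toy sketch of how a comment prompt might be assembled from an archetype, a mood, and a shape, as described above. The specific strings are invented for illustration; the real library is documented at news.ysimulator.run/comments.html:

import random

ARCHETYPES = ["skeptical senior engineer", "enthusiastic early adopter", "pedantic licensing expert"]
MOODS = ["mildly annoyed", "curious", "nostalgic"]
SHAPES = ["one-line quip", "numbered critique", "personal anecdote ending in a question"]

def build_comment_prompt(title: str, text: str) -> str:
    archetype, mood, shape = (random.choice(xs) for xs in (ARCHETYPES, MOODS, SHAPES))
    return (
        f"You are a Hacker News commenter: a {archetype}, currently {mood}.\n"
        f"Write a reply shaped as: {shape}.\n"
        f"Respond to this submission:\nTitle: {title}\n\n{text}"
    )

print(build_comment_prompt("Show HN: My new tool", "I built a thing..."))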

Show HN: I built an interactive map of jobs at top AI companies https://ift.tt/EnCTf9O

Show HN: I built an interactive map of jobs at top AI companies I built a live interactive map that shows where top AI companies hire around the world. I collected this data for a hackathon project. Many ATS providers have a public API that you can hit with a company's slug to get its open jobs. The hardest part was finding the companies. I tried Firecrawl, but it returned around 200 companies per provider, which wasn’t enough for me. Then I tried SERPAPI, but it was expensive. I ended up using SearXNG to discover companies by ATS type and fetch their job postings. This produced a large dataset of 200k+ jobs (I only use a subset, as processing everything would have taken too much time). A few days ago, I decided to build a visualization of the data, as I didn’t know what to do with it and wanted people to benefit. I kept catching myself wanting to ask simple questions like “show only research roles in Europe” or “filter for remote SWE positions” (and I had plenty of free AI credits), so I added a small LLM interface that translates natural language into filters on the map. The map is built with Vite + React + Mapbox. Live demo: https://map.stapply.ai GitHub (data): https://ift.tt/MbUG0R7 Would love feedback, ideas for improvement, or contributions. https://map.stapply.ai November 24, 2025 at 11:38PM
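
As an example of the "public ATS API + company slug" approach mentioned above, Greenhouse exposes a public job-board endpoint per board token (Lever, Ashby, and others have similar ones). Treat the exact URL and fields as something to verify against the provider's docs:

import requests

def greenhouse_jobs(board_token: str) -> list[dict]:
    url = f"https://boards-api.greenhouse.io/v1/boards/{board_token}/jobs"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json().get("jobs", [])

for job in greenhouse_jobs("openai")[:5]:   # the board token here is just an example slug
    print(job.get("title"), "-", job.get("location", {}).get("name"))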

Sunday, November 23, 2025

Show HN: Search tool for "Ask HN: What Are You Working On?" https://ift.tt/l19WLJG

Show HN: Search tool for "Ask HN: What Are You Working On?" Hi all, I created a public dashboard for searching / chatting with "What are you working on?" posts. I'd love to hear any feedback that you have. https://ift.tt/7isO5Qq November 23, 2025 at 10:52PM

Show HN: Genesis DB now provides a full gRPC API alongside HTTP https://ift.tt/sthOICu

Show HN: Genesis DB now provides a full gRPC API alongside HTTP The Protobuf definition is published openly: https://ift.tt/emQWVBJ Genesis DB also exposes gRPC Server Reflection, so clients can introspect the service without needing the .proto file locally (useful for tools like grpcurl, kreya, or dynamic client generation). https://ift.tt/Gm4uyQY November 23, 2025 at 11:27PM

Saturday, November 22, 2025

Show HN: RealDeed – Tokenize Real Estate into Digital Assets https://ift.tt/DFhaWQJ

Show HN: RealDeed – Tokenize Real Estate into Digital Assets RealDeed is MENA’s advanced real estate tokenization platform, licensed under the Dubai International Financial Centre (DIFC). Our mission is simple: make real estate “digitally alive” without forcing property owners or developers into securities, fundraising, or STO regulations on day one. Real estate globally is still stuck in PDFs, local land offices, and offline processes. Tokenization exists, but almost all solutions jump straight into securities, fractionalization, investor pooling, and STOs, which triggers regulation and makes experimentation nearly impossible. We built RealDeed because property owners kept asking us the same question: “Can I put my real estate on blockchain as a digital twin without selling ownership or offering securities?” So that’s exactly what we built. Today, we’re launching RealDeed, a platform that turns physical real estate into digital assets or twins, represented as utility tokens pegged to land area.
What RealDeed actually does: RealDeed allows property owners and developers to:
1. Upload property documents - title deed, floor plan, DLD or RERA documents, etc.
2. Verify ownership - KYC + property verification.
3. Define a tokenization model - for example, 32 sqm → 320,000 utility tokens, 120 sqm → 1,200,000 tokens. Tokens represent digital land, not ownership.
4. Mint the digital twin on-chain - we generate tokens on the XRP Ledger and EVM networks.
5. Deliver tokens to the owner’s Web3 wallet.
6. Optional integrations - where legally allowed, owners can connect their digital twins to broker-dealer platforms, DeFi platforms, fintech apps, metaverse/spatial systems, and partner proptech tools.
RealDeed creates the first interoperable property layer on blockchain where a Dubai villa, a Mumbai apartment, and a London flat can all exist as standardized digital twins, usable across APIs, developer tools, and digital ecosystems. This enables global property mapping, unified digital registries, digital twin trading such as gift deeds and selling tokens (not property trading), and cross-border developer collaboration. Blockchains finally have a way to “understand” property.
Regulatory positioning: RealDeed is licensed under a DIFC Innovation Licence (PropTech/DLT & Tokenization) and does not provide financial services. It is not a securities platform, not selling tokens, not accepting public funds, and not fractional ownership. Think of us as “Stripe for property tokenization.” Founded by Malhar Jajoo & Pratz (Prathmesh). Try it / join the waitlist: realdeed.co https://ift.tt/U3bu4Dx November 23, 2025 at 02:16AM
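
The two worked examples above (32 sqm → 320,000 tokens, 120 sqm → 1,200,000 tokens) both imply 10,000 utility tokens per square meter. A trivial helper under that assumption (the platform presumably lets owners define their own ratio):

TOKENS_PER_SQM = 10_000  # implied by the examples in the post; likely configurable per property

def tokens_for_area(square_meters: float) -> int:
    return int(square_meters * TOKENS_PER_SQM)

assert tokens_for_area(32) == 320_000
assert tokens_for_area(120) == 1_200_000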

Show HN: HN Insights – HN front page summaries https://ift.tt/lg9BaPp

Show HN: HN Insights – HN front page summaries Hi HN, Sharing HN Insights, a webapp I built that highlights trending themes and summarizes discussion threads from the front page. This started earlier this week as a toy project to test out Gemini 3 Pro in aistudio. I found the POC useful, so I decided to productionize it. I've included the original seed prompt below: > Create an app that creates a summary of the comment threads for hacker news front page. The UX should be similar, but clicking the comments instead opens a summary. The summary is generated when clicked so it can gather new threads. To productionize, I used Claude Code and heavy use of Agent SOPs ( https://ift.tt/Vl9ZOS1 ). https://hn-insights.com November 23, 2025 at 02:04AM

Show HN: Forty.News – Daily news, but on a 40-year delay https://ift.tt/V90mdcO

Show HN: Forty.News – Daily news, but on a 40-year delay This started as a reaction to a conversational trope. Despite being a tranquil place, even conversations at my yoga studio often start with, "Can you believe what's going on right now?" with that angry/scared undertone. I'm a news avoider, so I usually feel some smug self-satisfaction in those instances, but I wondered if there was a way to satisfy the urge to doomscroll without the anxiety. My hypothesis: Apply a 40-year latency buffer. You get the intellectual stimulation of "Big Events" without the fog of war, because you know the world didn't end. 40 years creates a mirror between the Reagan Era and today. The parallels include celebrity populism, Cold War tensions (Soviets vs. Russia), and inflation economics. The system ingests raw newspaper scans and uses a multi-step LLM pipeline to generate the daily edition:
- OCR & Ingestion: converts raw pixels to text.
- Scoring: grades events on metrics like Dramatic Irony and Name Recognition to surface stories that are interesting with hindsight. For example, a dry business blurb about Steve Jobs leaving Apple scores highly because the future context creates a narrative arc.
- Objective Fact Extraction: extracts a list of discrete, verifiable facts from the raw text.
- Generation: uses those extracted facts as the ground truth to write new headlines and story summaries.
I expected a zen experience. Instead, I got an entertaining docudrama. Historical events are surprisingly compelling when serialized over weeks. For example, on Oct 7, 1985, Palestinian hijackers took over the cruise ship Achille Lauro. Reading this on a delay in 2025, the story unfolded over weeks: first they threw an American in a wheelchair overboard, then US fighter jets forced the escape plane to land, leading to a military standoff between US Navy SEALs and the Italian Air Force. Unbelievably, the US backed down, but the later diplomatic fallout led the Italian Prime Minister to resign. It hits the dopamine receptors of the news cycle, but with the comfort of a known outcome. Stack: React, Node.js (Caskada for the LLM pipeline orchestration), Gemini for OCR/Scoring. Link: https://forty.news (No signup required, it's only if you want the stories emailed to you daily/weekly) https://forty.news November 23, 2025 at 12:17AM
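
A rough skeleton of that four-stage pipeline, purely for illustration: the llm() call, the prompts, and the field names are placeholders, and the real system runs Gemini through Caskada rather than anything shown here:

from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

@dataclass
class Story:
    raw_text: str
    dramatic_irony: float = 0.0
    name_recognition: float = 0.0
    facts: list[str] = field(default_factory=list)
    article: str = ""

def ingest(scan_text: str) -> Story:       # 1. OCR & ingestion (OCR itself omitted)
    return Story(raw_text=scan_text)

def score(story: Story) -> Story:          # 2. grade events with hindsight metrics
    story.dramatic_irony = float(llm(f"Rate 0-1 the dramatic irony, knowing 2025:\n{story.raw_text}"))
    story.name_recognition = float(llm(f"Rate 0-1 how recognizable the people are today:\n{story.raw_text}"))
    return story

def extract_facts(story: Story) -> Story:  # 3. objective fact extraction
    story.facts = llm(f"List discrete, verifiable facts, one per line:\n{story.raw_text}").splitlines()
    return story

def generate(story: Story) -> Story:       # 4. write the edition from the facts only
    story.article = llm("Write a headline and summary using only these facts:\n" + "\n".join(story.facts))
    return story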

Show HN: Santamon – Lightweight macOS threat detection agent https://ift.tt/CYPJx1W

Show HN: Santamon – Lightweight macOS threat detection agent A lightweight macOS detection agent that taps into Santa’s Endpoint Security telemetry, runs CEL detection rules locally on-device, and only ships high-signal alerts to a tiny backend. Basically a poor man’s macOS EDR for home labs and small fleets! https://ift.tt/QLNOzWf November 22, 2025 at 11:11PM

Friday, November 21, 2025

Show HN: I made a Rust Terminal UI for OpenSnitch, a Linux application firewall https://ift.tt/g1kd2MG

Show HN: I made a Rust Terminal UI for OpenSnitch, a Linux application firewall I made a Terminal UI for OpenSnitch[1], an interactive application firewall for Linux inspired by Little Snitch. I’ve always wanted to create a TUI and found the perfect excuse to make this for usage on one of my headless servers. I wrote this in Rust to force myself to learn more, viz. async features. Super open to feedback and contributions! [1] https://ift.tt/g4AOlkT https://ift.tt/yMkhGuS November 22, 2025 at 05:18AM

Show HN: Even Turns, track your family's turns https://ift.tt/B0uv5FG

Show HN: Even Turns, track your family's turns I am a dad and have a hard time keeping track of whose turn it is, so I built this simple app to help, and you can try it out and use it for free! You can create a list, add turns (in order), and advance the turns in sequential or random order. That is pretty much it. I guess a to-do list or something could do something similar, but this is designed with 'taking turns' in mind. It's a PWA, so you can "Add to Homescreen" rather than download an app from the app store. Or use it in your browser. I've been using it every day for a bit now, thought I'd share. https://eventurns.com November 22, 2025 at 12:59AM

Show HN: OCR Arena – A playground for OCR models https://ift.tt/n3mNfq0

Show HN: OCR Arena – A playground for OCR models I built OCR Arena as a free playground for the community to compare leading foundation VLMs and open-source OCR models side-by-side. Upload any doc, measure accuracy, and (optionally) vote for the models on a public leaderboard. It currently has Gemini 3, dots.ocr, DeepSeek, GPT5, olmOCR 2, Qwen, and a few others. If there are any others you'd like included, let me know! https://ift.tt/G8xPvkm November 21, 2025 at 10:14PM

Show HN: City2Graph – Python Open Source for Geospatial Graph Neural Networks https://ift.tt/XYDonO3

Show HN: City2Graph – Python Open Source for Geospatial Graph Neural Networks https://ift.tt/mJ0zuLq November 22, 2025 at 12:10AM

Thursday, November 20, 2025

Wednesday, November 19, 2025

Show HN: F32 – An Extremely Small ESP32 Board https://ift.tt/PYizEMZ

Show HN: F32 – An Extremely Small ESP32 Board As part of a little research, and also for some fun, I decided to try my hand at seeing how small an ESP32 board I could make with functioning WiFi. https://ift.tt/AUtI19D November 20, 2025 at 01:39AM

Show HN: PgEdge Control Plane, a declarative API for multi-region Postgres mgmt https://ift.tt/NRTG03a

Show HN: PgEdge Control Plane, a declarative API for multi-region Postgres mgmt https://ift.tt/2Grv5Ai November 20, 2025 at 02:45AM

Show HN: I made a down detector for down detector https://ift.tt/Fc6R1tT

Show HN: I made a down detector for down detector After down detector went down with the rest of the internet during the Cloudflare outage today I decided to build a robust, independent tool which checks if down detector is down. Enjoy!! https://ift.tt/o3rgBJm November 19, 2025 at 05:35AM

Monday, November 17, 2025

Show HN: ToolHop – Fast, simple utilities for every workflow https://ift.tt/8BS7dDY

Show HN: ToolHop – Fast, simple utilities for every workflow ToolHop is your all-in-one browser toolbox with 200+ fast-loading calculators, converters, generators, color labs, and dev helpers. Use global search or curated categories to jump straight into the right utility, run it client-side for instant feedback, and deep-link results to your team. Whether you’re formatting copy, validating data, checking DNS, or exploring palettes, ToolHop keeps your core workflows a single tab away, and it’s entirely free, no account required. --- I built ToolHop because I was sick of the usual “free tool” bait-and-switch. Every time I needed to convert an image, compress a file, check some text, or run a quick calculation, I’d end up hitting some arbitrary limit like “10 uses per week” or a forced signup wall. It’s ridiculous how something as basic as converting a JPG to a PNG can turn into a subscription pitch. So ToolHop started as a personal frustration project: I wanted a single place with a ton of genuinely useful tools that didn’t nag, lock you out, or throttle you. Over time that grew into 200+ handcrafted tools, all fast, simple, and actually free. No trickery, no timers, no limits. As I built it, the process became about consistency and quality. I wanted the tools to feel seamless, not slapped together. That meant focusing on speed, clean UI, accurate results, and making sure each tool works instantly without friction. The goal was always the same: a site that respects people’s time. Something you can rely on whenever you just need a tool to work. If ToolHop saves someone even a few minutes of hassle, then the project did its job. https://toolhop.app November 17, 2025 at 09:28PM

Sunday, November 16, 2025

Show HN: My side project: a free email template builder for CRMs or any website https://ift.tt/eVJDuBk

Show HN: My side project: a free email template builder for CRMs or any website Hi everyone, I built an embeddable email template builder plugin for CRMs, marketplaces, or any website. Free and paid plans are included. Add a complete email builder to any SaaS app using a single script. What's included: - Easy integration - AI content & template generation - External image libraries - Merge tags - Display conditions - Custom blocks - Choose your own storage server - Dedicated support during integration Check it out, and please let me know if you have any feedback. TIA https://ift.tt/SMVJzrg November 17, 2025 at 03:56AM

Show HN: ResendForward – OS server and UI for use with Resend.com inbound https://ift.tt/4CQRZXp

Show HN: ResendForward – OS server and UI for use with Resend.com inbound With Resend's new inbound feature I wanted to build a simple application that handles processing webhook events and forwarding emails for multiple applications. Right now Resend requires you to implement that logic in each new application. repo - https://ift.tt/wMR1dVm live - https://ift.tt/wDUNt0s Built with react + pocketbase, extremely simple to self host. https://ift.tt/wMR1dVm November 17, 2025 at 12:57AM

Saturday, November 15, 2025

Show HN: ZenPaint, a pixel-perfect MacPaint recreation for the browser https://ift.tt/f7W8zpH

Show HN: ZenPaint, a pixel-perfect MacPaint recreation for the browser I've been recreating the original MacPaint in the browser on and off for a few years. It's still alpha quality, but I'm finally ready to share it more widely. The goal was pixel-perfect accuracy, so I spent a lot of time with Atkinson's original QuickDraw source code, emulators, and my iBook G3 to get details like font rendering and the shape tools exactly right. Some technical notes: - Font rendering was surprisingly tricky; understanding the original pipeline's quirks took lots of experimentation, and avoiding canvas smoothing/aliasing required careful handling. - Written declaratively with React; performance is kept reasonable with a buffer pool and copy-on-write semantics. - You can share links to artwork from within the UI. E.g.: https://ift.tt/HYAsaSB - Mobile support was not considered here (for obvious reasons). It might still be usable on a larger phone or tablet but I have not tested this at all. There's something magical about making art within MacPaint's constraints: the 1-bit graphics, the limited resolution, the peculiar set of tools that still feel surprisingly expressive. Still some rough edges and missing features, but I'd love feedback from anyone who remembers the original. https://zenpaint.org/ November 16, 2025 at 01:21AM

Show HN: An Apache Beam batch processing clone in Rust https://ift.tt/6PbC19B

Show HN: An Apache Beam batch processing clone in Rust I've been experimenting with Apache Beam as of late at work and found that it can be slow in Python, and more complicated to use in Java where performance is better. I decided to experiment with JetBrains' AI Assistant and build an Apache Beam clone in Rust. I appreciate any commentary or feedback! https://ift.tt/HoBSI4T November 16, 2025 at 12:16AM

Show HN: DeepClause – A Neurosymbolic AI System Built on WASM and Prolog https://ift.tt/DrUq69S

Show HN: DeepClause – A Neurosymbolic AI System Built on WASM and Prolog Hi HN, Today I'd like to present the results of my weekend project of the last year or so. Given there are many posts on HN about LLMs and Prolog, I thought that this would be of interest. DeepClause is my own (possibly misguided :-) attempt at combining LLMs with Logic Programming, ultimately hoping to establish a foundation for building more reliable agents, that produce reproducible and fully traceable result. At the heart of DeepClause is a DSL called "DeepClause Meta Language" (DML) which can be used to encode agent behaviors as executable logic programs. DML is executed by a meta-interpreter implemented in Prolog and thus natively supports things like constraint logic programming, knowledge graphs, symbolic reasoning, ... The DML interpreter itself runs inside the SWI Prolog WASM module, thus allowing for a secure and sandboxed execution environment for AI agents. The project is still rough around a lot of edges, but I'd love to get some feedback and comments. https://ift.tt/ZE5nAX3 November 15, 2025 at 07:23PM

Friday, November 14, 2025

Show HN: ByteSync – Open-source hybrid file sync (LAN and remote, E2EE) https://ift.tt/sGQy9Bd

Show HN: ByteSync – Open-source hybrid file sync (LAN and remote, E2EE) Hi everyone, I've been developing ByteSync, an open-source file synchronization, backup and deduplication tool designed to bridge the gap between local and remote sync. In spirit, it's somewhat closer to FreeFileSync, but with an integrated networking layer and end-to-end encryption — which means you can synchronize files between computers on the same LAN or across the internet without VPNs or firewall setup. Everything works transparently through the same interface. The synchronization model is based on DataNodes (which represent repositories, such as servers or NAS devices) and DataSources (the folders or files inside them). A session can include multiple participants, each with one or several DataNodes, and ByteSync handles all comparisons and transfers automatically. To optimize performance, the engine uses a two-stage inventory process: an initial indexation followed by comparisons limited to items that actually changed. This keeps synchronization fast even with large datasets. There's also a flat mode, useful when structure doesn't matter and you just want to compare or align files by name. Currently, ByteSync is focused on interactive synchronization — it's not yet automated or daemon-based (CLI integration is planned). But it's already fully functional for discovering and managing differences between repositories, both local and remote. ByteSync runs on Windows, macOS, and Linux, and the entire codebase is available on GitHub: https://ift.tt/0j7f2oR You can also download binaries and read the documentation here: https://ift.tt/whRzX7e I'd really appreciate feedback and contributors — whether on usability, architecture, or ideas for future features. The goal is to make a solid, privacy-respectful alternative for hybrid file synchronization that remains simple to use and open for everyone. November 14, 2025 at 07:32PM

Thursday, November 13, 2025

Show HN: US Publicly Traded Companies probabilities of default with public data https://ift.tt/UfNeQPF

Show HN: US Publicly Traded Companies probabilities of default with public data https://ift.tt/80er3qS November 14, 2025 at 03:51AM

Show HN: YAML Validator – A simple Docker-based YAML checker https://ift.tt/1f4crnK

Show HN: DBOS Java – Postgres-Backed Durable Workflows https://ift.tt/kqLwPMY

Show HN: DBOS Java – Postgres-Backed Durable Workflows Hi HN - I’m Peter, here with Harry (devhawk), and we’re building DBOS Java, an open-source Java library for durable workflows, backed by Postgres. https://ift.tt/1CAel4s Essentially, DBOS helps you write long-lived, reliable code that can survive failures, restarts, and crashes without losing state or duplicating work. As your workflows run, it checkpoints each step they take in a Postgres database. When a process stops (fails, restarts, or crashes), your program can recover from those checkpoints to restore its exact state and continue from where it left off, as if nothing happened. In practice, this makes it easier to build reliable systems for use cases like AI agents, payments, data synchronization, or anything that takes hours, days, or weeks to complete. Rather than bolting on ad-hoc retry logic and database checkpoints, durable workflows give you one consistent model for ensuring your programs can recover from any failure from exactly where they left off. This library contains all you need to add durable workflows to your program: there's no separate service or orchestrator or any external dependencies except Postgres. Because it's just a library, you can incrementally add it to your projects, and it works out of the box with frameworks like Spring. And because it's built on Postgres, it natively supports all the tooling you're familiar with (backups, GUIs, CLI tools) and works with any Postgres provider. If you want to try it out, check out the quickstart: https://ift.tt/csGKESD We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions. https://ift.tt/1CAel4s November 14, 2025 at 02:03AM

Show HN: LLM fine-tuning without infra or ML expertise (early access) https://ift.tt/mDC8UlM

Show HN: LLM fine-tuning without infra or ML expertise (early access) https://www.tinytune.xyz/ November 14, 2025 at 12:33AM

Wednesday, November 12, 2025

Show HN: Built a tiny interpreter from scratch in C to understand how they work https://ift.tt/7sqSVXp

Show HN: Built a tiny interpreter from scratch in C to understand how they work Hi HN, I'm the author. I built this project for two simple reasons: I've always used higher-level languages and wanted to finally understand what's happening "under the hood" of an interpreter. I also wanted a real project to force me to "power up" my C skills, especially with manual memory management and reference counting. The result is ToyForth, a minimal, stack-based interpreter for a Forth-like language, written from scratch in C. I focused on making the code clean and understandable. It's broken down into a few simple parts: a parser that turns source text into a list of objects (parser.c), a small stack-based virtual machine (main.c), a manual reference counting system (incRef/decRef) to manage object memory (mem.c), and so on. My main goal was learning, but I've tried to document it well in the README.md so it could be a "starter kit" for anyone else who wants to learn by reading a small, complete implementation. It's easy to try out. I'd genuinely appreciate any feedback on my approach or my C code. Here's the link: https://ift.tt/rdRZuH8 https://ift.tt/rdRZuH8 November 13, 2025 at 01:53AM
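
The project itself is C, but a few lines of Python show the architecture it describes: parse tokens, push numbers onto a stack, and let words operate on that stack. ToyForth layers real parsing, an object model, and manual reference counting on top of this idea:

def forth_eval(source: str) -> list[float]:
    stack: list[float] = []
    words = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        ".":   lambda: print(stack.pop()),
    }
    for token in source.split():
        if token in words:
            words[token]()              # execute a known word against the stack
        else:
            stack.append(float(token))  # anything else is pushed as a number
    return stack

forth_eval("2 3 + dup * .")   # prints 25.0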

Show HN: JavaScript Engines Zoo https://ift.tt/iBMHU1c

Show HN: JavaScript Engines Zoo https://ift.tt/yPpTlNJ November 12, 2025 at 09:32PM

Show HN: Cancer diagnosis makes for an interesting RL environment for LLMs https://ift.tt/XWQElKI

Show HN: Cancer diagnosis makes for an interesting RL environment for LLMs Hey HN, this is David from Aluna (YC S24). We work with diagnostic labs to build datasets and evals for oncology tasks. I wanted to share a simple RL environment I built that gives frontier LLMs a set of tools to zoom and pan across a digitized pathology slide to find the relevant regions to make a diagnosis. Here are some videos of the LLM performing diagnosis on a few slides: ( https://www.youtube.com/watch?v=k7ixTWswT5c ): traces of an LLM choosing different regions to view before making a diagnosis on a case of small-cell carcinoma of the lung ( https://youtube.com/watch?v=0cMbqLnKkGU ): traces of an LLM choosing different regions to view before making a diagnosis on a case of benign fibroadenoma of the breast Why I built this: Pathology slides are the backbone of modern cancer diagnosis. Tissue from a biopsy is sliced, stained, and mounted on glass for a pathologist to examine abnormalities. Today, many of these slides are digitized into whole-slide images (WSIs) in TIF or SVS format that are several gigabytes in size. While several pathology-focused AI models exist, I was curious to test whether frontier LLMs can perform well on pathology-based tasks. The main challenge is that WSIs are too large to fit into an LLM’s context window. The standard workaround, splitting them into thousands of smaller tiles, is inefficient for large frontier LLMs. Inspired by how pathologists zoom and pan under a microscope, I built a set of tools that let LLMs control magnification and coordinates, viewing small regions at a time and deciding where to look next. This ended up producing some interesting behaviors and actually seemed to yield pretty good results with prompt engineering: - GPT 5: explored up to ~30 regions before deciding (concurred with an expert pathologist on 4 out of 6 cancer subtyping tasks and 3 out of 5 IHC scoring tasks) - Claude 4.5: typically used 10–15 views but had similar accuracy to GPT-5 (concurred with the pathologist on 3 out of 6 cancer subtyping tasks and 4 out of 5 IHC scoring tasks) - Smaller models (GPT 4o, Claude 3.5 Haiku): examined ~8 frames and were less accurate overall (1 out of 6 cancer subtyping tasks and 1 out of 5 IHC scoring tasks) Obviously, this was a small sample set, so we are working on creating a larger benchmark suite with more cases and types of tasks, but I thought it was cool that this even worked, so I wanted to share with HN! November 12, 2025 at 10:31PM
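
A rough sketch of the kind of zoom/pan tool this describes, reading a region of a whole-slide image with OpenSlide and returning it in a form an LLM tool call can consume. The tile size, coordinate convention, and tool shape are assumptions, not the author's environment:

import base64, io
import openslide  # pip install openslide-python (needs the OpenSlide C library)

slide = openslide.OpenSlide("case_001.svs")

def view_region(x: int, y: int, level: int, size: int = 1024) -> str:
    """Return a base64 PNG of a size x size tile at (x, y) in level-0 coordinates,
    read at the given pyramid level, ready to attach to a tool response."""
    level = max(0, min(level, slide.level_count - 1))
    tile = slide.read_region((x, y), level, (size, size)).convert("RGB")
    buf = io.BytesIO()
    tile.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

# An agent loop would expose view_region as a tool and let the model pick
# (x, y, level) repeatedly before committing to a diagnosis.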

Tuesday, November 11, 2025

Show HN: Data Formulator 0.5 – interactive AI agents for data visualization https://ift.tt/jXBsgkv

Show HN: Data Formulator 0.5 – interactive AI agents for data visualization Hi everyone, we are excited to share with you our new release of Data Formulator. Starting from a dataset, you can communicate with AI agents using UI + natural language to explore data and create visualizations to discover new insights. This builds on our release from a year ago ( https://ift.tt/jv8kIDw ). We spent a year exploring how to blend agent mode with interactions to let you more easily "vibe" with your data while still keeping you in control. We don't think the future of data analysis is just "an agent does it all for you from a high-level prompt" --- you should still be able to drive the open-ended exploration; but we also don't want you to do everything step-by-step. Thus we worked on this "interactive agent mode" for data analysis with some UI innovations. Our new demo features: * We want to let you import (almost) any data easily to get started with exploration — whether it's a screenshot of a web table, an unnormalized Excel table, a table in a chunk of text, a CSV file, or a table in a database, you should be able to load it into the tool easily with a little bit of AI assistance. * We want you to easily choose between agent mode (more automation) and interactive mode (more fine-grained control) as you explore data. We designed an interface of "data threads": both your and the agents' explorations are organized as threads, so you can jump into any point and decide how you want to follow up or revise using UI + NL instructions for fine-grained control. * The results should be easily interpretable. Data Formulator now presents the "concept" behind the code generated by AI agents alongside the code/explanation/data. Plus, you can compose a report easily based on your visualizations to share insights. We are sharing the online demo at https://ift.tt/9nmrh4f for you to try! If you want more involvement and customization, check out our source code https://ift.tt/pfvB3Zj and let's build something together as a community! https://ift.tt/9nmrh4f November 11, 2025 at 11:14PM

Monday, November 10, 2025

Show HN: Tracking AI Code with Git AI https://ift.tt/vnCZqM9

Show HN: Tracking AI Code with Git AI Git AI is a side project I created to track AI-generated code in our repos from development, through PRs, and into production. It doesn't just count lines; it keeps track of them as your code evolves, gets refactored, and the git history gets rewritten. Think 'git blame' but for AI code. There's a lot about how it works in the post, but I wanted to share how it's been impacting me + my team: - I find I review AI code very differently than human code. Being able to see the prompts my colleagues used, what the AI wrote, and where they stepped in to override has been extraordinarily helpful. This is still very manual today, but I hope to build more UI around it soon. - “Why is this here?” — more than once I’ve given my coding agent access to the past prompts that generated the code I’m looking at, which lets the agent know what my colleague was thinking when they made the change. Engineers talk to AI all day now…their prompts are sort of like a log of thoughts :) - I pay a lot of attention to the ratio of lines generated for every one accepted. If it gets up over 4 or 5, it means I’m well outside the AI’s distribution or prompting poorly — either way, it’s good cause for reflection, and I’ve learned a lot about collaborating with LLMs. This has been really fun to build, especially because some amazing contributors who were working on similar projects came together and directed their efforts toward making Git AI shine. We hope you like it. https://ift.tt/0o6sfze November 10, 2025 at 10:56PM

Show HN: Tiny Diffusion – A character-level text diffusion model from scratch https://ift.tt/fi675Q4

Show HN: Tiny Diffusion – A character-level text diffusion model from scratch https://ift.tt/21zq6PQ November 10, 2025 at 08:43PM

Sunday, November 9, 2025

Show HN: DroidDock – A sleek macOS app for browsing Android device files via ADB https://ift.tt/b6hFBku

Show HN: DroidDock – A sleek macOS app for browsing Android device files via ADB Hi HN, I’m Rajiv, a software engineer turned Math teacher living in the mountains, where I like to slow down life while still building useful software. I recently built DroidDock, a lightweight and modern macOS desktop app that lets you browse and manage files on your Android device via ADB. After 12 years in software development, I wanted a free, clean, and efficient tool because existing solutions were either paid, clunky, or bloated. Features include multiple view modes, thumbnail previews for images/videos, intuitive file search, file upload/download, and keyboard shortcuts. The backend uses Rust and Tauri for performance. You can download the latest .dmg from the landing page here: https://rajivm1991.github.io/DroidDock/ Source code is available on GitHub: https://ift.tt/Ag1hp30 I’d appreciate your feedback on usability, missing features, or bugs. Thanks for checking it out! — Rajiv https://rajivm1991.github.io/DroidDock/ November 10, 2025 at 06:21AM

Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer https://ift.tt/ko6eEzy

Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer SQL-first analytic IDE; similar to Redash/Metabase. Aims to solve reuse/composability at the code layer with modified syntax, Trilogy, that includes a semantic layer directly in the SQL-like language. Status: experiment; feedback and contributions welcome! Built to solve 3 problems I have with SQL as my primary iterative analysis language: 1. Adjusting queries/analysis takes a lot of boilerplate. Solve with queries that operate on the semantic layer, not tables. Also eliminates the need for CTEs. 2. Sources of truth change all the time. I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries. 3. Getting from SQL to visuals is too much work in many tools; make it as streamlined as possible. Surprise - solve with the semantic layer; add in more expressive typing to get better defaults;also use it to wire up automatic drilldowns/cross filtering. Supports: bigquery, duckdb, snowflake. Links [1] https://ift.tt/bLDlZK3 (language info) Git links: [Frontend] https://ift.tt/A28MJCP [Language] https://ift.tt/c4WRNzZ Previously: https://ift.tt/e1vQMxc (significant UX/feature reworks since) https://ift.tt/dmLuQ9i https://ift.tt/uOb3nLs November 10, 2025 at 04:56AM

Show HN: I'm a pastor/dev and built a 200M token generative Bible https://ift.tt/Aagoet2

Show HN: I'm a pastor/dev and built a 200M token generative Bible https://ift.tt/5af7HU0 November 10, 2025 at 01:41AM

Saturday, November 8, 2025

Show HN: Livestream of a coding agent controlled by public chat https://ift.tt/FVPu9Nq

Show HN: Livestream of a coding agent controlled by public chat https://ift.tt/fTpeJS0 November 8, 2025 at 10:40PM

Show HN: I built a website to visualize company financial data https://ift.tt/FeqdgyM

Show HN: I built a website to visualize company financial data Hi HN, I built a website myfinsight.com that aims to make complicated company financials easy to understand. The problem: The go-to place for financial data such as revenue, sales, net income is Yahoo finance. However, their data is usually wrong and very limited. The numbers are hard to digest to get insight quickly. There are also numerous websites that provide much better data for a very expensive monthly fee. Solution: a website that provides free diagrams and charts that visualize important financial data, such as income growth rate by date, revenue breakdown etc. It is free because the financial data process is highly automated without manual input and correction. I used to send the finance infographics to friends and family. I found it easier just to make a website and they can grab the data from it. Next steps: there is a long tail of companies that don’t file their reports correctly. I am trying to make it more accurate somehow, and maybe add live stock prices to the website. I am also looking for feedback! Please play around with it and let me know if something is wrong. https://myfinsight.com/ November 9, 2025 at 03:00AM

Show HN: Easily reduce GitHub Actions costs with Ubuntu-slim migration https://ift.tt/NDtIGWx

Show HN: Easily reduce GitHub Actions costs with Ubuntu-slim migration Hi, HN! I've been running GitHub Actions workflows for a while, and when GitHub announced ubuntu-slim runners as a cheaper alternative to ubuntu-latest, I wanted to migrate. (Blog: https://ift.tt/oLG6ZHM... ) But manually checking which workflows can safely migrate is tedious—you need to check for Docker usage, services, containers, execution times, and missing commands. So I built gh-slimify, a GitHub CLI extension that automates this. It scans your workflows, detects migration candidates, checks for incompatible patterns, identifies missing commands, and can safely update workflows with one command. Try it:
gh extension install fchimpan/gh-slimify
gh slimify       # Scan workflows
gh slimify fix   # Update safe jobs only
Open source (MIT). I'd love feedback on how to improve it or what edge cases I might have missed. https://ift.tt/LZgpSWs November 8, 2025 at 10:19PM

Friday, November 7, 2025

Show HN: Three Emojis, a daily word puzzle for language learners https://ift.tt/h5Niovq

Show HN: Three Emojis, a daily word puzzle for language learners I'm in the process of learning German and wanted to play a German version of the NYT’s Spelling Bee. It was awful, I was very bad at it, it was not fun. So I built my own version of Spelling Bee meant for people like me. Three Emojis is a daily word game designed for language learners. You get seven letters and a list of blanked-out words to find. When you discover shorter words, they automatically fill into longer ones—like a crossword—which turns out to be really useful for languages like German. Each word also gets three emojis assigned to it as a clue, created by GPT-5 to try and capture the word’s meaning (this works surprisingly well, most of the time). If you get stuck, you can get text/audio hints as well. It supports German and English, with new puzzles every day. You can flag missing words or suggest additions directly in the game. The word lists include slang, abbreviations, and chat-speak—because those are, in my opinion, a big part of real language learning too (just nothing vulgar, too obscure or obsolete). Every word you find comes with its definition and pronunciation audio. If you want infinite hints or (coming soon) archive access, you can upgrade to Pro. Feedback is very welcome, it's my first game and I'm certainly not a frontend guy. Happy spelling! https://ift.tt/9Ri82yJ November 8, 2025 at 01:06AM

Show HN: A Lightweight Kafka Alternative https://ift.tt/zx8rIR1

Show HN: A Lightweight Kafka Alternative https://ift.tt/yDlbikH November 7, 2025 at 07:28PM

Thursday, November 6, 2025

Show HN: I scraped 3B Goodreads reviews to train a better recommendation model https://ift.tt/bVWEud4

Show HN: I scraped 3B Goodreads reviews to train a better recommendation model Hi everyone, For the past couple of months I've been working on a website with two main features:
- https://book.sv - put in a list of books and get recommendations on what to read next from a model trained on over a billion reviews
- https://ift.tt/EUlNy4e - put in a list of books and find the users on Goodreads who have read them all (if you don't want to be included in these results, you can opt out here: https://ift.tt/dlR2tiv )
Technical info available here: https://ift.tt/CtGqIa4
Note 1: If you only provide one or two books, the model doesn't have a lot to work with and may include a handful of somewhat unrelated popular books in the results. If you want recommendations based on just one book, click the "Similar" button next to the book after adding it to the input book list on the recommendations page.
Note 2: This is uncommon, but if you get an unexpected non-English-titled book in the results, it is probably not a mistake and very likely has an English edition. The "canonical" edition of a book I use for display is whichever one is the most popular, which is usually the English version, but this is not the case for all books, especially those by famous French or Russian authors. https://book.sv November 5, 2025 at 11:20PM
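The post links to the real technical details; purely as a generic illustration (not the model behind book.sv, and with invented data), one of the simplest ways review data like this can drive "what to read next" is item-item co-occurrence scoring:

    # Toy item-item recommender: score candidate books by how often they
    # co-occur with the input books across users' shelves. Illustrative
    # only; the shelves below are made up.
    from collections import Counter
    from itertools import combinations

    shelves = [
        {"dune", "hyperion", "foundation"},
        {"dune", "foundation", "neuromancer"},
        {"hyperion", "neuromancer", "snow crash"},
        {"dune", "hyperion", "snow crash"},
    ]

    cooc = Counter()
    for shelf in shelves:
        for a, b in combinations(sorted(shelf), 2):
            cooc[(a, b)] += 1
            cooc[(b, a)] += 1

    def recommend(liked: set[str], k: int = 3) -> list[tuple[str, int]]:
        scores = Counter()
        for book in liked:
            for (a, b), n in cooc.items():
                if a == book and b not in liked:
                    scores[b] += n
        return scores.most_common(k)

    print(recommend({"dune", "hyperion"}))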

Show HN: DIY accessibility mouse helps people even with complete paralysis https://ift.tt/LH0WFx1

Show HN: DIY accessibility mouse helps people even with complete paralysis This is a DIY, open-source alternative to expensive solutions like the MouthPad, eye-trackers, or even complex systems like Neuralink. Everyone deserves access to assistive technology. https://ift.tt/C1ScXnq November 7, 2025 at 12:01AM

Show HN: TabPFN-2.5 – SOTA foundation model for tabular data https://ift.tt/YDVgNCt

Show HN: TabPFN-2.5 – SOTA foundation model for tabular data I am excited to announce the release of TabPFN-2.5, our tabular foundation model that now scales to datasets of up to 50,000 samples and 2,000 features - a 5x increase from TabPFN v2, published in Nature earlier this year. TabPFN-2.5 delivers state-of-the-art predictions in one forward pass, without hyperparameter tuning, across classification and regression tasks.
What’s new in 2.5: TabPFN-2.5 maintains the core approach of v2 - a pretrained transformer trained on more than a hundred million synthetic datasets to perform in-context learning and output a predictive distribution for the test data. It natively supports missing values, categorical features, text, and numerical features, and is robust to outliers and uninformative features. The major improvements:
- 5x scale increase: Now handles 50,000 samples × 2,000 features (up from 10,000 × 500 in v2)
- SOTA performance: TabPFN-2.5 outperforms tuned tree-based methods and matches the performance of a complex ensemble (AutoGluon 1.4) that itself includes TabPFN v2 and is tuned for 4 hours. Tuning the model improves performance further, outperforming AutoGluon 1.4 on regression tasks.
- Rebuilt API: New REST interface along with a Python SDK with dedicated fit & predict endpoints, making deployment and integration more developer-friendly
- A distillation engine that converts TabPFN-2.5 into a compact MLP or tree ensemble while preserving accuracy and offering low-latency inference
There are still some limitations. The model is designed for datasets up to 50K samples; it can handle larger datasets, but that hasn’t been our focus with TabPFN-2.5. The distillation engine is not yet available through the API, only through licenses (though we do show its performance in the model report). We’re actively working on removing these limitations and intend to release newer models focused on context reasoning, causal inference, graph networks, larger data, and time series. TabPFN-2.5 is available via API and as a package on Hugging Face. Would love for you to try it and give us your feedback!
Model report: https://ift.tt/spA860v...
Package: https://ift.tt/OpShWaR
Client: https://ift.tt/u8xDed4
Docs: https://ift.tt/gAWd4wV
https://ift.tt/WBcAKLE November 6, 2025 at 11:56PM
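For context on the "fit & predict in one forward pass" workflow, the TabPFN v2 Python package exposes a scikit-learn-style classifier; a minimal usage sketch is below. This follows the v2 package interface (from tabpfn import TabPFNClassifier) — the 2.5 package and SDK may differ in details, so check the linked docs:

    # Scikit-learn-style usage, based on the TabPFN v2 Python package;
    # the TabPFN-2.5 interface may differ -- see the official docs.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from tabpfn import TabPFNClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One forward pass, no hyperparameter tuning.
    clf = TabPFNClassifier()
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))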

Wednesday, November 5, 2025

Show HN: JermCAD – A YAML-powered, vibe-coded, browser-based CAD software https://ift.tt/qG7P4Lj

Show HN: JermCAD – A YAML-powered, vibe-coded, browser-based CAD software I had a hard time figuring out CAD software like Fusion, OnShape, etc., and decided to make my own CAD modeling software where I can "program" my models similar to how I think about them in my head. I used Cursor to write 95+% of this, giving it my YAML examples and making it implement the actual code to make those work. It is currently 100% self-hosted and just static HTML/CSS/JS, so it might work without running npm at all. Very few features work currently - basically just modeling a few primitive solids and boolean operations. https://ift.tt/T67tGEy November 5, 2025 at 08:31PM

Tuesday, November 4, 2025

Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes https://ift.tt/0fFp1dq

Show HN: ReadMyMRI DICOM native preprocessor with multi model consensus/ML pipes I'm building ReadMyMRI to solve a problem I kept running into: getting medical imaging data (DICOM files) ready for machine learning without violating HIPAA or losing critical context.
What it does: ReadMyMRI is a preprocessing pipeline that takes raw DICOM medical images (MRIs, CTs, etc.) and:
- Strips all Protected Health Information (PHI) automatically while preserving DICOM metadata integrity
- Compresses images to manageable sizes without destroying diagnostic quality
- Links de-identified scans to user-provided clinical context (symptoms, demographics, outcomes)
- Uses multi-model AI consensus analysis for both consumer-facing second opinions and clinical decision support at the bedside
- Outputs everything into a single dataframe ready for ML training using Daft (Eventual's distributed dataframe library)
Technical approach:
- Built on pydicom for DICOM manipulation
- Uses Pillow/OpenCV for quality-preserving compression
- Daft integration for distributed processing of large medical imaging datasets
- Frontier models for multi-model analysis (still debating this)
What I'm looking for:
- Feedback from anyone working with medical imaging ML
- Edge cases I haven't thought about
- Whether the Daft integration actually makes sense for your use case or if plain pandas would be better
- HIPAA/privacy concerns I am not thinking about
Happy to answer questions about the architecture, HIPAA considerations, or why medical imaging data is such a pain to work with. https://ift.tt/Fvt5Led November 5, 2025 at 04:17AM
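For readers unfamiliar with what "strips PHI while preserving DICOM metadata integrity" looks like at the lowest level, here is a minimal pydicom sketch. This is illustrative only — not ReadMyMRI's pipeline — and a real HIPAA de-identification pass covers far more than these few tags (dates, UIDs, burned-in annotations, etc.):

    # Minimal de-identification sketch with pydicom: blank a few obvious
    # PHI tags and drop private tags, keeping the rest of the dataset.
    # Illustrative only; real de-identification is far more thorough.
    from pydicom import dcmread

    PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                "PatientAddress", "ReferringPhysicianName"]

    def deidentify(in_path: str, out_path: str) -> None:
        ds = dcmread(in_path)
        for tag in PHI_TAGS:
            if tag in ds:
                setattr(ds, tag, "")
        ds.remove_private_tags()   # vendor-specific elements often hold PHI
        ds.save_as(out_path)

    deidentify("scan.dcm", "scan_deid.dcm")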

Show HN: Barcable – We Built Agents That Automatically Load Test Your Back End https://ift.tt/74q1hIN

Show HN: Barcable – We Built Agents That Automatically Load Test Your Back End Hey HN, we’re Iyan and Datta, founders of Barcable. Barcable connects to your backend (HTTP, gRPC, GraphQL) and uses autonomous agents to generate and run load tests directly inside your CI/CD. No configs, no scripts. It scans your repo, understands your API routes, and builds real test scenarios that hit your endpoints with realistic payloads. Docs: https://ift.tt/A3FKUQl
We built this out of frustration. Every team we’ve worked with ran into the same issue: reliability testing never kept up with development speed. Pipelines deploy faster than anyone can validate performance. Most “load tests” are brittle JMeter relics or one-off scripts that rot after the first refactor. Barcable is our attempt to automate that. It:
- Parses your OpenAPI spec or code to discover endpoints automatically
- Generates realistic load tests from PR diffs (no manual scripting)
- Spins up isolated Cloud Run jobs to execute at scale
- Reports latency, throughput, and error breakdowns directly in your dashboard
- Hooks into your CI so tests run autonomously before deploys
Each agent handles a part of the process—discovery, generation, execution, analysis—so testing evolves with your codebase rather than fighting against it. Right now it works best with Dockerized repos. You can onboard from GitHub, explore endpoints, generate tests, run them, and see metrics in a unified dashboard. It’s still a work in progress. We’ll create accounts manually and share credentials with anyone interested in trying it out. We’re keeping access limited for now because of Cloud Run costs. We’re not trying to replace performance engineers, just make it easier for teams to catch regressions and incidents before production without the setup tax. Would love feedback from anyone who’s been burned by flaky load testing pipelines or has solved reliability differently. We’re especially curious about gRPC edge cases and complex auth setups. HN has always been a huge source of inspiration for us, and we’d love to hear how you’d test it, break it, or make it better. — Iyan & Datta https://ift.tt/3hTALdF https://ift.tt/N360JHG November 5, 2025 at 04:55AM
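To make the idea of a "generated load test" concrete, here is a sketch of the shape such a test might take for a single endpoint — concurrent workers, latency percentiles, an error count. This is not Barcable's actual output; the URL and payload are placeholders:

    # Sketch of an auto-generated load test for one endpoint: N concurrent
    # workers, median/p95 latency, error count. Placeholder URL/payload.
    import asyncio, statistics, time
    import httpx

    URL = "https://api.example.com/orders"   # placeholder endpoint
    CONCURRENCY, REQUESTS_PER_WORKER = 20, 50

    async def worker(client: httpx.AsyncClient, latencies: list, errors: list):
        for _ in range(REQUESTS_PER_WORKER):
            start = time.perf_counter()
            try:
                r = await client.post(URL, json={"item": "sku-123", "qty": 1})
                if r.status_code >= 400:
                    errors.append(r.status_code)
            except httpx.HTTPError as exc:
                errors.append(type(exc).__name__)
            latencies.append(time.perf_counter() - start)  # attempt duration

    async def main():
        latencies, errors = [], []
        async with httpx.AsyncClient(timeout=10) as client:
            await asyncio.gather(*(worker(client, latencies, errors)
                                   for _ in range(CONCURRENCY)))
        latencies.sort()
        p95 = latencies[int(0.95 * len(latencies)) - 1]
        print(f"requests={len(latencies)} errors={len(errors)} "
              f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")

    asyncio.run(main())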

Show HN: Agor → Figma for AI Coding (Open Source) https://ift.tt/ao1FYXR

Show HN: Agor → Figma for AI Coding (Open Source) https://agor.live November 4, 2025 at 07:29PM

Sunday, November 2, 2025

Show HN: Chatolia – create, train and deploy your own AI agents https://ift.tt/ciAIJ3Y

Show HN: Chatolia – create, train and deploy your own AI agents Hi everyone, I've built Chatolia, a platform that lets you create your own AI chatbots, train them with your own data, and deploy them to your website. It is super simple to get started:
- Create your agent
- Train it with your data
- Deploy it anywhere
You can start for free; the free plan includes 1 agent and 500 message credits per month. Would love to hear your thoughts. https://ift.tt/w9unMx4 November 3, 2025 at 02:38AM

Show HN: I built a Raspberry Pi webcam to train my dog (using Claude) https://ift.tt/TZ7M64E

Show HN: I built a Raspberry Pi webcam to train my dog (using Claude) Hey HN! I’m a Product Manager and made a DIY doggy cam (using Claude and a Raspberry Pi) to help train my dog with separation anxiety. I wrote up a blog post sharing my experience building this project with AI. https://ift.tt/vg1Axei November 3, 2025 at 05:34AM

Show HN: Give your coding agents the ability to message each other https://ift.tt/LWBirZX

Show HN: Give your coding agents the ability to message each other I submitted this earlier but it didn’t get any traction. But it’s blowing up on Twitter, so I figured I would give it another shot here. The system is quick and easy to setup and works surprisingly well. And it’s not just a fun gimmick; it’s now a core part of my workflow. https://ift.tt/URcyYrJ November 3, 2025 at 03:09AM

Show HN: Carrie, for what Calendly can't do https://ift.tt/dTzJYov

Show HN: Carrie, for what Calendly can't do Hey everyone, Through my career, I've spent too many hours and too much mental energy on busywork like scheduling and following up on people's availability. So, I built Carrie. You simply cc her into your emails, and she sorts out meeting times across time zones, finds what works best for everyone, confirms the meeting and sends the invite. She handles scenarios beyond what Calendly can, and it’s been freeing me up from the back-and-forth of juggling different meeting requests. I’ve been testing this with a beta group of users and am now looking to expand the user pool (please feel free to join the waitlist if you're interested). Would also love feedback on whether this seems useful and what seems to be missing to make this part of your workflow. Thanks! https://getcarrie.com/ November 2, 2025 at 08:10PM

Saturday, November 1, 2025

Show HN: UnisonDB – Log-native KV database that replicates like a message bus https://ift.tt/xa2pPLf

Show HN: UnisonDB – Log-native KV database that replicates like a message bus Hi HN, For the past few months, I’ve been building UnisonDB — a log-native database where the Write-Ahead Log (WAL) is the database, not just a recovery mechanism. I started this because every time I needed data to flow — from core to edge, or between datacenters — I ended up gluing together a KV database + CDC + Kafka. It worked, but it always felt like overkill: too many moving parts for even small workloads, and too little determinism.
What is it? UnisonDB unifies storage and streaming into a single log-based core. Every write is:
• Durable (appended to the WAL),
• Ordered (globally sequenced for safety),
• Streamable (available to any follower in real time).
It combines B+Tree storage (predictable reads, no LSM compaction storms) with WAL-based replication (sub-second fan-out to 100+ nodes).
Key Ideas
1. Storage + Streaming = One System — no CDC, no Kafka, no sidecar pipelines
2. B+Tree-Backed — predictable reads, zero compaction overhead
3. Multi-Model — KV, wide-column, and large objects (LOB) in one atomic transaction
4. Replication-Native — WAL streams via gRPC; followers tail in real time
5. Reactive by Design — every write emits a ZeroMQ notification
6. Edge-Friendly — replicas can go offline and resync instantly
Performance & Tradeoffs
1. Write throughput is lower than pure LSM stores (e.g. BadgerDB) — because writes are globally ordered for replication safety. Deliberate tradeoff: consistency > raw write speed.
2. Still ~2× faster than BoltDB with replication enabled.
Tech Details
- Written in Go
- FlatBuffers for zero-copy serialization
- gRPC for streaming replication
GitHub: https://ift.tt/3zuUEM6 https://unisondb.io November 2, 2025 at 12:31AM
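UnisonDB is written in Go, and the snippet below is not its API — it is only a toy Python illustration of the "WAL is the database" idea: every write becomes an ordered, durable log record, and a follower can tail the same log from a known sequence number to rebuild key-value state. The file format and names here are invented:

    # Toy illustration of "the WAL is the database". Not UnisonDB's API;
    # format and names are invented for illustration.
    import json

    class Wal:
        def __init__(self, path: str):
            self.path = path
            self.seq = 0

        def append(self, key: str, value: str) -> int:
            self.seq += 1
            record = {"seq": self.seq, "key": key, "value": value}
            with open(self.path, "a") as f:
                f.write(json.dumps(record) + "\n")
                f.flush()                      # durability before acking
            return self.seq

    def replay(path: str, after_seq: int = 0) -> dict:
        """A follower tails the log from a known sequence number."""
        state = {}
        with open(path) as f:
            for line in f:
                rec = json.loads(line)
                if rec["seq"] > after_seq:
                    state[rec["key"]] = rec["value"]
        return state

    wal = Wal("unison.toy.log")
    wal.append("user:1", "alice")
    wal.append("user:1", "alice@example.com")
    print(replay("unison.toy.log"))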

Show HN: Just vibe coded a HN TV dashboard https://ift.tt/LzBrVUk

Show HN: Just vibe coded a HN TV dashboard https://ift.tt/61ZQqAp November 2, 2025 at 12:11AM

Show HN: Proxmox-GitOps: Container Automation Framework https://ift.tt/WFrXu2T

Show HN: Proxmox-GitOps: Container Automation Framework By encapsulating infrastructure within an extensible monorepository - recursively resolved from Git submodules at runtime - Proxmox-GitOps provides a comprehensive Infrastructure-as-Code (IaC) abstraction for an entire, automated, container-based infrastructure.
Core Concepts:
- Recursive Self-management: Control plane seeds itself by pushing its monorepository onto a locally bootstrapped instance, triggering a pipeline that recursively provisions the control plane onto PVE.
- Monorepository: Centralizes infrastructure as a comprehensive IaC artifact (for mirroring, like the project itself on GitHub), using submodules for modular composition.
- Single Source of Truth: Git represents the desired infrastructure state.
- Loose coupling: Containers are decoupled from the control plane, enabling runtime replacement and independent operation.
https://ift.tt/zE91ZoF November 1, 2025 at 11:19PM

Show HN: Pion SCTP with RACK is 70% faster with 30% less latency https://ift.tt/AWOqGHm

Show HN: Pion SCTP with RACK is 70% faster with 30% less latency SCTP is a low level protocol focused on reliable packet transmission. Unlik...