Blips

Blip (noun) - a ping of activity on a radar


My Blips are basically my blog. Not to be confused with my Posts, which are long-form and semi-professional articles. Think of Blips as Ethan-flavoured Tumblr. I blip about what I'm up to or about random interesting things I've found that aren't substantial enough to merit a full Post.

~Ethan

I've finally found the solution to an extremely mild inconvenience of mine: Just. It's a command runner that provides a unified interface for project-specific commands.

Recently, I've been working on a lot of different projects, each using a different framework. It's not uncommon for me to switch between projects that use Node, Deno, Zola, and Hugo all within the span of a few minutes. Critically, all of these frameworks use different CLI commands. If all I want to do is build the project, I have to remember which framework I'm using, so I can't just rely on muscle memory. pnpm build works on every Node-based project, but if I try to run pnpm build on a Deno project I'll get an error.

What I really want is to be able to translate my brain's internal "I want to build the project now" into a muscle memory action that types a command that my computer can run. With Just, I create a justfile that contains a list of project-specific recipes. I can just type just build (or just b, which is even shorter) and Just will convert that into the corresponding framework-specific build command (e.g. deno task build).
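
For example, a justfile for a Deno project might look something like this (a made-up sketch, not one of my actual justfiles):

# shorthand, so "just b" works too
alias b := build

# build the site
build:
    deno task build

# run the dev server
dev:
    deno task dev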

This is extremely mildly convenient.

I've just stumbled upon this fascinating project: https://madebyoll.in/posts/world_emulation_via_dnn/demo/. It's a neural world model that lets you explore a neural-rendered version of the Heron Loop Trail in Marymoor Park near Seattle.

According to his article, he trained it by walking around the park for 15 minutes and using his phone's camera and gyroscope to gather 22,814 frame+direction data points. He then used the data to train a 5M-param asymmetric UNet model. The web demo runs inference locally in the browser, which is pretty cool. Training the final model cost around $100 (to rent GPU time), which is much cheaper than I would have thought based on how relatively coherent the output is.

The result is pretty dream-like and the environment will shift even if you don't touch the controls, but it genuinely looks like grainy footage of a real park. It seems to prefer rendering a forest walkway, but you can sometimes make it render a lake by staring up at the sky and then looking down. It won't let you leave the walkway, but it'll still let you walk towards the railing, giving the illusion of motion without ever letting you reach it. I assume that's because it only has training data from the walkway.

Remember a few years ago when this kind of thing was complete sci-fi?

I've just opened this PR on Blips that'll switch it to use server-side rendering. Basically, it'll make new blips show up the moment I publish them rather than having to wait for the entire site to rebuild. However, it doesn't work with GitHub Pages, so I'd have to decommission the "ethmarks.github.io/blips/" url and switch to some other domain, likely hosted on Vercel. I'm not ready to do that yet, so I'm not going to merge the PR just yet.

Netlify Drop is pretty neat. It's a "classic" web hosting provider where, rather than linking a git repo, you just drag & drop (or FTP) a folder full of HTML files. They offer an interesting trial/demo thing where you can deploy a site without logging in, but it gets deleted after an hour. I think that's a really clever way to minimize friction for potential users trying out their service without straining their servers too much.

There's a bug in the Gemini CLI bot on GitHub that makes it argue with itself. If you assign the status/needs-triage label to an issue, the bot will automatically remove it and add a comment explaining that you didn't have permission to add that label. But then the bot will notice that someone (the bot, in this case) removed the label without permission. So it'll add it back and add a comment explaining that the bot didn't have permission to remove the label. And then the bot will notice that someone added the label without permission, so it'll remove it and add a comment explaining that the bot didn't have permission to add that label. Here's an example with over 5,195 back-and-forths: https://github.com/google-gemini/gemini-cli/issues/16723.

I just realized that the ASCII Globe image on my personal website's home page was encoded as a PNG instead of a WebP. It weighed 1.73 megabytes as a PNG and was 3.8x heavier than all other assets on the page combined. When I re-encoded it to a lossless WebP, the size dropped to 5.24 kilobytes, which is less than the size of the HTML document itself. Media codecs matter.
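
If you want to do the same conversion yourself, it's a one-liner with the cwebp tool (filenames are illustrative):

$ cwebp -lossless globe.png -o globe.webp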

I've been experimenting with Mitosis components this afternoon. The core concept is that you write components in Mitosis's custom .lite.tsx language (it's a subset of JSX), and it'll transform the component into basically every other component language. Mitosis's list of supported target languages is genuinely impressive.
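
For reference, a Mitosis component looks roughly like this (a minimal made-up counter, not one of my actual experiments):

// counter.lite.tsx
import { useStore } from '@builder.io/mitosis';

export default function Counter() {
  const state = useStore({ count: 0 });

  return (
    <button onClick={() => (state.count += 1)}>
      Clicked {state.count} times
    </button>
  );
}

You then run it through the Mitosis compiler with your target of choice and out comes the Svelte/React/Vue/whatever version.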

Happy New Year, dear reader!

The first thing I did in 2026 was watch my New Year countdown site reach zero. The second thing I did in 2026 was tell my friends "Happy New Year!" in our group chat (somehow, despite sending the message at 12:00:09, I still wasn't the first person to say 'happy new year'). The third thing I did in 2026 was merge this PR on my personal website that updates the footer to say "©2026". The fourth thing I did in 2026 was write this blip.

I just merged the Thessa rewrite PR: https://github.com/ethmarks/thessa/pull/5. So now, if you visit https://thessa.vercel.app/, you'll see the new Thessa v2 interface made with SvelteKit, powered by Cerebras, and hosted on Vercel. I've been working on this for a while and I think it turned out pretty nice. I also made Thessa's old GitHub Pages URL redirect to the new url on Vercel. Oh and I also updated Thessa's project post on my personal website. I really wanted to finish the refactor before the new year, and I've done so with 20 minutes to spare. Yay!

I made a New Year countdown website! You can view it online here: https://nye-ethmarks.vercel.app/. If you want, here's the GitHub repo. I also made a live chat using Kraa. See you in 2026!

I was gifted There Is No Antimemetics Division for Christmas this year and I've started reading it today. The rewrites are fascinating. Hughes kept the overall structure very close to the original, but he's changed all of the names. The Foundation is now the "Organization". SCPs are now "Unknowns". Wheeler is now "Quinn".

Hughes apparently did this to skirt copyright law. He released the original under a Creative Commons license, but the new version is conventionally copyrighted (the publishing company wants to own the exclusive rights to their books), so he had to alter it enough to make it a legally distinct work. It's the same principle that causes cereal companies to label their knockoff Froot Loops as "Fruit Spins". They can't use the original name, so they invent a new one that's legally different but is a clear and obvious allusion.

I don't think I'd previously encountered a knockoff created by the author of the thing it's knocking off.

I created an account on NPM: https://www.npmjs.com/~ethmarks. I did this because I'm considering formalizing the external stylesheets and web components architecture for my various sites, which are styled to look exactly the same as my main site but live in different repos and share none of the same code. Publishing my stylesheets and components as an NPM package seemed like a good way to accomplish that, plus tinkering with NPM sounds interesting.

I spent most of today remaking Thessa in SvelteKit. I started the reimplementation last week, but today was when I really started working on it in earnest. I'm using a custom server-side API with Vercel for the LLM inference, which is the first time I've really built anything on the server side. Thessa v1 used free APIs on the client side, but v2 (the refactor) uses a custom endpoint with the Cerebras API and my private API key. So hopefully v2's LLM inference will be faster, more reliable, and more capable. v2 has feature parity with v1 right now (plus a much nicer UI), but I'm not merging it until I rewrite the README and iron out the remaining bugs.
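
The server-side part is smaller than I expected. The endpoint looks roughly like this (a simplified sketch, not Thessa's actual code; the route path, model name, and response shape are placeholders):

// src/routes/api/generate/+server.ts (simplified sketch)
import { json } from '@sveltejs/kit';
import { CEREBRAS_API_KEY } from '$env/static/private';

export async function POST({ request }) {
  const { prompt } = await request.json();

  // Cerebras exposes an OpenAI-compatible chat completions endpoint
  const res = await fetch('https://api.cerebras.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${CEREBRAS_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'llama3.1-8b', // placeholder model name
      messages: [{ role: 'user', content: prompt }]
    })
  });

  const data = await res.json();
  return json({ text: data.choices[0].message.content });
}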

Update on my Waterman Butterfly post's search rankings via DuckDuckGo: I'm now number two. Directly below the Wikipedia article. Same via Bing. This is delightful.

I've been informed via an email from a reader (hi, Johann!) that my post about the Waterman Butterfly map is the third search result on the entire internet for "waterman butterfly" via DuckDuckGo. My post ranks right below the official Waterman website created by Steven Waterman himself. My post somehow ranks above Jason Davies's article about the Waterman, which was one of my sources. Uh. Why? What did I do to make DuckDuckGo's algorithm like me so much? This is bizarre but delightful.

I meant to make a Christmas countdown webpage earlier, but forgot until today. With less than 4 hours left, I finished and published it. Here it is: https://xmas-ethmarks.vercel.app/.

I wrote a project post for fphf on my personal website: https://ethmarks.github.io/posts/fphf/. I tried to make it accessible, so I spent like 40% of it just explaining SHA-256 and fixed-point hashes. I think it turned out pretty well. And most importantly I didn't wait 2 months after publishing the project itself to write the post like I did with Blips.

I stumbled upon this deep dive into the Google Photos web layout: https://medium.com/google-design/google-photos-45b714dfbed1 written by a Google employee. It's nearly seven thousand words of detailed technical explanation, complete with screenshots and graphs and animations. Fascinating.

I just published a Post on my personal website about Blips: https://ethmarks.github.io/posts/blips/. This was long overdue because I published Blips on October 7th and then inexplicably procrastinated on writing a Post about it for over two months. Better late than never, I guess.

Look now toward heaven, and tell the stars, if thou be able to number them...

Genesis 15:5

Because of modern light pollution, this is actually pretty easy. Seven. There are seven stars. Next question.

I checked out my AI-generated "HN Wrapped" for 2025: https://hn-wrapped.kadoa.com/ethmarks.

It seems to think that I'm some kind of detail-obsessed super-pedant. Personally, I think this is ridiculous. "super" is a Latin stem meaning "beyond", which implies that I've transcended the qualities of pedantry. A better term would be 'pluri-pedant', which denotes someone who is exceptionally punctilious while still remaining within the bounds of being pedantic.

Anyways, I thought that the xkcd-style comic that it generated was pretty funny. A stick figure announces that they're "sending a quick message", to which the stick figure representing me replies "you mean you'll initiate a data transfer sequence via a haptic interface device, requiring 4.2 joules of bio-energy to depress the 'enter' key". The caption below the comic says "he's still calculating the thermodynamic cost of the eye-roll that followed".

It had been bugging me for a while, but I finally figured out how to make GitHub realize that the Blips repo is a Svelte project, not a JavaScript one. Because of the number of SvelteKit-related files that have the .js file extension, GitHub thought that there were more JavaScript files in the repo than there were Svelte files and was marking it as a JavaScript project. So I created a .gitattributes file for Blips that makes GitHub treat the SvelteKit .js files as Svelte files. The docs on how to do this are here.
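
For reference, the kind of override involved is tiny; something like this (a guess at the pattern, not necessarily the exact contents of the file in the repo):

# count SvelteKit's .js config/boilerplate files as Svelte for language stats
*.js linguist-language=Svelte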

Lit components seem pretty interesting. I haven't really seen many component libraries that make use of Web Components. Most of them, like Svelte and React, provide component-ey functionality via custom DOM manipulation, not by using the built-in component APIs.

I used to use Web Components for my personal website's header and footer (link to the code here), but I switched to Hugo partials a few months ago.
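
For anyone unfamiliar, the built-in API really is minimal. A native footer component is basically just this (an illustrative sketch, not my old code):

// define a custom element once...
class SiteFooter extends HTMLElement {
  connectedCallback() {
    // runs whenever a <site-footer> element lands in the DOM
    this.innerHTML = '<footer><p>Thanks for reading!</p></footer>';
  }
}
customElements.define('site-footer', SiteFooter);

...and then you can drop <site-footer></site-footer> into any page that loads that script.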

I created fphf this evening. It's a tool that finds fixed-point hashes, which are strings that contain part of their own SHA-256 hash.

Here's an example: "Hello, dear readers of Blips! Hash prefix: 39479fbe."

You can verify it like so:

$ printf 'Hello, dear readers of Blips! Hash prefix: 39479fbe.' | sha256sum
39479fbe1f559d2ced86049491f3625d9d281ed0a43390737d76f7291b92d55b  -

If you don't understand why this is cool: basically, if you change anything about the string, you get a completely different hash. For example, here's what happens when you make the first letter lowercase.

$ printf 'hello, dear readers of Blips! Hash prefix: 39479fbe.' | sha256sum
a55a6077531d73f8c8df3264ad5501bf757e99593efb9761ff46ce3bfed41045  -

It's a completely different hash! The statement is no longer true because the hash doesn't start with "39479fbe" anymore.

The only way to find strings that accurately contain part of their own hash is by randomly guessing. A lot. To find the "Hello, dear readers..." string, I had to check 2,140,879,506 (over 2 billion) hashes. That's a few more than I'm willing to check by hand, which is why I created fphf to do it for me.
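
The core loop is dead simple. Something like this (a sketch of the idea, not fphf's actual code; the message template and 8-character prefix are just the ones from the example above):

import { createHash } from 'node:crypto';

const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

// Try every 8-hex-character prefix, embed it in the message, and check
// whether the message's real hash actually starts with that prefix.
for (let n = 0; n < 0x100000000; n++) {
  const guess = n.toString(16).padStart(8, '0');
  const message = `Hello, dear readers of Blips! Hash prefix: ${guess}.`;
  if (sha256(message).startsWith(guess)) {
    console.log(message);
    break;
  }
}
// If the loop exhausts every prefix without a hit (this happens roughly a
// third of the time for a fixed template), tweak the wording and try again.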

You can read the code here, or you can read the fairly comprehensive README that I wrote if you'd like to learn more.

Here's A Connecticut Yankee in King Arthur's Court fed through Susam Pal's Markov babbler, mvs.

curl https://www.gutenberg.org/cache/epub/86/pg86.txt -s | uv run https://raw.githubusercontent.com/susam/mvs/refs/heads/main/mvs

fence a hundred years wages will have risen to six times what they may. I could only shake my head, and she was acting like a candle, and was just fact, and not to ask that of a select few--peasants of that sort of thing which of them fell by the lord himself or stand vacated; and that together; that all revolutions that will not endure that work in the east, and wend thitherward, ye shall see them treated in ways unbecoming their rank. The troublesomest old sow of the fashion. Secondly, these missionaries would gradually, and at times we

I guess this is what CYiKAC would have been like if Mark Twain wrote the whole thing while sleepy. You might even say that he was acting like a candle.

My personal website's SCSS to CSS migration saga has come to an end.

I've blipped about this a few times over the past couple months, and I've put a great deal of thought into it. Basically, this all started because I realized that my federated sites (sites that import my main site's compiled stylesheets) were unable to access design tokens like colours and fonts. This is because I was using SCSS variables to store my design tokens, and SCSS variables get compiled away at build time.

I initially tried to fix this with PostCSS, but I ended that experiment when I realized that Hugo doesn't integrate well with PostCSS (I blipped about this on November 30th). After PostCSS failed, I decided to just switch from SCSS variables to CSS custom properties. This approach worked perfectly and solved the problem.

However, I decided that I wasn't finished with my conquest against SCSS, and I've spent the last 9 days trying to fully switch from SCSS to CSS. The caveat is that I insist on using inline bundled CSS imports, which SCSS supports but vanilla CSS doesn't. So I decided to emulate it using a custom Hugo partial that recursively calls itself.

This approach, though clever, brought its own slew of problems: it made the build process much more complex and fragile, it broke Hugo's live server updates, and it made my code less portable, because even though vanilla CSS is more portable than SCSS, my CSS only worked with the custom Hugo partial. Soon after I noticed these problems, I also realized that switching away from SCSS didn't really provide any advantage to counterbalance them, other than the aesthetic satisfaction of using vanilla CSS.

So I closed the PR and settled on the compromisey middle ground. I'm not switching back to SCSS variables, but I am going to stick with SCSS preprocessing. I think this is the most pragmatic approach.

This was a fun experiment, but it just didn't work out. If you'd like to check out the end result, you can view the css-switch branch here. I also deployed the css-switch branch to Vercel here, if you'd like to see the rendered version (note: this link will probably break at some point in the future because Vercel only gives you one public preview link at a time).

I tried out Vercel's v0 this evening. It's an "AI-powered development platform that turns ideas into production-ready, full-stack web apps". In other words, it's a tool that creates websites out of a natural language prompt. I thought I'd dislike it, but the demos that they listed (like this one) looked genuinely impressive. So I gave it a try and requested the following joke website:

the landing page for a company called Salty Recycling that sells salt harvested from salt shakers

After 53 seconds, it responded with this. It's honestly kind of disappointing. It's just a fairly generic React app. The design is tasteful, the colour palette is cohesive, and the layout is logical, but it's pretty bland. Also, the text overflows on small screens, and if you hover over certain buttons the text becomes unreadable due to contrast issues. My overall impression is "meh".

I can definitely see this being useful for making mockups or random little web apps like Simon Willison's svg-render thing, but beyond that it just doesn't seem all that useful to me.

I just pushed this update to my personal website that phases out SCSS. I replaced all SCSS variables with CSS custom properties, I replaced all SCSS functions with color-mix, and I replaced all usages of SCSS mixins with the mixin's content.
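
Here's roughly what the conversion looks like (made-up token values; the color-mix line stands in for the old darken() call):

/* before (SCSS): the token's value vanishes at build time
     $accent: #7c4dff;
     a { color: darken($accent, 10%); }
   after (plain CSS): the token survives into the compiled stylesheet */
:root {
  --accent: #7c4dff;
}
a {
  color: color-mix(in srgb, var(--accent) 90%, black);
}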

I did this to prepare for a full conversion to vanilla CSS. My reasons for doing so are thus.

  1. I need my federated sites (sub-sites that import my main site's compiled stylesheets) to be able to access my design tokens like colours and fonts, and SCSS variables compile those tokens away
  2. Working within the limitations of vanilla CSS is more interesting in my opinion
  3. A major refactor sounded fun

The only SCSS features that my site currently uses are the SCSS nesting polyfill and the inline imports. In order to switch to CSS, I'll need to solve my dependency on these two features.

The SCSS nesting dependency can be solved by ignoring it. Modern CSS natively supports nesting, although it's not Baseline Widely Available yet. CSS nesting became newly available across browsers in January of 2024, but it'll only reach Widely Available status 30 months later, in July 2026. Until then, you aren't technically supposed to use it. I'm going to use it anyways because it's not like my personal website is a critical piece of infrastructure. I think that it's acceptable if my site styling breaks when viewed on a browser from 2023.

Vis-a-vis the inline imports, I have a solution planned. Native CSS imports are terrible because they are runtime imports that create dependency trees and massively slow down page loads. Instead, I'll use a custom Hugo template. I already have a Hugo partial that takes a SCSS filename as input, transpiles it to CSS, minifies it, fingerprints it, and outputs a <link rel="stylesheet"> line. I can modify this pipeline to use a regex to match all @import statements and recursively call itself to fetch the content of the referenced CSS file and insert it in place of the at-import statement. This way I can replicate the basic functionality of SCSS's inline imports while still using vanilla CSS. This approach will be a bit brittle and it'll break if it encounters circular dependencies or relative paths. My solution is to just be careful to not code a circular dependency and to make all import paths relative to the base css directory. If it works it works.
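
In Hugo template terms, the idea is roughly this (a speculative sketch of the recursive partial, not the code I'll actually end up shipping):

{{/* layouts/partials/inline-css.html: takes a CSS path, returns fully-inlined CSS */}}
{{ $path := . }}
{{ $css := (resources.Get $path).Content }}
{{/* for each @import statement, recursively inline the file it points to */}}
{{ range findRE `@import "[^"]+";` $css }}
  {{ $child := replaceRE `@import "([^"]+)";` "$1" . }}
  {{ $inlined := partial "inline-css.html" (printf "css/%s" $child) }}
  {{ $css = replace $css . $inlined }}
{{ end }}
{{ return $css }}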

Anyways, that's what I've been up to for the past two days.

I just pushed this update to Blips that adds an RSS feed. I couldn't do this before because the blips used to be dynamically fetched at runtime, but I switched Blips to use server-side prerendering a couple weeks ago. So now it can have an RSS feed. And so now it does. Here's the link: https://ethmarks.github.io/blips/rss.xml.

This made its way to the front page of HN earlier today: Kraa, a free online markdown editor. I like it.

The shareability is especially cool: each leaf (Kraa's word for a note) gets a URL that anybody can edit (provided that the leaf owner enables anonymous editing) without creating an account. It's basically Google Docs but with a sleeker and nicer interface.

It also has a "real-real-time chat" widget that you can add to any leaf. There's no 'send' button, and all messages are visible in real time as they're being typed. I didn't think that I would like this, but it was actually pretty fun to talk to the other people in the HN chatroom demo and see people draft and revise messages in real time. As is to be expected from an uncensored anonymous chatroom, there was a non-zero amount of trolling and toxicity. The moderator is doing an incredible job immediately taking things down, though.

Delightfully, because Kraa is a very new service that launched a little over a week ago, not all of the xkcd Namespace Land Rush usernames have been taken. I've personally managed to snag canada and administrator. Some of the ones like user and nasa aren't available because of the six-character minimum. As of the time of writing, google, facebook, username, iphone, bitcoin, and tons of common first names are still available. Alas, ethan is below the character minimum.

I'm a teeny bit wary about the long-term financial stability of Kraa. Running a no-login CRDT editor can't be cheap, and I don't know how much capital they have. The devs said that they plan to add a paid tier in 2026 that includes a larger image storage quota, but until then it's completely unpaid. It does have a nice .md export feature, though, so there's no lock-in.

Overall, it's a very cool app. I'm not switching from Obsidian for personal knowledge management, but Kraa very well might replace Apostrophe as my preferred editor for one-off Markdown notes.

Here are some use cases for Kraa off the top of my head:

  • meeting scratchpad
  • collaborative to-do list
  • Pastebin alternative
  • live-blogging platform
  • anonymous poll
  • Q&A
  • guestbook
  • collaborative art thing kind of like r/place

Here's a guestbook I set up on Kraa. Feel free to stop by and leave a message: https://kraa.io/blips-guestbook

I've spent the last hour trying to convert my personal website from SCSS to PostCSS, which is something I've been meaning to do since October.

I have changed my mind.

PostCSS is incredibly finicky and prone to errors, it breaks Hugo's live preview, and it increases build times from 131ms to over 5202ms. I've tried to fix these problems and failed. It's just not worth it.

Switching from SCSS variables to CSS custom properties is still something that needs to happen for the sake of my federated sites (sub-sites that import my main site's stylesheets), but PostCSS is definitely not the way that I'm going to do it. I'll either just use custom properties in SCSS or I'll migrate to vanilla CSS. The only things that make me hesitate about switching to vanilla CSS are the mixins and the inline imports. Vanilla CSS supports neither of these. Mixins are negotiable, but inline imports are not.

I'm honestly considering writing some black magic Hugo templates to automatically concatenate vanilla CSS files from the @import rules. It wouldn't be any less advanced than my current setup (I'm using SCSS import, not SCSS use) and probably wouldn't even be too difficult to program; just a regex, some recursively called sub-templates, and resources.FromString. Hmm. I'll look into this.

I don't know what it is, but uncancelled units of energy and power irrationally irritate me. Watts are a unit of power, and joules are a unit of energy. Watt-hours are a unit of energy, so they should be measured in joules.

Even worse, some people use watt-hours per hour (watts times hours divided by hours), which just equals watts. It's analogous to a mathematical formula that includes the step "multiply by 10" immediately followed by the step "divide by 10". Maddening.

Relevant xkcd: https://xkcd.com/3038/

Why is Docker such a pain to install on Linux? I was warned against using Docker Desktop so I installed Docker Engine. But then I had to restart the daemon a bunch and clear Docker's state and modify my user permissions. I wouldn't say it was frustrating, but it was far more friction than I was expecting from such a ubiquitous developer tool.

It's odd how package.json is almost universally seen as a Node.js thing even though it works perfectly fine as a general-purpose, language-agnostic project metadata file. It includes the project name, author, description, license, and repository URL, all in a standardized and structured format.
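
Nothing about a file like this is Node-specific (all of the values here are made up):

{
  "name": "example-project",
  "version": "1.0.0",
  "description": "A general-purpose example of project metadata",
  "author": "Ethan",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://example.com/example-project.git"
  }
}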

It occurred to me earlier tonight that my personal website's GitHub repository still didn't have a README or a license, so I added both of those things and also did a bit of refactoring: https://github.com/ethmarks/ethmarks.github.io/pull/35. I also added a package.json because I plan on using PostCSS soon.

I spent most of this morning working on a research paper for my English class. I usually do Microsoft Office stuff on my Surface laptop that runs Windows, but I had a bunch of research tabs open on my Linux laptop and I decided to just use the Word web app instead of sending each and every tab to my Surface. What I forgot is that the Word web app is a horrible buggy mess that simply doesn't support some features, has frequent rendering issues, and uses different keybinds than the native app. They did an absolutely fantastic job with vscode.dev; why couldn't Microsoft put the same level of effort into their other apps? It's not even a platform-exclusivity thing, because they developed a MacOS port of Word. They clearly don't mind non-Windows users using Word; they just, for some reason, can't be bothered to fix their web port.

I've been made aware that ch.at, the LLM API provider I've been using for Thessa, keeps having uptime issues which cause Thessa to stop working. I just coded and pushed an update to Thessa that makes the code try to use LLM7 (another no-auth LLM endpoint) first, and if that fails it tries to use ch.at. Hopefully they won't both go offline at the same time.
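
The fallback logic itself is nothing fancy. Roughly this (a sketch, not Thessa's actual code; the two provider functions are stand-ins for the LLM7 and ch.at wrappers):

type Provider = (prompt: string) => Promise<string>;

async function completeWithFallback(
  prompt: string,
  primary: Provider,  // LLM7 wrapper
  fallback: Provider  // ch.at wrapper
): Promise<string> {
  try {
    return await primary(prompt);
  } catch {
    // primary is down or threw an error, so try the backup
    return await fallback(prompt);
  }
}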

Blips is at 100 commits exactly right now. Only 28 to go until a big round-number milestone.

I just spent the last two hours writing the README for Blips on one monitor and also doing physics homework on another. Whenever I got stuck on word choice in the README, I'd switch over to homework, and whenever I got stuck on how to approach a physics problem, I'd switch back to the README. It was surprisingly productive, except for the couple of times I spaced out and started rambling about gravitational potential energy in the SvelteKit section.

I was testing what happens if I trigger the webhook while a deployment job is already being executed. As expected, it cancels the in-progress job and focuses on the newer job.

What's odd is that it also sends an email to every single one of my email addresses alerting me that the job was cancelled. I wonder if I can disable that. I don't want to be alerted about intentional, expected behavior that doesn't require any action from me. There should be a way to 'quiet fail' a workflow so that it gets cancelled but doesn't email me.

The webhook works! When I published the previous Blip, it automatically and instantly triggered a GitHub Action to rebuild the site. So with today's update to Blips, the page loads faster for you, dear reader, and it's not any more labour-intensive for me than it was before. Yay!

I managed to get the webhook to fire, but only after creating a PAT, and only via a manual REST request from my local machine. Now I just have to test if Sanity will fire the webhook automatically when I publish new Blips.
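
A manual trigger like that looks roughly like this (assuming a repository_dispatch-style event; the event type and repo path are placeholders):

$ curl -X POST \
    -H "Authorization: Bearer $GITHUB_PAT" \
    -H "Accept: application/vnd.github+json" \
    https://api.github.com/repos/OWNER/REPO/dispatches \
    -d '{"event_type": "sanity-publish"}'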

I just pushed an update to Blips that makes it prerender content at build time rather than fetching on the client side. Basically, it makes the page load faster but means that it'll take about 30 seconds for changes to show up. It also significantly complicates the deployment pipeline because now I have to manage webhooks. I'm testing the webhook now.

The one-electron universe is the hypothesis that all electrons and positrons are actually manifestations of a single entity moving backwards and forwards in time. It was proposed by theoretical physicist John Wheeler in a telephone call to Richard Feynman in the spring of 1940.

One-electron universe on Wikipedia

Snap just uninstalled Firefox, a root level package, from my computer and reinstalled it as a Snap. Again. If you remember from when I blipped about this on Oct 28, Snap has done this before. Once again, I am outraged.

My favorite part of Linux is that it respects my agency. Linux doesn't have Windows's "remind me later" buttons, nor does it have MacOS's Gatekeeper/SIP. Linux lets you say "no" to things and it lets you run whatever code you like. In my experience, the only exception to this is Snap. Nothing other than Snap has uninstalled things without my permission. This is a violation of user trust and user agency and is unacceptable.

Thankfully, rm -rf ~/snap still works (you also have to uninstall snapd and whatnot but you get the idea). I didn't want to have to fully uninstall Snap, but I am out of patience. If I wanted an operating system that forces software on me, I'd buy a Mac.

That's not a jab at Apple; Macs are fairly inexpensive, have powerful hardware, have pretty good software, and have almost universal support. The reason that I'm on Linux is that I don't want software forced on me.

I gave Snap three chances to respect my explicit uninstallation of Snap-Firefox, and it gleefully burned through them. I'm slightly concerned that by uninstalling Snap I might have broken something critical to my OS, but if I did then that just gives me an excuse to switch away from an Ubuntu-based distro.

The hardcover of There Is No Antimemetics Division by Sam Hughes (qntm) released today.

I read TINAD V1 about a year ago, but the new hardcover version is a major rewrite that includes new content and stands apart from the SCP universe. TINAD V1 was one of the best sci-fi stories I've ever read. It's just phenomenally clever and very well written.

I've also read and enjoyed one of Hughes's other books, Valuable Humans in Transit, and I plan on reading Ra at some point. Cerebral sci-fi is a genre that Hughes is really good at.

Also, sidenote, TINAD is the source of my old username, "ColourlessSpearmint". One of the chapters in V1 is titled "CASE COLOURLESS GREEN". This is a subtle and very clever double-reference to both Noam Chomsky's "Colorless green ideas sleep furiously" quote and to Charles Stross's "CASE NIGHTMARE GREEN" scenario. I shamelessly stole Hughes's chapter title because it was witty and sounded cool, and I replaced "green" with "spearmint" to add a bit of uniqueness.

Anyways, I have multiple thousands of pages of sci-fi in my reading queue right now (including the Foundation series, the Three Body Problem series, and the Mars series), so I don't plan on buying TINAD immediately, but I have high hopes for it when I eventually get around to reading it.

I wrote a little Python script this afternoon to extract HN items into a human-readable format. Here's the link: https://gist.github.com/ethmarks/066e7df25f50dd3a53259cc5a72e34ba

It has PEP 722 metadata, so you can just run it with uv. I've aliased it to hn on my computer.

Here's an example usage:

uv run https://gist.githubusercontent.com/ethmarks/066e7df25f50dd3a53259cc5a72e34ba/raw/extract_hn.py \
https://news.ycombinator.com/item?id=44849129

I've just stumbled upon ch.at. Basically, it's a zero-authentication AI service. It's very bare-bones; no images, no model selection, not even conversations. Just a text query and a text response. But it's free and publicly available. I found the ch.at HN discussion and the developer stated that "It has not been expensive to operate so far. If it ever changes we can think about rate limiting it". What a generous service! This is super useful for little automation scripts.

I bet I could use it as the AI provider for Thessa. Thessa is a static site, so the IP rate limiting isn't a problem (each user uses their own IP rather than everything being routed through my server). And the low-traffic and not-for-profit nature of Thessa means that it won't be taking advantage of their generosity, so no ethical concerns. It's a much better solution than my current "Bring your own Gemini API key or else you can't use it lol" approach. I'll look into this later today.

I've just discovered the Charm company. They're the ones who developed several CLI and TUI utilities that I interact with regularly. They make great software, but their main tactic seems to be attention-getting design and an energetic tone. I think they're just trying to be memorable (there are lots of other tools that do basically the same things), and for what it's worth they're doing a really good job.

Their website, demo videos, and even the software itself are all very colourful and contain lots of animations and clever designs. For example, look at this promotional video for their AI agent, Crush: https://charm.land/crush-promo.fd990f87ae513e1e.webm. They're listing the words that they've chosen to define Crush: 'Smarter', 'Faster', and 'Glamour'. Between 'Faster' and 'Glamour', about 11 seconds into the video, they smoothly draw an ampersand (&) in the negative space of a gradient, filling the positive space with a glimpse of the agent editing some code. They could have just written "and" or they could have skipped the conjunction entirely, but instead they dedicated an entire half-second to an ampersand because ampersands are pretty. The attention to detail is impressive.

Every project's README adopts the same playful witty tone. When listing the package manager installation instructions, they write "Arch Linux (btw): yay -S crush-bin". For the uninitiated, this is a reference to the "I use arch btw" meme. Also, in big yellow text, they state "Warning - Productivity may increase when using Crush". They could have easily overdone these subtle jokes, but I think that they strike a good balance. The bits of personality intermixed with the fairly well-written documentation are pretty charming. They should name their company after that or something.

Kind of crazy that the internet has been around long enough to have witnessed major geopolitical shifts. For example, when ccTLDs first started being registered to countries in 1985, East Germany received .dd. After the reunification of Germany in 1990, it switched over to .de, leaving .dd unused. Likewise, .cs was originally used by Czechoslovakia until it split into the Czech Republic (.cz) and Slovakia (.sk). Cold War-era countries having their own ccTLDs kind of feels like Napoleon having an email address.

Showing the latest 50 blips. Older ones are off-radar...