Blips

Blip (noun) - a ping of activity on a radar


My Blips are basically my blog. Not to be confused with my Posts, which are long-form and semi-professional articles. Think of Blips as Ethan-flavoured Tumblr. I blip about what I'm up to or about random interesting things I've found that aren't substantial enough to merit a full Post.

~Ethan

I wrote a project post for fphf on my personal website: https://ethmarks.github.io/posts/fphf/. I tried to make it accessible, so I spent like 40% of it just explaining SHA-256 and fixed-point hashes. I think it turned out pretty well. And most importantly I didn't wait 2 months after publishing the project itself to write the post like I did with Blips.

I stumbled upon this deep dive into the Google Photos web layout, written by a Google employee: https://medium.com/google-design/google-photos-45b714dfbed1. It's nearly seven thousand words of detailed technical explanation, complete with screenshots and graphs and animations. Fascinating.

I just published a Post on my personal website about Blips: https://ethmarks.github.io/posts/blips/. This was long overdue because I published Blips on October 7th and then inexplicably procrastinated on writing a Post about it for over two months. Better late than never, I guess.

Look now toward heaven, and tell the stars, if thou be able to number them...

Genesis 15:5

Because of modern light pollution, this is actually pretty easy. Seven. There are seven stars. Next question.

I checked out my AI-generated "HN Wrapped" for 2025: https://hn-wrapped.kadoa.com/ethmarks.

It seems to think that I'm some kind of detail-obsessed super-pedant. Personally, I think this is ridiculous. "super" is a Latin stem meaning "beyond", which implies that I've transcended the qualities of pedantry. A better term would be 'pluri-pedant', which denotes someone who is exceptionally punctilious while still remaining within the bounds of being pedantic.

Anyways, I thought that the xkcd-style comic that it generated was pretty funny. A stick figure announces that they're "sending a quick message", to which the stick figure representing me replies "you mean you'll initiate a data transfer sequence via a haptic interface device, requiring 4.2 joules of bio-energy to depress the 'enter' key". The caption below the comic says "he's still calculating the thermodynamic cost of the eye-roll that followed".

It had been bugging me for a while, but I finally figured out how to make GitHub realize that the Blips repo is a Svelte project, not a JavaScript one. Because of the number of SvelteKit-related files that have the .js file extension, GitHub thought that there were more JavaScript files in the repo than there were Svelte files and was marking it as a JavaScript project. So I created a .gitattributes file for Blips that makes GitHub treat the SvelteKit .js files as Svelte files. The docs on how to do this are here.
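
For reference, a Linguist override in .gitattributes looks something like this (a simplified sketch, not necessarily the exact contents of my file):

# Count SvelteKit's generated .js config files as Svelte instead of JavaScript
*.config.js linguist-language=Svelte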

Lit components seem pretty interesting. I haven't really seen many component libraries that make use of Web Components. Most frameworks, like Svelte and React, provide component-ey functionality via their own rendering and DOM manipulation, not by using the built-in component APIs.

I used to use Web Components for my personal website's header and footer (link to the code here), but I switched to Hugo partials a few months ago.
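
For anyone unfamiliar with the built-in API, a bare-bones custom element looks roughly like this (a sketch, not my old header/footer code):

// Register a <site-footer> element using the browser's built-in component API.
class SiteFooter extends HTMLElement {
  connectedCallback() {
    // Runs when the element is attached to the document.
    this.innerHTML = "<footer><p>Thanks for reading!</p></footer>";
  }
}
customElements.define("site-footer", SiteFooter);
// Then anywhere in HTML: <site-footer></site-footer>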

I created fphf this evening. It's a tool that finds fixed-point hashes, which are strings that contain part of their own SHA-256 hash.

Here's an example: "Hello, dear readers of Blips! Hash prefix: 39479fbe."

You can verify it like so:

$ printf 'Hello, dear readers of Blips! Hash prefix: 39479fbe.' | sha256sum
39479fbe1f559d2ced86049491f3625d9d281ed0a43390737d76f7291b92d55b  -

If you don't understand why this is cool: if you change anything at all about the string, you get a completely different hash. For example, here's what happens when you make the first letter lowercase.

$ printf 'hello, dear readers of Blips! Hash prefix: 39479fbe.' | sha256sum
a55a6077531d73f8c8df3264ad5501bf757e99593efb9761ff46ce3bfed41045  -

It's a completely different hash! The statement is no longer true because the hash doesn't start with "39479fbe" anymore.

The only way to find strings that accurately contain part of their own hash is by randomly guessing. A lot. To find the "Hello, dear readers..." string, I had to check 2,140,879,506 (over 2 billion) hashes. That's a few more than I'm willing to check by hand, which is why I created fphf to do it for me.
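
If you're curious what the search actually involves, here's a minimal sketch of the brute-force loop in TypeScript (just the idea, not fphf's actual code):

// Embed a guess into the message, hash the whole thing, and check whether
// the hash really does start with the guess. Repeat until it does.
import { createHash } from "node:crypto";

const message = (prefix: string) =>
  `Hello, dear readers of Blips! Hash prefix: ${prefix}.`;

for (let i = 0; ; i++) {
  const guess = i.toString(16).padStart(8, "0"); // 8 hex characters
  const hash = createHash("sha256").update(message(guess)).digest("hex");
  if (hash.startsWith(guess)) {
    console.log(message(guess)); // a fixed-point string
    console.log(hash);
    break;
  }
}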

You can read the code here, or you can read the fairly comprehensive README that I wrote if you'd like to learn more.

Here's A Connecticut Yankee in King Arthur's Court fed through Susam Pal's Markov babbler, mvs.

curl https://www.gutenberg.org/cache/epub/86/pg86.txt -s | uv run https://raw.githubusercontent.com/susam/mvs/refs/heads/main/mvs

fence a hundred years wages will have risen to six times what they may. I could only shake my head, and she was acting like a candle, and was just fact, and not to ask that of a select few--peasants of that sort of thing which of them fell by the lord himself or stand vacated; and that together; that all revolutions that will not endure that work in the east, and wend thitherward, ye shall see them treated in ways unbecoming their rank. The troublesomest old sow of the fashion. Secondly, these missionaries would gradually, and at times we

I guess this is what CYiKAC would have been like if Mark Twain wrote the whole thing while sleepy. You might even say that he was acting like a candle.

My personal website's SCSS to CSS migration saga has come to an end.

I've blipped about this a few times over the past couple months, and I've put a great deal of thought into it. Basically, this all started because I realized that my federated sites (sites that import my main site's compiled stylesheets) were unable to access design tokens like colours and fonts. This is because I was using SCSS variables to store my design tokens, and SCSS variables are compiled away at build time.

I initially tried to fix this with PostCSS, but I ended that experiment when I realized that Hugo doesn't integrate well with PostCSS (I blipped about this on November 30th). After PostCSS failed, I decided to just switch from SCSS variables to CSS custom properties. This approach worked perfectly and solved the problem.

However, I decided that I wasn't finished on my conquest against SCSS, and I've spent the last 9 days trying to fully switch from SCSS to CSS. The caveat is that I insist on using inline bundled CSS imports, which SCSS supports but vanilla CSS doesn't. So I decided to emulate it using a custom Hugo partial that recursively calls itself.

This approach, though clever, brought its own slew of problems: it made the build process much more complex and fragile, it broke Hugo's live server updates, and it made my code less portable, because even though vanilla CSS is more portable than SCSS, my CSS now only worked with the custom Hugo partial. Soon after running into these problems, I also realized that switching away from SCSS didn't really provide any advantage to counterbalance them, other than the aesthetic satisfaction of using vanilla CSS.

So I closed the PR and went with the compromisey middle ground. I'm not switching back to SCSS variables, but I am going to stick with SCSS preprocessing. I think this is the most pragmatic approach.

This was a fun experiment, but it just didn't work out. If you'd like to check out the end result, you can view the css-switch branch here. I also deployed the css-switch branch to Vercel here, if you'd like to see the rendered version (note: this link will probably break at some point in the future because Vercel only gives you one public preview link at a time).

I tried out Vercel's v0 this evening. It's an "AI-powered development platform that turns ideas into production-ready, full-stack web apps". In other words, it's a tool that creates websites out of a natural language prompt. I thought I'd dislike it, but the demos that they listed (like this one) looked genuinely impressive. So I gave it a try and requested the following joke website:

the landing page for a company called Salty Recycling that sells salt harvested from salt shakers

After 53 seconds, it responded with this. It's honestly kind of disappointing. It's just a fairly generic React app. The design is tasteful, the colour palette is cohesive, and the layout is logical, but it's pretty bland. Also, the text overflows on small screens, and hovering over certain buttons makes the text unreadable due to contrast issues. My overall impression is "meh".

I can definitely see this being useful for making mockups or random little web apps like Simon Willison's svg-render thing, but beyond that it just doesn't seem that useful to me.

I just pushed this update to my personal website that phases out SCSS. I replaced all SCSS variables with CSS custom properties, I replaced all SCSS functions with color-mix(), and I replaced all usages of SCSS mixins with each mixin's contents.

I did this to prepare for a full conversion to vanilla CSS. My reasons for doing so are thus.

  1. I need my federated sites (sub-sites that import my main site's compiled stylesheets) to be able to access my design tokens like colours and fonts, and SCSS variables compile those tokens away
  2. Working within the limitations of vanilla CSS is more interesting in my opinion
  3. A major refactor sounded fun

The only SCSS features that my site currently uses are the SCSS nesting polyfill and the inline imports. In order to switch to CSS, I'll need to solve my dependency on these two features.

The SCSS nesting feature can be solved by ignoring it. Modern CSS natively supports nesting, although it's not Baseline Widely available yet. CSS nesting became Baseline Newly available in January of 2024, but it'll only reach Widely available 30 months later, in July 2026. Until then, you aren't technically supposed to use it. I'm going to use it anyways because it's not like my personal website is a critical piece of infrastructure. I think that it's acceptable if my site styling breaks when viewed on a browser from 2023.

Vis-a-vis the inline imports, I have a solution planned. Native CSS imports are terrible because they are runtime imports that create dependency trees and massively slow down page loads. Instead, I'll use a custom Hugo template. I already have a Hugo partial that takes a SCSS filename as input, transpiles it to CSS, minifies it, fingerprints it, and outputs a <link rel="stylesheet"> line. I can modify this pipeline to use a regex to match all @import statements and recursively call itself to fetch the content of the referenced CSS file and insert it in place of the at-import statement. This way I can replicate the basic functionality of SCSS's inline imports while still using vanilla CSS. This approach will be a bit brittle and it'll break if it encounters circular dependencies or relative paths. My solution is to just be careful to not code a circular dependency and to make all import paths relative to the base css directory. If it works it works.
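
To make the idea concrete, here's the same recursive inlining sketched in TypeScript rather than in Hugo's template language (the real thing is a Hugo partial; this is just the shape of it, and the directory name is made up):

// Inline every @import by splicing in the referenced file's contents,
// recursing so that nested imports get inlined too.
import { readFileSync } from "node:fs";
import { join } from "node:path";

const CSS_DIR = "assets/css"; // hypothetical base directory

function inline(file: string): string {
  const css = readFileSync(join(CSS_DIR, file), "utf8");
  return css.replace(/@import\s+["']([^"']+)["'];/g, (_match, path) => inline(path));
}

console.log(inline("main.css"));
// Same caveats as the partial: no cycle detection, and every import path
// is assumed to be relative to the base css directory.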

Anyways, that's what I've been up to for the past two days.

I just pushed this update to Blips that adds an RSS feed. I couldn't do this before because the blips used to be dynamically fetched at runtime, but I switched Blips to use server-side prerendering a couple weeks ago. So now it can have an RSS feed. And so now it does. Here's the link: https://ethmarks.github.io/blips/rss.xml.

This made its way to the front page of HN earlier today: Kraa, a free online markdown editor. I like it.

The shareability is especially cool: each leaf (Kraa's word for a note) gets a URL that anybody can edit (provided that the leaf owner enables anonymous editing) without creating an account. It's basically Google Docs but with a sleeker and nicer interface.

It also has a "real-real-time chat" widget that you can add to any leaf. There's no 'send' button, and all messages are visible in real time as they're being typed. I didn't think that I would like this, but it was actually pretty fun to talk to the other people in the HN chatroom demo and see people draft and revise messages in real time. As is to be expected from an uncensored anonymous chatroom, there was a non-zero amount of trolling and toxicity. The moderator is doing an incredible job immediately taking things down, though.

Delightfully, because Kraa is a very new service that launched a little over a week ago, not all of the xkcd Namespace Land Rush usernames have been taken. I've personally managed to snag canada and administrator. Some of the ones like user and nasa aren't available because of the six-character minimum. As of the time of writing, google, facebook, username, iphone, bitcoin, and tons of common first names are still available. Alas, ethan is below the character minimum.

I'm a teeny bit wary about the long-term financial stability of Kraa. Running a no-login CRDT editor can't be cheap, and I don't know how much capital they have. The devs said that they plan to add a paid tier in 2026 that includes a larger image storage quota, but until then it's completely free. It does have a nice .md export feature, though, so there's no lock-in.

Overall, it's a very cool app. I'm not switching from Obsidian for personal knowledge management, but Kraa very well might replace Apostrophe as my preferred editor for one-off Markdown notes.

Here are some use cases for Kraa off the top of my head:

  • meeting scratchpad
  • collaborative to-do list
  • Pastebin alternative
  • live-blogging platform
  • anonymous poll
  • Q&A
  • guestbook
  • collaborative art thing kind of like r/place

Here's a guestbook I set up on Kraa. Feel free to stop by and leave a message: https://kraa.io/blips-guestbook

I've spent the last hour trying to convert my personal website from SCSS to PostCSS, which is something I've been meaning to do since October.

I have changed my mind.

PostCSS is incredibly finicky, it's prone to errors, it breaks Hugo's live preview, and it increases build times from 131ms to over 5202ms. I've tried to fix these problems and failed. It's just not worth it.

Switching from SCSS variables to CSS custom properties is still something that needs to happen for the sake of my federated sites (sub-sites that import my main site's stylesheets), but PostCSS is definitely not the way that I'm going to do it. I'll either just use custom properties in SCSS or I'll migrate to vanilla CSS. The only things that make me hesitate about switching to vanilla CSS are the mixins and the inline imports. Vanilla CSS supports neither of these. Mixins are negotiable, but inline imports are not.

I'm honestly considering writing some black magic Hugo templates to automatically concatenate vanilla CSS files from the @import rules. It wouldn't be any less advanced than my current setup (I'm using SCSS import, not SCSS use) and probably wouldn't even be too difficult to program; just a regex, some recursively called sub-templates, and resources.FromString. Hmm. I'll look into this.

I don't know what it is, but uncancelled units of energy and power irrationally irritate me. Watts are a unit of power, and joules are a unit of energy. Watt-hours are a unit of energy, so they should be measured in joules.

Even worse, some people use watt-hours per hour (watts times hours divided by hours), which just equals watts. It's analogous to a mathematical formula that includes the step "multiply by 10" immediately followed by "divide by 10". Maddening.
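
Spelled out, the cancellation is just:

  1 W·h = (1 J/s) × (3600 s) = 3600 J
  1 W·h/h = 3600 J ÷ 3600 s = 1 J/s = 1 W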

Relevant xkcd: https://xkcd.com/3038/

Why is Docker such a pain to install on Linux? I was warned against using Docker Desktop, so I installed Docker Engine. But then I had to restart the daemon a bunch, clear Docker's state, and modify my user permissions. I wouldn't say it was frustrating, but it was far more friction than I expected from such a ubiquitous developer tool.

It's odd how package.json is almost universally seen as a Node.js thing even though it works perfectly fine as a general-purpose, language-agnostic project metadata file. It includes the project name, author, description, license, and repository URL, all in a standardized and structured format.
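
A made-up example with just the language-agnostic fields:

{
  "name": "example-project",
  "version": "1.0.0",
  "description": "A project that doesn't have to be JavaScript",
  "author": "Ethan",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://github.com/example/example-project"
  }
}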

It occurred to me earlier tonight that my personal website's GitHub repository still didn't have a README or a license, so I added both of those things and also did a bit of refactoring: https://github.com/ethmarks/ethmarks.github.io/pull/35. I also added a package.json because I plan on using PostCSS soon.

I spent most of this morning working on a research paper for my English class. I usually do Microsoft Office stuff on my Surface laptop that runs Windows, but I had a bunch of research tabs open on my Linux laptop and I decided to just use the Word web app instead of sending each and every tab to my Surface. What I forgot is that the Word web app is a horrible buggy mess that simply doesn't support some features, has frequent rendering issues, and uses different keybinds than the native app. They did an absolutely fantastic job with vscode.dev; why couldn't Microsoft put the same level of effort into their other apps? It's not even an exclusivity platform-locking thing, because they developed a MacOS port for Word. They clearly don't mind non-Windows users using Word; they just can't be bothered, for some reason, to fix their web port.

I've been made aware that ch.at, the LLM API provider I've been using for Thessa, keeps having uptime issues which cause Thessa to stop working. I just coded and pushed an update to Thessa that makes the code try to use LLM7 (another no-auth LLM endpoint) first, and if that fails it tries to use ch.at. Hopefully they won't both go offline at the same time.
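
The fallback logic itself is nothing fancy; it boils down to something like this sketch (generic provider functions, not Thessa's actual code or either service's real API):

type Provider = (prompt: string) => Promise<string>;

// Try each provider in order and return the first successful response.
async function askWithFallback(prompt: string, providers: Provider[]): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // provider is down or errored; try the next one
    }
  }
  throw lastError;
}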

Blips is at 100 commits exactly right now. Only 28 to go until a big round-number milestone.

I just spent the last two hours writing the README for Blips on one monitor and also doing physics homework on another. Whenever I got stuck on word choice in the README, I'd switch over to homework, and whenever I got stuck on how to approach a physics problem, I'd switch back to the README. It was surprisingly productive, except for the couple of times I spaced out and started rambling about gravitational potential energy in the SvelteKit section.

I was testing what happens if I trigger the webhook while a deployment job is already being executed. As expected, it cancels the in-progress job and focuses on the newer job.

What's odd is that it also sends an email to every single one of my email addresses alerting me that the job was cancelled. I wonder if I can disable that. I don't want to be alerted about intentional, expected behavior that doesn't require any action from me. There should be a way to "quiet fail" a workflow so that it gets cancelled without emailing me.

The webhook works! When I published the previous Blip, it automatically and instantly triggered a GitHub Action to rebuild the site. So with today's update to Blips, the page loads faster for you, dear reader, and it's not any more labour-intensive for me than it was before. Yay!

I managed to get the webhook to fire, but only after creating a PAT, and only via a manual REST request from my local machine. Now I just have to test if Sanity will fire the webhook automatically when I publish new Blips.
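
For context, one standard way to kick off a GitHub Actions workflow from a plain REST request is the repository_dispatch endpoint. A sketch of that kind of call (placeholders throughout; the event_type name is made up):

// POST a repository_dispatch event; the workflow listens for it via
// `on: repository_dispatch`. Requires a PAT with access to the repo.
const res = await fetch("https://api.github.com/repos/OWNER/REPO/dispatches", {
  method: "POST",
  headers: {
    Accept: "application/vnd.github+json",
    Authorization: `Bearer ${process.env.GITHUB_PAT}`,
  },
  body: JSON.stringify({ event_type: "new-blip" }),
});
console.log(res.status); // 204 means GitHub accepted the dispatch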

I just pushed an update to Blips that makes it prerender content at build time rather than fetching on the client side. Basically, it makes the page load faster but means that it'll take about 30 seconds for changes to show up. It also significantly complicates the deployment pipeline because now I have to manage webhooks. I'm testing the webhook now.

The one-electron universe is the hypothesis that all electrons and positrons are actually manifestations of a single entity moving backwards and forwards in time. It was proposed by theoretical physicist John Wheeler in a telephone call to Richard Feynman in the spring of 1940.

One-electron universe on Wikipedia

Snap just uninstalled Firefox, a root-level package, from my computer and reinstalled it as a Snap. Again. If you remember from when I blipped about this on Oct 28, Snap has done this before. Once again, I am outraged.

My favorite part of Linux is that it respects my agency. Linux doesn't have Windows's "remind me later" buttons, nor does it have MacOS's Gatekeeper/SIP. Linux lets you say "no" to things and it lets you run whatever code you like. In my experience, the only exception to this is Snap. Nothing other than Snap has uninstalled things without my permission. This is a violation of user trust and user agency and is unacceptable.

Thankfully, rm -rf ~/snap still works (you also have to uninstall snapd and whatnot but you get the idea). I didn't want to have to fully uninstall Snap, but I am out of patience. If I wanted an operating system that forces software on me, I'd buy a Mac.

That's not a jab at Apple; Macs are fairly inexpensive, have powerful hardware, have pretty good software, and have almost universal support. The reason that I'm on Linux is that I don't want software forced on me.

I gave Snap three chances to respect my explicit uninstallation of Snap-Firefox, and it gleefully burned through them. I'm slightly concerned that by uninstalling Snap I might have broken something critical to my OS, but if I did then that just gives me an excuse to switch away from an Ubuntu-based distro.

The hardcover of There Is No Antimemetics Division by Sam Hughes (qntm) released today.

I read TINAD V1 about a year ago, but the new hardcover version is a major rewrite that includes new content and stands as a separate story from the SCP universe. TINAD V1 was one of the best sci-fi stories I've ever read. It's just phenomenally clever and very well written.

I've also read and enjoyed one of Hughes's other books, Valuable Humans in Transit, and I plan on reading Ra at some point. Cerebral sci-fi is a genre that Hughes is really good at.

Also, sidenote, TINAD is the source of my old username, "ColourlessSpearmint". One of the chapters in V1 is titled "CASE COLOURLESS GREEN". This is a subtle and very clever double reference to both Noam Chomsky's "Colorless green ideas sleep furiously" sentence and to Charles Stross's "CASE NIGHTMARE GREEN" scenario. I shamelessly stole Hughes's chapter title because it was witty and sounded cool, and I replaced "green" with "spearmint" to add a bit of uniqueness.

Anyways, I have multiple thousands of pages of sci-fi in my reading queue right now (including the Foundation series, the Three Body Problem series, and the Mars series), so I don't plan on buying TINAD immediately, but I have high hopes for it when I eventually get around to reading it.

I wrote a little Python script this afternoon to extract HN items into a human-readable format. Here's the link: https://gist.github.com/ethmarks/066e7df25f50dd3a53259cc5a72e34ba

It has inline script metadata (PEP 723), so you can just run it with uv. I've aliased it to hn on my computer.

Here's an example usage:

uv run https://gist.githubusercontent.com/ethmarks/066e7df25f50dd3a53259cc5a72e34ba/raw/extract_hn.py \
"https://news.ycombinator.com/item?id=44849129"

I've just stumbled upon ch.at. Basically, it's a zero-authentication AI service. It's very bare-bones; no images, no model selection, not even conversations. Just a text query and a text response. But it's free and publicly available. I found the ch.at HN discussion and the developer stated that "It has not been expensive to operate so far. If it ever changes we can think about rate limiting it". What a generous service! This is super useful for little automation scripts.

I bet I could use it as the AI provider for Thessa. Thessa is a static site, so the IP rate limiting isn't a problem (each user uses their own IP rather than everything being routed through my server). And the low-traffic and not-for-profit nature of Thessa means that it won't be taking advantage of their generosity, so no ethical concerns. It's a much better solution than my current "Bring your own Gemini API key or else you can't use it lol" approach. I'll look into this later today.

I've just discovered the Charm company. They're the ones who developed several CLI and TUI utilities that I interact with regularly. They make great software, but their main tactic seems to be attention-getting design and an energetic tone. I think they're just trying to be memorable (there are lots of other tools that do basically the same things), and for what it's worth they're doing a really good job.

Their website, demo videos, and even the software itself are all very colourful and contain lots of animations and clever designs. For example, look at this promotional video for their AI agent, Crush: https://charm.land/crush-promo.fd990f87ae513e1e.webm. They're listing the words that they've chosen to define Crush: 'Smarter', 'Faster', and 'Glamour'. Between 'Faster' and 'Glamour', about 11 seconds into the video, they smoothly draw an ampersand (&) in the negative space of a gradient, filling the positive space with a glimpse of the agent editing some code. They could have just written "and" or they could have skipped the conjunction entirely, but instead they dedicated an entire half-second to an ampersand because ampersands are pretty. The attention to detail is impressive.

Every project's README adopts the same playful witty tone. When listing the package manager installation instructions, they write "Arch Linux (btw): yay -S crush-bin". For the uninitiated, this is a reference to the "I use arch btw" meme. Also, in big yellow text, they state "Warning - Productivity may increase when using Crush". They could have easily overdone these subtle jokes, but I think that they strike a good balance. The bits of personality intermixed with the fairly well-written documentation are pretty charming. They should name their company after that or something.

Kind of crazy that the internet has been around long enough to have witnessed major geopolitical shifts. For example, when ccTLDs first started being registered to countries in 1985, East Germany received .dd. After the reunification of Germany in 1990, it switched over to .de, leaving .dd unused. Likewise, .cs was originally used by Czechoslovakia until it split into the Czech Republic (.cz) and Slovakia (.sk). Cold War-era countries having their own ccTLDs kind of feels like Napoleon having an email address.

I love this AT&T advert from 1993: https://www.youtube.com/watch?v=RvZ-667CEdo

It was so close about so much. It predicted things that were complete sci-fi in the 90s but are utterly unremarkable in the 2020s: on-demand movie streaming, remote classrooms, smartwatches.

It all has a mild retrofuturist flavor that I find very intriguing. It predicted video calls but imagined that they would be held in phone booths. It predicted digital payments but imagined that your car would have a credit card reader. Though it was remarkably prescient about the concepts of future technology, its envisioned execution was pretty anachronistic.

We should step up our near-future prediction game. Most of the predictions for the 2050s that I've seen are some variant of "everything is self-driving, screens are holographic, virtual reality is commonplace, and social media companies are even more powerful". We should make more wildly inventive but plausible predictions for the near future.

When I was choosing a Linux distro, I was often warned away from Ubuntu because of Snap. But I ignored these warnings because I thought people were just being zealous and overly critical of Snap. I have since realized that Snap deserves every bit of critique that it gets.

First of all, Snap packages are just worse. They're slower, often have bugs like weird window decorations, and are just generally less pleasant to use.

Because of these reasons and more, I decided a couple of weeks ago to stop using the Snap version of Firefox. So I uninstalled Snap Firefox and reinstalled it with APT using Mozilla's official guide for installing Firefox on Linux. I noticed a significant decrease in startup time, and those weird bugs were fixed. All was well.

Until earlier today when Snap decided to uninstall Firefox without my permission and reinstall it as a Snap. I wasn't doing anything Snap-related, wasn't installing anything, and didn't even have Firefox open. Apparently other people have had this same issue.

Canonical, what is wrong with you? Sneakily redirecting APT commands to install Snaps instead was bad enough, but uninstalling root-level packages in the background just to reinstall them with your special in-house package manager is unacceptable.

I'm sticking with Ubuntu for now out of inertia and because I want to avoid a full reinstall if possible, but I figure it's only a matter of time before I accidentally mess up my SSD or uninstall something that I shouldn't have, rendering my OS unusable. When that happens and I have to reinstall a new OS, I will not be choosing an Ubuntu-based distro.

Alright. I'll concede that CodinGame is pretty fun.

I just spent an hour on the mars lander puzzle where you have to gently guide a lander down by writing code to control the engine throttle.

I got a working solution within a few minutes by just hacking together a hysteresis controller that set throttle to max when it was going too fast and then throttled all the way down when it went too slow. My hysteresis solution was good enough to complete the puzzle, but it was fuel-inefficient and kind of boring.

My inner rocket science nerd insisted that I find a way to implement a suicide burn where the rocket free-falls for as long as possible then applies a max-throttle braking burn at the last possible moment. I built a kinematics solver to perform on-the-fly trajectory calculations, I fixed the errors in my code (CodinGame's implementation of Ruby is oddly strict about type coercion), I accounted for the thruster ramp-up stage by doing more kinematics, I accounted for the imprecision in the simulation by applying a dynamic half-throttle landing burn, and I eventually got the landing velocity down to a gentle 12 m/s. Ideally it'd be 0 m/s, but the allowable limit is 40 m/s and my hysteresis solution was 14 m/s with manual fine-tuning, so I'll take it.
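
For the curious, the textbook version of the suicide-burn trigger (ignoring horizontal motion, thruster ramp-up, and the discrete simulation steps) is:

  stopping distance at full throttle: d = v² / (2 × (a_max - g))
  start the max-throttle burn once remaining altitude ≤ d

where v is the current descent speed, a_max is the lander's maximum thrust acceleration, and g is Mars gravity.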

I learned very little about Ruby, kinematics, or rocket dynamics, but it was a fun exercise nonetheless.

I should play Kerbal Space Program again.

I signed up for CodinGame earlier today. It seems like the lowest-friction source of random little interactive programming puzzles. I don't really like how gamified it is, but I guess it's fine; I've seen worse. I'm doing most of the puzzles in Ruby because I don't have as much Ruby experience as I'd like.

I'm experimenting with Vercel right now. I deployed the SvelteKit demo you get from npx sv create. It's live now on https://vercel-test-liart-tau.vercel.app/.

I must say, Vercel has an interesting hierarchy. I'm used to GitHub's hierarchy of 'owner > repo' where owners can be organizations or individual users, but Vercel uses GitLab's hierarchy of 'team > project' where you're encouraged to create single-user teams for personal projects.

Also, deployment domains are based entirely on the project name and not the team name or user name. With GitHub Pages, domains are "{username}.github.io/{projectname}", but on Vercel they're "{projectname}-{uniqueness padding}.vercel.app/".

The total word count of the W3C specification catalogue is 114 million words at the time of writing. If you added the combined word counts of the C11, C++17, UEFI, USB 3.2, and POSIX specifications, all 8,754 published RFCs, and the combined word counts of everything on Wikipedia’s list of longest novels, you would be 12 million words short of the W3C specifications.

I conclude that it is impossible to build a new web browser. The complexity of the web is obscene. The creation of a new web browser would be comparable in effort to the Apollo program or the Manhattan project.

It is impossible to:

This quote works well as a melodramatic soliloquy to derisively express the opinion that the person you're talking to said something stupid.

Much I marvelled this ungainly fowl to hear discourse so plainly,
Though its answer little meaning—little relevancy bore;

Edgar Allan Poe, "The Raven"

I'm considering another refactor of my site's CSS.

I've been using SCSS for the past month and a half, and it works perfectly acceptably. But now that I'm starting to create "federated" sites (e.g. Blips or Thessa) that import my site's compiled stylesheets despite not being part of the same repo or build process, I'm realizing the flaws with SCSS variables.

With SCSS, if I want to use a specific color or whatever on a federated site, I'm kind of just out of luck. The SCSS variables are compiled away and aren't present in the final stylesheet. I can include a specific one-off custom property on the main site and "pass" the value over to the federated site, but this is a dumb manual solution. It would be better if all of my variables were available to every site that uses my stylesheet. CSS custom properties, which is what I was using before I switched to SCSS, accomplish exactly this. So I'm probably going to switch back to custom props instead of SCSS variables.

I would just switch to vanilla CSS if not for inline imports and mixins, which I really like. SCSS supports both and they work quite well; vanilla CSS supports neither.

Don't hold me to this and there's a roughly 40% chance I'll change my mind within the hour, but I'm considering PostCSS. It seems interesting, plus it seems to be widely used in enterprise software, plus it's extensible so I get to decide what features I do and don't use.

And it also reminds me of SvelteKit's philosophy. Both SvelteKit and PostCSS give you the experience and advantages of writing vanilla web code while applying all sorts of clever optimizations, features, and enhancements to the output, so you get the advantages of modern tooling too. There aren't many tools that output the same language they take as input and do more than just minify it, and I really appreciate the ones that do.

I'm kind of in a lull vis-a-vis schoolwork, so I do have time for a major refactor like this. Whether I spend that time on a refactor or on playing Dyson Sphere Program is yet to be seen.

I just finished watching Annihilation. What a deeply disturbing but deeply interesting film. I'm sure that it's already been analyzed to bits, so rather than talking about the obvious cancer allegory, I'd like to note an interesting detail I noticed. If you haven't watched it, beware of moderate spoilers.

On the beach around the lighthouse, there are these big glass trees that sprout up out of the sand. They're very pretty and creative set dressing, but they aren't really discussed in the film.

Some people have suggested that the trees are a silicon-based lifeform created by the Shimmer. This is completely plausible, but I'd point out that the Shimmer mostly operates by combining and modifying life, not by creating something completely new. Silicon-based life does not exist on Earth, so the Shimmer would have had to create the silicon life and then splice it with tree genes. This definitely isn't outside of the capabilities of the Shimmer and would be good foreshadowing for the lighthouse, but it is inconsistent with the rest of the Shimmer's creations, which are always modified from existing life instead of created from scratch.

I propose an alternative theory: the glass trees are fulgurite formations. Real-life fulgurite formations are small crusty clumps of dirty glass formed by a lightning strike fusing sand into glass. The formations in the film look nothing like fulgurite formations; they're too clean, too large, and too crisp. Also, they're shaped like trees. I suggest that the glass trees are the result of the Shimmer controlling lightning.

What this implies is that the Shimmer is controlling the ambient weather, causing lightning to strike in such a way that it forms tree-like structures. It's splicing tree genes into the weather. It's explicitly stated in the film that the Shimmer affects light and radio waves. Why not atmospheric processes too?

Just like how the Shimmer miraculously made the flowers form human-shaped vines by splicing human genes into the flowers, it made the lightning form tree-shaped fulgurite structures by splicing tree genes into the clouds and air particles I guess.

And at the very least, this theory is still more grounded than what happens in the lighthouse.

A one-way mirror, also called a two-way mirror, is a reciprocal mirror that appears...

One-way mirror on Wikipedia

Zed finally has an official public Windows build: https://zed.dev/windows. Before now I was using Scoop to install and update it, but now I don't have to do that anymore. Yay!

I'm doing some research on Age of Discovery maps for an upcoming project. I just learned about Terra Australis: https://en.wikipedia.org/wiki/Terra_Australis. It's one of the single most bizarre things I've ever learned.

Basically, around the 15th century, cartographers noticed that "hey, there's all this land in the Northern Hemisphere, but not as much land in the Southern Hemisphere." So they decided, "well, I guess we'll just invent a continent. We'll call it Terra Australis (Latin for 'Southern Land'). Even though nobody has ever surveyed this continent, seen it, heard about it, or acquired any empirical data whatsoever to suggest that it exists, we're going to put it on maps anyways."

They did this for three hundred years until people started trying to explore it and eventually realized that this imaginary continent was, in fact, imaginary.

If there's a moral to this story, I have no idea what it is.

Blips update: I removed the auto-update logic because it added a lot of complexity, was burning through my CMS API quota, and wasn't really necessary because I only write blips a few times a day at most. So from now on if you want to see new blips you have to reload the page. Sorry!

I just spent like half an hour trying to fix the getRelativeDate() function in Blips. As it turns out, datetime processing is really really hard. Relevant xkcd: https://xkcd.com/2867/.

I wrote nearly 200 lines of datetime processing logic, wrote an entire test suite to test things like threshold edge cases, time zones, and daylight savings time, and continued iterating until I got frustrated and gave up. So instead I just used the date-fns library.

The problem with date-fns is that it likes to use its own output format, which is too verbose for my taste. So I piped the date-fns output into a series of 16 regex replace() operations that remove all the vague qualifiers like "about", "over", and "almost" and swap the full time units like "minute" with their abbreviated forms like "m".
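
The shape of it ends up looking something like this (a trimmed-down sketch, not the full 16 replacements):

import { formatDistanceToNow } from "date-fns";

// Format a date relative to now, then strip the qualifiers and shorten the units.
function getRelativeDate(date: Date): string {
  return formatDistanceToNow(date, { addSuffix: true })
    .replace(/\b(about|over|almost)\s+/g, "") // drop the vague qualifiers
    .replace(/\s*minutes?\b/g, "m")           // "5 minutes ago" -> "5m ago"
    .replace(/\s*hours?\b/g, "h")
    .replace(/\s*days?\b/g, "d");
}

console.log(getRelativeDate(new Date(Date.now() - 5 * 60 * 1000))); // "5m ago"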

This would be a lot simpler if we just abolished the day/night cycle.

There's a new Zed release today: https://zed.dev/releases/stable/0.207.3

To me, the only notable changes are the improved font rendering on Linux, the improved Markdown preview, and the bugfix for when the inline assistant wraps output in <rewrite-this> tags.

I don't use the inline assistant much, but when I do it's usually for really trivial things like renaming a variable. I would feel bad for bothering Anthropic's servers with such a menial task, so I usually use a tiny local model via Ollama. Because the model is tiny, it's also very stupid, which means that it often falls victim to really stupid mistakes, the most common of which was the <rewrite-this> wrapping. They've fixed that now, which is nice.

Blips is now officially published! I added a link to the site header, which means that it's visible on every page of my personal website.

Showing the latest 50 blips. Older ones are off-radar...