Two notable articles on AI and career:
- An AI agent coding skeptic tries AI agent coding, in excessive detail, and Simon Willison’s comment on it
- Yes, and…, the answer to the question “Given AI, should I still consider becoming a computer programmer?”, by Carson Gross, author of htmx
Two articles on sandboxing for AI agents:
- A field guide to sandboxes for AI, and Simon Willison’s comment on it
- The surprising attention on sprites, exe.dev, and shellbox
Individuals I’m following, who actively write and contribute in the AI field:
- Simon Willison. A must-read in this field now. He’s been topping Hacker News in 2023–2025¹. I can’t believe how he manages to cover nearly every aspect of the frontier. If you could only follow one source, make it him. He’s also the co-creator of the famous Django web framework.
- Armin Ronacher. He’s the creator of a lot of Python libraries, like Flask and Click. Now he’s writing a lot about LLMs.
- Mario Zechner. I discovered him through his tiny but curated coding agent Pi, which has been turning heads recently². I haven’t taken a look yet, but I will.
- Mitchell Hashimoto. Ghostty’s creator. He’s writing a lot about his AI adoption in real development.
1. Simon Willison’s post: The most popular blogs of Hacker News in 2025
2. Armin wrote about it: Pi: The Minimal Agent Within OpenClaw
The Value of Things. Another article about the AI trend (see the previous one). This time from Bob Nystrom, one of my favorite writers.
Two articles on work habits:
I just added a world map to my landing page, showing where I’ve been. Have a look! I’m looking forward to exploring more of this world!
Screenshot of the world map
Here’s what I missed in the AI field this week — I was on holiday in Tokyo.
Two new models dropped within about 15 minutes of each other: Claude Opus 4.6 and GPT-5.3-Codex. Amp immediately adopted Opus 4.6 for its smart mode, but GPT-5.3-Codex is only available in their Codex app, not yet via the API. I believe Amp will adopt it for its deep mode once it’s generally available.
Amp is sunsetting the editor extension next month. It hasn’t been officially announced yet, but the team mentioned it in their latest Raising An Agent podcast episode. I use Amp exclusively through the editor extension, so unfortunately I’ll have to switch to the TUI version and get used to it.
Ghostty’s author Mitchell Hashimoto has been busy lately:
- Ghostty’s updated AI usage policy for contributions. More and more open source projects are drowning in AI-generated issues and PRs submitted without human review — the slop. He proposed a new policy for dealing with this trend. It’s not against AI, but makes every AI-generated contribution accountable to a human.
- Vouch, a community trust management system. A tool that puts the policy above into practice: to mitigate the slop burden, open source projects can build a network to identify trustworthy contributors.
- My AI Adoption Journey. Mitchell’s reflections on his AI adoption journey. Most of it resonates with me — and probably with every thoughtful developer.
Beautiful Mermaid (via). Mermaid is the de facto tool for describing diagrams in plain text and embedding them in Markdown. GitHub supports it, for example¹. But I’ve never liked the default theme — that’s why I still haven’t adopted it.
Today I found out the Craft team felt the same way, and they released a new rendering engine for Mermaid diagrams. It outputs both SVG and ASCII art, and the default theme looks great.
I haven’t looked into it deeply yet, but it looks promising at a glance. I hope it becomes a catalyst for better-looking diagrams — either by maturing into a drop-in replacement that the ecosystem adopts, or by pushing the Mermaid team to ship a better default theme.
Beautiful Mermaid
Mermaid's default theme
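If you haven’t seen Mermaid before, the source is just plain text. As an illustrative sketch (the node labels are mine, not from either project), here is a minimal flowchart describing the pipeline the Craft team’s engine adds:

```mermaid
flowchart LR
    src[Plain-text source] --> renderer[Rendering engine]
    renderer --> svg[SVG output]
    renderer --> ascii[ASCII art output]
```

Standard Mermaid renders the same source to SVG only; the dual SVG/ASCII output is what Beautiful Mermaid brings.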
1. GitHub blog post: Include diagrams in your Markdown files with Mermaid; GitHub documentation: Creating diagrams
Tw93, and his Mole. I’d heard of this macOS cleaner app called Mole before, but today I finally tried it out — a neat CLI utility that digs through and cleans up your macOS.
I checked out its author, Tw93. He’s also a Chinese programmer, and keeps a blog that caught my eye immediately. I’m glad to see programmers like him sharing tech thoughts and personal life, reminding me I’m not alone. He’s doing a great job — another role model to look up to. From his GitHub profile, I believe Simon Willison influenced him too.
Follow him on X: @HiTw93.
I just added a new category of notes called “collection notes”.
Collection notes gather resources on a single topic — books, articles, videos — in one place. Think of them as better-organized bookmark folders. Here’s the commit: zlliang/zlliang@d4c1f5a.
I’ve already written one on my Zilong’s Days (Chinese) website: Shinobu Yoshii’s works.
Simon Willison on Technical Blogging. Simon was the direct catalyst for me starting my own blog (see my post), so it’s great to see him share more about his blogging experience.
Zig’s new juicy main is here. I haven’t been following Zig’s new features closely lately, but a quick check yesterday revealed that the juicy main has landed!
Andrew Kelley proposed it directly (see #24510) to enhance the main function by providing useful variables — memory allocators, an I/O instance, environment variables, and command line arguments — as parameters. It reduces the boilerplate we previously needed to set these up.
Now there are three allowed argument signatures for the main function:
```zig
pub fn main() !void
pub fn main(init: std.process.Init.Minimal) !void
pub fn main(init: std.process.Init) !void
```
The definition of std.process.Init is as follows:
```zig
pub const Init = struct {
    /// `Init` is a superset of `Minimal`; the latter is included here.
    minimal: Minimal,
    /// Permanent storage for the entire process, cleaned automatically on
    /// exit. Not threadsafe.
    arena: *std.heap.ArenaAllocator,
    /// A default-selected general purpose allocator for temporary heap
    /// allocations. Debug mode will set up leak checking if possible.
    /// Threadsafe.
    gpa: std.mem.Allocator,
    /// An appropriate default Io implementation based on the target
    /// configuration. Debug mode will set up leak checking if possible.
    io: std.Io,
    /// Environment variables, initialized with `gpa`. Not threadsafe.
    environ_map: *std.process.Environ.Map,
    /// Named files that have been provided by the parent process. This is
    /// mainly useful on WASI, but can be used on other systems to mimic the
    /// behavior with respect to stdio.
    preopens: std.process.Preopens,

    /// Alternative to `Init` as the first parameter of the main function.
    pub const Minimal = struct {
        /// Environment variables.
        environ: std.process.Environ,
        /// Command line arguments.
        args: std.process.Args,
    };
};
```
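As a quick, untested sketch of what this buys you — the field names come from the struct definition above, everything else is illustrative — an entry point opting into the full `Init` can just pick services off the parameter:

```zig
const std = @import("std");

// Sketch: the new-style entry point receives its runtime services as a
// parameter instead of setting them up by hand.
pub fn main(init: std.process.Init) !void {
    // General purpose allocator for temporary heap allocations (threadsafe).
    const gpa = init.gpa;

    // Command line arguments and environment arrive via the embedded
    // `Minimal` struct — no manual argsAlloc-style boilerplate.
    const args = init.minimal.args;

    _ = gpa;
    _ = args;
}
```

The appeal is that the boilerplate (allocator selection, leak checking in debug builds, args/environ setup) moves into the standard library’s startup code.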
The changeset is in #30644 and there’s a follow-up issue #30677 for a minimal CLI parsing mechanism.
Astro is joining Cloudflare. I always had a feeling Astro would be acquired — and now it’s happening. Astro has been my favorite framework for building static websites, and it’s my choice for my personal sites now. It reminds me of when I first discovered static site generators like Jekyll: just basic build-time templating and composition, making it very easy to create a blog. Features like content collections and islands are genuinely innovative. I hope it stays productive and keeps its simplicity.
Ralph Wiggum as a “software engineer”. The AI field is evolving so fast that, like math class in high school, if you miss a week you’re suddenly lost. For me recently, that’s Ralph, a new pattern for coding agents that pushes them to a higher level of automation.
Its name comes from a character called Ralph Wiggum in the show The Simpsons, who somehow captures the spirit of this technique.
To get familiar with Ralph, I skimmed (and watched) these materials, in addition to the original post by Geoffrey Huntley:
- Matt Pocock’s walkthroughs: Ship working code while you sleep with the Ralph Wiggum technique, and 11 Tips For AI Coding With Ralph Wiggum
- Greg Isenberg’s video: “Ralph Wiggum” AI Agent will 10x Claude Code/Amp
- Ryan Carson’s article on X: Step-by-step guide to get Ralph working and shipping code
In short, Ralph is a technique that runs your coding agent sessions in a loop. It pushes the typical coding agent workflow — you give it a task, watch it work, then give it a new task based on its output — forward by making the agent itself assess the outputs and decide what’s next. By 2025, the field had settled on the definition that an “agent” is simply an AI program running tools in a loop to achieve a goal¹. Ralph extends that idea naively: it’s a bash script running agent sessions in a loop to achieve a goal.
To run agents the Ralph way, you basically need the following harness:
- A bash script that simply runs your coding agent in a for loop
- A PRD file that lists and tracks the tasks, commonly organized as `prd.json`
- A progress note that the agent appends to when completing tasks, providing relevant context to the next agent session, commonly organized as `progress.txt`
These elements reveal what’s truly valuable about the Ralph idea: it formalizes a context-engineering approach for tackling large-scale development requirements. And that’s why Ralph differs from just using a single agent session for all tasks. Every time a session completes a task, it checks the tasks in prd.json, appends notes to progress.txt, and usually makes a git commit. Then a new agent session starts with a cleared context window, so the files the last session updated serve as the only memory of the Ralph loop.
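To show how small the harness really is, here’s a minimal sketch of the loop — not any one project’s official script. `AGENT_CMD` stands in for your real coding agent CLI (amp, claude, and so on); it defaults to `echo` here just so the sketch runs anywhere, and the prompt text is illustrative:

```shell
#!/usr/bin/env bash
# Minimal Ralph loop (a sketch). Swap AGENT_CMD for your agent's CLI.
AGENT_CMD="${AGENT_CMD:-echo}"

ralph_loop() {
  local iterations="$1"
  local prompt='Read prd.json, pick the next unfinished task, implement it,
append a note to progress.txt, then commit.'
  local i
  for i in $(seq 1 "$iterations"); do
    echo "=== Ralph iteration $i ==="
    # Each call is a brand-new agent session: its only memory is what
    # earlier iterations wrote into prd.json and progress.txt.
    "$AGENT_CMD" "$prompt" || break
  done
}

ralph_loop 3
```

Real setups add guardrails (stop when prd.json has no open tasks, cap iterations, fail fast on broken builds), but the core is genuinely just this loop.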
Rough notes here — if you’re interested in the details, check the materials above. It’s a genuinely new idea in the field, and the community will keep exploring it to see whether it truly stands out.
1. Simon Willison’s well-known article: I think “agent” may finally have a widely enough agreed upon definition to be useful jargon now
Splitting My Websites and Finalizing My Writing Framework
Last weekend, I reorganized my personal websites. Here’s the new structure:
| Website | URL | Description |
|---|---|---|
| Personal landing page | https://zlliang.me | Brief introduction and navigation |
| Zilong’s Tech Notes | https://tech.zlliang.me | Tech learning and research (English) |
| Zilong’s Days | https://days.zlliang.me | Daily life and reflections (Chinese) |
In this post, I explain my motivation for splitting the sites. After this change, I’m ready to mark version 1.0 of my writing framework. The permalinks will be stable, and I’m excited to share more technical and personal topics in these spaces.
Today I spent a day trying to add i18n support to the website. I brainstormed ideas and documented them in a GitHub issue. I also tried to design and implement a translation key system and a new routing system, and wrote a lot of code.
In the end, I realized it makes both the site and my writing workflow more complicated than I’d like. Direct i18n support doesn’t feel like the right move right now — it adds friction and mental overhead, and I want to be able to just start writing when an idea comes up.
Since the website’s structure is entirely under my control, I want to design a content organization that genuinely fits my own writing habits while still being open and readable to different audiences. I don’t want to add structural complexity to the site just to satisfy a sense of “everything must be unified.”
So I’m going to park this issue for now. The site will stay focused on technical writing and public English content. Anything that doesn’t fit yet will live in my private Notion workspace, and I’ll revisit it later when it makes sense.
Paul Graham’s post on X about taste. Another interesting post from Paul Graham¹ — what struck me is how his posts spark real discussion. All the comments are worth reading. I’m following him now.
Oddly enough, I first learned about Paul Graham through his essays, and only later realized he co-founded Y Combinator and is such a central figure in Silicon Valley.
1. My previous note: Paul Graham’s post on X about writing