Notes

21 in total
Jan 5, 2026

Today I spent the day trying to add i18n support to the website. I brainstormed ideas and documented them in a GitHub issue, then tried to design and implement a translation key system and a new routing system, writing a lot of code along the way.

In the end, I realized it makes both the site and my writing workflow more complicated than I’d like. Direct i18n support doesn’t feel like the right move right now — it adds friction and mental overhead, and I want to be able to just start writing when an idea comes up.

Since the website’s structure is entirely under my control, I want to design a content organization that genuinely fits my own writing habits while still being open and readable to different audiences. I don’t want to add structural complexity to the site just to satisfy a sense of “everything must be unified.”

So I’m going to park this issue for now. The site will stay focused on technical writing and public English content. Anything that doesn’t fit yet will live in my private Notion workspace, and I’ll revisit it later when it makes sense.

#TECH21 Jan 5, 2026
Jan 4, 2026

Paul Graham’s post on X about taste. Another interesting post from Paul Graham1 — what struck me is how his posts spark real discussion. All the comments are worth reading. I’m following him now.

Oddly enough, I first learned about Paul Graham through his essays, and only later realized he co-founded Y Combinator and is such a central figure in Silicon Valley.

  1. My previous note: Paul Graham’s post on X about writing

#TECH20 Jan 4, 2026

I just updated the license of this website. Now it’s dual-licensed: code under MIT, content under CC BY 4.0. Previously I used CC BY-NC-SA 4.0 for content, but decided to go more open — fewer restrictions, more sharing.

Here’s the commit: zlliang/zlliang@6083f34.

#TECH19 Jan 4, 2026
Jan 3, 2026

I just noticed a gap between what I’m thinking and what I write. Ideally I’d write down everything, but sometimes a voice in my head whispers, “This thought isn’t worth it.” No. Go write it.

#TECH18 Jan 3, 2026

I now use ChatGPT and Amp in a very simple way: I just create new threads and leave them as-is.

Previously, for ChatGPT, I created several projects, and when I wanted to talk to it, I’d find and continue a relevant existing thread or create a new one in a project, reorganizing them periodically. It turns out that only looked neat and didn’t actually help. Now I just start a new chat whenever I think I need to. ChatGPT remembers context automatically, which is sufficient.

Similarly with Amp, I used to organize my threads very carefully. After the labels feature shipped1, I started labeling every thread manually once I finished it. I finally realized this practice doesn’t help — for now. So I deleted all the labels. And when should I make a thread public? When I find I need to.

When you start using a tool, use it with the least friction and in the most intuitive way. Any feature that forces redundant manual work isn’t worth the hassle. Only use a feature if you find you need to.

  1. Amp news: Thread Labels

#TECH17 Jan 3, 2026

Happy New Year! Here’s a quick recap of my New Year’s break:

I finished watching The King of Internet Writing, a video podcast by David Perell about what we can learn from Paul Graham’s writing. I’m ready to write more, and better, in 2026.

On New Year’s Eve, I was traveling with my partner in Chongqing. We ate spicy hotpot and walked through the hilly streets!

Chongqing's cityscape on New Year's Eve

I’m planning to add new features to this website. I created two GitHub issues, following Simon Willison’s approach to building features1:

  1. Simon Willison’s blog post: How I build a feature

#TECH16 Jan 3, 2026
Dec 29, 2025

Andrew Kelley's growth curve

I was just reminded of the growth curve Andrew Kelley showed in his talk A Practical Guide to Applying Data Oriented Design. I watched it a few days ago and realized I’m now at a plateau of my own. Back in university, I built a solid foundation in frontend web development and landed a job in the field. Now I’ve hit another bottleneck and am eager to jump to the next stage. Andrew found his trigger in a book; I’m still looking for mine. But two strategies are already on my mind: writing and starting my own projects. I bet that producing and creating will push me to evolve.

#TECH15 Dec 29, 2025
Dec 28, 2025

Paul Graham’s post on X about writing. I started writing recently (as you can probably tell), so I’ve been reading a lot about it. What’s interesting are the comments under this post. People are sharing their own thoughts on writing, and many are surprisingly inspiring. Reading them makes me feel less alone in my writing journey.

#TECH14 Dec 28, 2025

I just added a new category of notes called “quote notes”. Quote notes share quotes from books, articles, and other sources, sometimes with my own commentary.

Here’s the commit: zlliang/zlliang@3419c89. I also updated the relevant descriptions in Starting a Tech Blog at the End of 2025.

#TECH13 Dec 28, 2025

I don’t know what still newer marvels will make writing twice as easy in the next 30 years. But I do know they won’t make writing twice as good. That will still require plain old hard thinking.

— William Zinsser, On Writing Well

Zinsser wrote this in 2006. Nearly 20 years later, LLMs have made producing text easier than ever — yet concerns about AI-generated junk are growing too. Good writing remains rare and precious. It still needs to be written and rewritten, again and again, by humans.

#TECH12 Dec 28, 2025
Dec 27, 2025

How uv got so fast (via). I haven’t followed the Python ecosystem for maybe five years. But I know uv has taken off. I have it on my Mac, and it’s my go-to when I occasionally want to play with Python. It feels like pnpm or Cargo — fast and modern.

I assumed Rust was the main reason uv is so fast. Turns out, that’s actually the least important factor. Years of PEP standards made uv possible in the first place. Intentionally limited compatibility with pip, plus smart language-agnostic optimizations, did most of the heavy lifting. It’s the design choices, not the language choice, that really matter.

#TECH11 Dec 27, 2025
Dec 26, 2025

I just added pagination to note pages like /notes and /notes/categories/link, as the number of notes grows. Each page now shows up to 20 notes, and a tiny pagination indicator lets you navigate between pages without scrolling endlessly. I used Astro’s built-in pagination feature. Here’s the commit: zlliang/zlliang@0b22dda.
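For reference, here is a minimal sketch of how Astro’s paginate() helper is typically wired up. This isn’t the site’s actual code: the file path src/pages/notes/[...page].astro and the “notes” collection name are assumptions for illustration.

```astro
---
// Hypothetical route file: src/pages/notes/[...page].astro
// With a rest parameter, paginate() emits /notes, /notes/2, /notes/3, ...
import { getCollection } from "astro:content";

export async function getStaticPaths({ paginate }) {
  const notes = await getCollection("notes"); // assumed collection name
  return paginate(notes, { pageSize: 20 }); // up to 20 notes per page
}

// page.data holds this page's notes; page.url.prev / page.url.next
// are what a tiny pagination indicator links to
const { page } = Astro.props;
---

<ul>
  {page.data.map((note) => <li>{note.id}</li>)}
</ul>
{page.url.prev && <a href={page.url.prev}>Newer</a>}
{page.url.next && <a href={page.url.next}>Older</a>}
```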

Pagination indicator on the notes page

Pagination indicator on the index page, guiding you to the second page of notes

#TECH10 Dec 26, 2025
Dec 25, 2025

Happy holidays! At the end of 2025, I’m starting a blog. I’ve already written several entries and feel confident I can keep it going.

Here I talk about my past attempts, the writers who inspired me, the motivation, the topics to cover, and the approach I’m taking. It’s my version of a blogging manifesto.

#TECH9 Dec 25, 2025
Dec 24, 2025

I’ve gradually realized that a unified set of formatting rules is needed when working with multiple AI chatbots and agents.

Output formatting styles vary from model to model. For technical topics, I’ve found that Claude tends to format responses like complete documents, starting with an h1 heading and using plenty of horizontal rules to separate sections, while Gemini usually skips straight to h3 headings without any h2s, which in my opinion is not good practice.

Here are examples I tried on OpenRouter, prompting “Explain the Python programming language.”

GPT-5.2, starting with an introductory paragraph followed by sections

Claude Opus 4.5, a document-like output with an h1 heading at the top and multiple horizontal rules

Gemini 3 Flash, using h3 headings directly

Kimi K2 Thinking, also a document-like output

Even worse, in my experience, outputs from different versions of the same model series (e.g. GPT-5 and GPT-5.2) can vary greatly in formatting.

To address this and unify the output styles of the different tools I use (ChatGPT as my daily driver, Gemini for work, and Amp as my coding agent), I drafted a minimal formatting guide:

Shared formatting rules:

  • Use consistent formatting within the same response
  • Insert spaces between English words and CJK characters
  • Always specify the language for syntax highlighting when using fenced code blocks
  • Do not use horizontal rules (<hr /> or ---) unless they add clear structural value; in particular, avoid placing them directly before headings
  • For list items, do not use a period at the end unless the item is a complete sentence

For chat responses:

  • Use “Sentence case” for chat names (auto-generated chat titles) and all section headings (capitalize the first word only); never use “Title Case” in these cases
  • Use heading levels sequentially (h2, then h3, etc.) and never skip levels; an introductory paragraph may come before the first heading; never use h1 in chat responses
  • Avoid filler, praise, or conversational padding (for example “Good question”, “You’re absolutely right”)

For document generation and editing:

  • Use “Title Case” for top-level headings (e.g. h1), typically only once in a document, and “Sentence case” for section headings (capitalize the first word only)
  • Use heading levels sequentially (h2, then h3, etc.) and never skip levels

I apply these rules to the custom instructions setting in ChatGPT and to AGENTS.md for my coding agent.
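For a sense of how this is wired in, here is a hypothetical excerpt of such an AGENTS.md section (illustrative wording, not my actual file):

```markdown
## Formatting

- Use consistent formatting within the same response
- Always specify the language for fenced code blocks
- Avoid horizontal rules unless they add clear structural value
- Use heading levels sequentially (h2, then h3); never use h1 or skip levels
```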

Custom instructions setting in ChatGPT

#TECH8 Dec 24, 2025
Dec 23, 2025

GLM 4.7 and MiniMax M2.1. Chinese AI labs first caught the world’s attention with the DeepSeek models in late 2024. Then in the second half of 2025, we saw a wave of Chinese open-source models like GLM 4.6, MiniMax M2, and Kimi K2. People like these models for their low price, open weights, and solid performance — just slightly below state-of-the-art proprietary models1.

Today, the updated GLM 4.7 and MiniMax M2.1 dropped on the very same day. With public holidays approaching in the US, Chinese AI labs keep pushing forward, making good use of the time 😉

AI is a rapidly changing field. I’m not one to chase every new model release, though I do find myself following this topic more recently. I’m still learning these concepts and trying to find a pragmatic way to use AI tools. I use ChatGPT as my daily driver, Gemini for work (my company subscribes to it), and Amp as my coding agent.

I may not post about every model release in the future, but here are the models on my radar:

  1. For example: Vercel CEO Guillermo Rauch’s post on X, T3 Chat creator Theo’s post on X

#TECH7 Dec 23, 2025
Dec 22, 2025

Since Zig hasn’t hit 1.0 and is still evolving rapidly, following the master branch is common practice for trying out new features and tracking where the language is heading. Even its release notes say “working on a non-trivial project using Zig may require participating in the development process.”

However, nightly master builds quietly stopped on November 26, 2025, when the Zig team announced the migration from GitHub to Codeberg. I assumed the builds were provided by some automation tied to GitHub.

Today, I found that nightly master builds have resumed! The download index JSON used by version managers like ZVM is now being updated again, though the download page hasn’t caught up yet.
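As a quick sanity check, the index can be polled directly. This is my own sketch rather than anything from the Zig project, and it assumes the publicly documented index at https://ziglang.org/download/index.json, whose master entry carries the nightly version and build date:

```ts
// Sketch: read the latest nightly "master" build from Zig's download index,
// the same JSON that version managers like ZVM consume.
// Field names ("master", "version", "date") follow the published index layout.
const res = await fetch("https://ziglang.org/download/index.json");
const index = (await res.json()) as Record<string, { version?: string; date?: string }>;

console.log(`master version: ${index.master?.version}`);
console.log(`built on:       ${index.master?.date}`);
```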

In any case, good news! I’m looking forward to trying out the exciting follow-up work on the new async I/O.

Zig download page

Update Dec 23, 2025: The download page is now updating again!

#TECH6 Dec 22, 2025

TIL: When editing Markdown files in VS Code, you can paste URLs as formatted links via the markdown.editor.pasteUrlAsFormattedLink.enabled setting.

This setting was first introduced in June 2023, with the release of VS Code 1.80.

This is a nice quality-of-life feature. I used to type brackets, parentheses, and URLs manually, always wishing for a simpler way. I’m now using the smart option, which “smartly creates Markdown links by default when not pasting into a code block or other special element.”
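In settings.json, that looks like this (“smart” is the option quoted above):

```jsonc
{
  // Paste copied URLs as Markdown links, except inside code blocks and other special elements
  "markdown.editor.pasteUrlAsFormattedLink.enabled": "smart"
}
```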

Here’s a quick demo:

Paste URLs demo

#TECH5 Dec 22, 2025
Dec 20, 2025

OpenAI Codex now officially supports skills. A few days after people noticed OpenAI quietly adopting skills1, the announcement came today. The thread on X walks through how skills work in Codex and shows examples of installing third-party pre-built skills like Linear and Notion.

Two baked-in skills, skill-creator and skill-installer, are available in Codex, making it easier to bootstrap and install skills. See the official documentation for details.

Codex’s chosen skills location is .codex/skills, joining the fray alongside .claude/skills, .github/skills, and .agents/skills. I really want to see unification here.

  1. Simon Willison’s blog: OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI

#TECH4 Dec 20, 2025

AI Transparency Statement (via). More and more content on the internet is generated by AI these days, and there’s a new word, slop, for the wave of unwanted, unreviewed, low-value AI-generated content. It’s alarming enough that people are starting to get paranoid about the quality of the content they see online, even obviously handcrafted and curated work.

One of the supposed indicators is the em dash (—). Since AI-generated content often includes em dashes, they’ve become a signal, and a warning: you might be reading AI-generated content.

It’s usually not true. But the paranoia runs so deep that some writers like Armin Ronacher now publish statements to defend their work.

As for me, I guarantee that all the content here is written by me. I do use AI tools to help review and refine my writing (including this note), but I’m the one doing the thinking and making the final decisions. That’s an appropriate way to use AI as an editing tool, in my opinion. Maybe I should write a similar statement for this website too — and maybe every content creator should do the same.

#TECH3 Dec 20, 2025
Dec 19, 2025

Agent Skills (via). Anthropic published Agent Skills as an open standard yesterday1, just a few days after they co-founded the Agentic AI Foundation and donated the MCP (Model Context Protocol) to it2. Now, along with the widely adopted AGENTS.md, there are three major agentic AI patterns for managing context and tools.

Among the three, AGENTS.md is the simplest and most straightforward: essentially a dedicated README.md for coding agents. It is usually loaded into the context window at the start of a session, providing general instructions that help the agent understand the user and the workspace.

It originated at OpenAI as a way to unify the chaotic naming conventions for agent instruction files; before it, we had .cursorrules for Cursor, .github/copilot-instructions.md for GitHub Copilot, GEMINI.md for Gemini CLI, and so on. It has gradually been adopted by almost all coding agents except Claude Code, which still insists on its CLAUDE.md. (There’s an open issue, though.)

Agent Skills is another neat pattern. Introduced by Anthropic in October 20253, it is a composable, token-efficient way to give agents new capabilities. LLMs can call tools, and Agent Skills is simply a standardized way to define a set of them. A skill is a set of domain-specific instruction files that the agent can load on demand. Besides Markdown instructions, a skill can also bundle scripts and supplementary resource files, enabling the agent to run deterministic, reproducible tasks.
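To make that concrete, here is a hypothetical minimal skill of my own sketching, say a changelog-writer folder containing a single SKILL.md. The name and description frontmatter fields follow the published spec; everything else is made up for illustration.

```markdown
---
name: changelog-writer
description: Drafts a changelog entry from the commits on the current branch
---

# Changelog writer

1. Collect the commits with `git log main..HEAD --oneline`.
2. Group them into Added / Changed / Fixed sections.
3. Match the style of the existing CHANGELOG.md at the repository root.
```

The agent only sees the name and description up front and loads the rest of the file when the skill is actually needed, which is where the token efficiency comes from.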

Amp, my current coding agent of choice, released support for Agent Skills earlier this month4. Alongside Agent Skills becoming an open standard, GitHub Copilot and VS Code announced support for it5, and Dax, one of the OpenCode maintainers, committed to adding support in the coming days6. The skills folder convention is still not unified, though: .claude/skills for Claude Code, .github/skills for GitHub Copilot, and .agents/skills for Amp. I’d like to see the neutral .agents/skills win.

Compared with these two approaches, MCP is far more complex. It uses a client-server architecture and JSON-RPC for communication instead of natural language — the native language of LLMs. An MCP server can provide remote tools, resources, and pre-built prompts to the MCP client baked into an agent, extending the agent’s capabilities. It was introduced by Anthropic at the end of 20247, and after a year of adoption, its limitations, such as authorization overhead and token inefficiency, have started to show, not to mention how hard it is to implement and integrate. In fact, the only MCP server that still catches my eye is Playwright MCP, which simply gives coding agents browser automation superpowers. Honestly, I haven’t had a chance to try MCP deeply; my opinions here are merely observations, largely shaped by discussions like Simon Willison’s post.

Personally, I’ve already adopted AGENTS.md globally and in my personal projects. As Agent Skills becomes more and more promising, I’m looking forward to trying it out, diving deeper, and building my own set of skills.

  1. Claude blog: Skills for organizations, partners, the ecosystem

  2. Anthropic news: Donating the Model Context Protocol and establishing the Agentic AI Foundation

  3. Claude blog: Introducing Agent Skills

  4. Amp news: Agent Skills

  5. GitHub blog: GitHub Copilot now supports Agent Skills

  6. Dax’s post on X

  7. Anthropic news: Introducing the Model Context Protocol

#TECH2 Dec 19, 2025
Page 1 / 2