Link Notes
Paul Graham’s post on X about taste. Another interesting post from Paul Graham1 — what struck me is how his posts spark real discussion. All the comments are worth reading. I’m following him now.
Oddly enough, I first learned about Paul Graham through his essays, and only later realized he co-founded Y Combinator and is such a central figure in Silicon Valley.
-
My previous note: Paul Graham’s post on X about writing ↵
Paul Graham’s post on X about writing. I started writing recently (as you can probably tell), so I’ve been reading a lot about it. What’s interesting are the comments under this post. People are sharing their own thoughts on writing, and many are surprisingly inspiring. Reading them makes me feel less alone in my writing journey.
How uv got so fast (via). I haven’t followed the Python ecosystem for maybe five years. But I know uv has taken off. I have it on my Mac, and it’s my go-to when I occasionally want to play with Python. It feels like pnpm or Cargo — fast and modern.
I assumed Rust was the main reason uv is so fast. Turns out, that’s actually the least important factor. Years of PEP standards made uv possible in the first place. Intentionally limited compatibility with pip, plus smart language-agnostic optimizations, did most of the heavy lifting. It’s the design choices, not the language choice, that really matter.
GLM 4.7 and MiniMax M2.1. Chinese AI labs first caught the world’s attention with the DeepSeek models in late 2024. Then in the second half of 2025, we saw a wave of Chinese open-source models like GLM 4.6, MiniMax M2, and Kimi K2. People like these models for their low price, open weights, and solid performance — just slightly below state-of-the-art proprietary models1.
The updated GLM 4.7 and MiniMax M2.1 both dropped today. As public holidays approach in the US, Chinese AI labs keep pushing forward, making good use of the time 😉
AI is a rapidly changing field. I’m not one to chase every new model release, though I do find myself following this topic more recently. I’m still learning these concepts and trying to find a pragmatic way to use AI tools. I use ChatGPT as my daily driver, Gemini for work (my company subscribes to it), and Amp as my coding agent.
I may not post about every model release in the future, but here are the models on my radar:
- Proprietary models: GPT (OpenAI), Claude (Anthropic), and Gemini (Google)
- Open-source models: DeepSeek, Kimi (Moonshot AI), GLM (Z.ai), and MiniMax
OpenAI Codex now officially supports skills. A few days after people noticed OpenAI quietly adopting skills1, the official announcement came today. The thread on X walks through how skills work in Codex and shows examples of how to install third-party pre-built skills like Linear and Notion.
Two baked-in skills, skill-creator and skill-installer, ship with Codex, making it easier to bootstrap new skills and install existing ones. See the official documentation for details.
Codex puts skills in .codex/skills, joining the fray alongside .claude/skills, .github/skills, and .agents/skills. I'd really like to see this unified.
-
Simon Willison’s blog: OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI ↵
AI Transparency Statement (via). More and more content on the internet is generated by AI these days, and there's a new word, slop, for the wave of unwanted, unreviewed, low-value AI-generated content. The situation is alarming enough that people are becoming paranoid about the quality of everything they read online, even obviously handcrafted and curated work.
One such indicator is the em dash (—). Since AI-generated content often includes em dashes, they have become a signal, even a warning: you might be reading AI-generated content.
That inference is usually wrong. But the paranoia runs so deep that some writers, like Armin Ronacher, now publish statements to defend their work.
As for me, I guarantee that all content here is written by me. I do use AI tools to help review and refine my writing (including this very note), but the thinking and the final decisions are mine. That's an appropriate way to use AI as an editing tool, in my opinion. Maybe I should write a similar statement for this website too, and maybe every content creator should do the same.
Agent Skills (via). Anthropic published Agent Skills as an open standard yesterday1, just a few days after they co-founded the Agentic AI Foundation and donated the MCP (Model Context Protocol) to it2. Now, along with the widely adopted AGENTS.md, there are three major agentic AI patterns for managing context and tools.
Among the three, AGENTS.md is the simplest and most straightforward: essentially a dedicated README.md for coding agents. It is usually loaded into the context window at the start of a session, providing general instructions that help the agent understand the user and the workspace.
It originated from OpenAI as an effort to unify the chaotic naming conventions for agent instruction files; before it, we had .cursorrules for Cursor, .github/copilot-instructions.md for GitHub Copilot, GEMINI.md for Gemini CLI, and so on. It has gradually been adopted by almost all coding agents, except Claude Code, which still insists on its CLAUDE.md. (There's an open issue, though.)
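Since the file is free-form Markdown, there is no required structure; the headings and project details below are just my own made-up sketch of the kind of instructions one might hold:

```markdown
# AGENTS.md

## Project overview
A static blog; notes live in src/content/notes/ as Markdown files.

## Commands
- pnpm install to set up, pnpm dev to preview, pnpm build to publish.

## Conventions
- Keep notes short and conversational; link sources inline.
- Never commit generated files under dist/.
```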
Agent Skills is another neat practice. Introduced by Anthropic in October 20253, it is a composable and token-efficient way to provide capabilities to agents. LLMs can call tools, and Agent Skills is just a simple and standardized way to define a set of tools. A skill is a set of domain-specific instruction files, which can be loaded on demand by the agent itself. Besides instructions in Markdown, a skill can also bundle a set of scripts and supplementary resource files, enabling the agent to run deterministic and reproducible tasks.
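To make that concrete, here is a minimal sketch of a skill based on Anthropic's published format: a folder containing a SKILL.md whose YAML frontmatter carries the name and description the agent scans to decide whether to load it. The skill itself (a changelog helper) is a hypothetical example of mine, not an official one:

```markdown
---
name: changelog-writer
description: Draft a changelog entry from recent git commits, grouped by type of change.
---

# Changelog writer

1. Run git log since the last tag to collect the relevant commits.
2. Group them under Added / Changed / Fixed.
3. Draft the entry and ask the user before touching existing entries.

Optional bundled helper: scripts/collect_commits.sh for deterministic output.
```

Only the frontmatter is surfaced up front; the body and any bundled scripts are read when the skill is actually invoked, which is where the token efficiency comes from.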
Amp, my current coding agent of choice, released support for Agent Skills earlier this month4. Along with Agent Skills becoming an open standard, GitHub Copilot and VS Code announced their support5, and Dax, one of the OpenCode maintainers, committed to adding support in the coming days6. The skills folder naming convention is still not unified, though: .claude/skills for Claude Code, .github/skills for GitHub Copilot, and .agents/skills for Amp. I'd like to see the neutral .agents/skills win.
Compared with these two approaches, MCP is far more complex. It uses a client-server architecture and JSON-RPC for communication, instead of natural language, the native language of LLMs. An MCP server can provide remote tools, resources, and pre-built prompts to the MCP client baked into an agent, extending the agent's capabilities. It was introduced by Anthropic at the end of 20247, and after a year of adoption its limitations, such as authorization overhead and token inefficiency, have started to emerge, not to mention how difficult it is to implement and integrate. In fact, the only MCP server still catching my eye is Playwright MCP, which simply gives coding agents browser-automation superpowers. Honestly, I haven't found a chance to try MCP in depth; the opinions here are merely my observations, largely shaped by discussions of it, like Simon Willison's post.
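For a rough sense of what that complexity looks like on the wire, a tool invocation in MCP is a JSON-RPC 2.0 request from the client to the server; the tool name and arguments below are only an illustration in the spirit of Playwright MCP, not taken from its actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "browser_navigate",
    "arguments": { "url": "https://example.com" }
  }
}
```

Every tool and its schema has to be defined, advertised, and round-tripped through this layer, which is part of why MCP feels so much heavier than a Markdown file read straight into context.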
Personally, I've already adopted AGENTS.md globally and in my personal projects. With Agent Skills looking more and more promising, I'm looking forward to trying it out, diving deeper, and building my own set of skills.
-
Claude blog: Skills for organizations, partners, the ecosystem ↵
-
Anthropic news: Donating the Model Context Protocol and establishing the Agentic AI Foundation ↵
-
Claude blog: Introducing Agent Skills ↵
-
Amp news: Agent Skills ↵
-
GitHub blog: GitHub Copilot now supports Agent Skills ↵
-
Anthropic news: Introducing the Model Context Protocol ↵
Berkeley Mono (via). It looks like the major coding agents, including Claude Code, Cursor, and Amp (which I mainly use these days), are all using this monospaced typeface on their social media1 and web pages2. The typeface looks great and indeed has a retro-computing charm. The type foundry, US Graphics Company, introduces it as “a love letter to the golden era of computing”:
Berkeley Mono coalesces the objectivity of machine-readable typefaces of the 70’s while simultaneously retaining the humanist sans-serif qualities. Inspired by the legendary typefaces of the past, Berkeley Mono offers exceptional straightforwardness and clarity in its form. Its purpose is to make the user productive and get out of the way.
Berkeley Mono specimen from the official website
As the introduction suggests, the typeface reminds me of man pages, telephone books, and vintage technical documentation. The foundry’s website also reflects that aesthetic.
Berkeley Mono is a commercial typeface. Curiously, however, some of those coding agents appear to be using it without a license, which has led the foundry to frequently tag them on X1.
-
The type foundry’s posts on X: Claude uses Berkeley Mono, Cursor uses Berkeley Mono ↵ ↵