Zig, Rust, AI, community

Reading time: 9 minute(s).
2026-05-13

I like writing software and I like computers. I've also always been fascinated by the theory of programming languages and of computation itself. This is an important premise, since not everyone codes with the same objectives I do.

For instance, I cherish safety and correctness over performance or development time, and what I cherish most is the ability to reason about complex systems via elegant abstractions, just like a mathematician loves a short but deep formula.

We should never forget that, up until now, the abstractions we developed were meant for humans to understand, chasing some idea of "elegance".

Rust

When I learned Rust, circa 2018, it was regularly mocked by C/C++ devs and regarded as needlessly complex and unfit for writing any serious, performant program. I learned Rust mainly because I was trying to use LLVM (following the Kaleidoscope tutorial), and I didn't know enough C++ to be comfortable. I picked Rust out of curiosity and got further in a weekend than I ever did in C++.

I also had previous experience with Haskell and loved the ergonomics of the language, so I was immediately sold on Rust, which I still use to this day.

Rust gave me a degree of correctness similar to Haskell's, and that assurance is composable: I remember boasting that my Haskell programs were free of partial functions and panics and could simply never crash. Every code path was handled, and the compiler assured it for me.

Nowadays we'd say that some Rust crates use no unsafe blocks and never call unwrap(). Use away!
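A minimal sketch of what unwrap-free code looks like in practice (parse_port is a hypothetical function, not from any crate mentioned here): failure lives in the return type, and the caller is forced to acknowledge it.

```rust
use std::num::ParseIntError;

// Parsing that propagates failure instead of panicking: no unwrap(),
// so the error path is visible in the signature.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Both outcomes must be handled explicitly; the compiler checks it.
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```

This is the composable assurance mentioned above: a crate whose public API returns Result and contains no unsafe extends the same guarantee to everyone downstream.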

This significantly lowered the bar for anyone to write software with performance on par with other "systems programming languages", to the dismay of the previous generation of programmers who kept track of allocations in their heads.

Thanks to the modern and centralized crates.io, anyone could pull some Rust dependency (hopefully with a safe and simple API), write some code, and push a new crate for others to continue the cycle. I regularly compile projects with hundreds of dependencies, direct and transitive. Writing software that was usually relegated to some "high-level scripting language" becomes feasible in Rust thanks to the richness of this ecosystem.

Writing a modern Rust project feels closer to a JavaScript/TypeScript project than to a C++ one. People are worried about massive supply-chain attacks waiting to happen as the ecosystem becomes more and more similar to npm.

Overall, the world traded a bit of speed and efficiency for correctness and wider community. Great!

... and LLMs

LLMs are excellent at generating Rust code, since there's a lot of it to train on and there's a very good compiler. They're just as great at writing Python, JavaScript, and the like, but Rust is the only language where you can also spit out reasonably fast code with the same cognitive overhead as the first two.

Yesterday I thought I'd found a bug in a small Rust application I had downloaded via Cargo without thinking too much about it.

So I git clone the source to take a look. A quick glance at the folder immediately reveals that a lot of AI-assisted coding has taken place, both from the abundance of folders and subcrates for what I thought was a very simple tool, and from the proliferation of AGENTS.md files and the like throughout the codebase.

I eventually navigate to the bit of code that concerned me, and I find that the program tries to read some number from a /sys/devices/blahblah, fair, but I don't have permission for it, so it returns 0 and calls it a day. I find somewhere in the docs how to set up a udev rule to grant myself permission, and then it works. But that's not the point.
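A hedged reconstruction of the pattern described above (the path and function names are hypothetical stand-ins, not the actual tool's code): the first version swallows the permission error and reports 0, the second surfaces it, which is what a human reviewer would likely expect.

```rust
use std::{fs, io};

// Hypothetical stand-in for the real /sys/devices/... file.
const SYSFS_PATH: &str = "/sys/class/hypothetical/value";

// The pattern the tool apparently used: any failure, including
// a permission error, silently collapses into 0.
fn read_value_silently() -> u64 {
    fs::read_to_string(SYSFS_PATH)
        .ok()
        .and_then(|s| s.trim().parse().ok())
        .unwrap_or(0)
}

// The alternative: propagate the I/O error so the caller can tell
// "the value is 0" apart from "I couldn't read the value".
fn read_value() -> io::Result<u64> {
    let s = fs::read_to_string(SYSFS_PATH)?;
    s.trim()
        .parse()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}

fn main() {
    match read_value() {
        Ok(v) => println!("value = {v}"),
        Err(e) => eprintln!("could not read value: {e}"),
    }
}
```

Both versions compile cleanly and neither panics, which is exactly why "it compiles and runs" told me nothing here.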

I don't trust Rust code anymore.

Uncanny valley effect

Yeah, it compiles, works, prints nice rocket emojis, but my understanding and experience are USELESS. Looking for a bug in an LLM-written codebase is like trying to interpret a dream and make sense out of it. Mistakes won't be where humans make mistakes, it won't be structured like a hand-made codebase, and most of all it tries very hard to look human.

No matter how good it gets at looking human, it's always a show (at least for this kind of "AI"). There is no human with a mental model of how code works behind the keyboard; there is no keyboard at all. The model might output something that looks like a human struggling to learn C's memory model, but it will never learn and will never meet the same hardships humans do. It might follow some coding guidelines picked up from the internet, or structure your project like most other projects are structured, but that's not the AI's judgement. It might even write a convincing little tale of how it reached its conclusion, but that's all useless.

Assembly is for machines, code is for humans, and code as we know it today was invented so humans could speak among themselves about code. You write your code so that in a few weeks you'll still be able to understand it, we write comments so people can use our libraries, and so on. An AI model knows nothing of this.

Zig

Zig is the decade-long project of a single developer aiming to create a pragmatic programming language, "even more than C".

It doesn't provide a new fancy memory model, but rather exposes the same low-level details with a (hopefully) much more manageable API. This went directly against my instincts as a types kind of person, since the whole safety business is based on the idea that humans shouldn't have to think about things a compiler could do for them.

Instead, anything that allocates must do so explicitly, and you must have an instance of a std.mem.Allocator of your liking. There are multiple allocators to grab from the standard library. The default allocator used for tests (std.testing.allocator) will detect memory leaks and double frees, with generous stack traces, by default.

They've also decided, at some point, to move away from LLVM and self-host their compiler backend. The hubris!

Zig is undergoing heavy development, with changes to the language or the standard library every few months. Version 1.0 is supposed to be stable forever. Again, the hubris!

Given these premises, it feels like Zig is unfit for any serious programming. Period.

I've changed my mind: they were right about most of these things. I gave Zig a spin a few days ago and was pleasantly surprised; they are slowly and steadily delivering on everything. After a few days I could fit a lot of Zig in my head and read the standard library like the grownups. It had been a while since I had genuinely learnt something new, and it felt so good to get tests to pass and see my code run in this new language I had thought was so hard to use.

... and LLMs

LLMs are absolute trash at writing Zig. There are few examples and explainers online, and they're probably outdated. The standard library's documentation is the code itself: just read it, you know Zig, right?

Zig's compilation model makes this even harder, since code that isn't needed isn't compiled at all, and the same code can be compiled into two different functions thanks to comptime. Most errors will only be caught via tests. This is a longer feedback loop than Rust's, and code that compiles and seems to work might still crash later.

The Zig community is also vehemently anti-AI and has recently enforced a strict no-LLM policy, anywhere. It made many headlines and forced bun to fork and maintain their own Zig compiler, since the Zig team wouldn't accept their massive AI-assisted PR delivering a 4x speedup in compilation using a non-deterministic parallel compiler backend.

Recently bun was also ported (apparently as an experiment) to Rust, with great success. I can only speculate why: AI sucks at Zig. I predict the Rust fork will become the de facto bun compiler, and then die when the tokens eventually run out. But that's just an uninformed opinion.

I LOVE IT.

I WANT to write Zig now. I want to find other people who love languages and write little nifty libraries in Zig. I can trust them.

I'll have to read some source code anyway, since there is probably little or no documentation. It had better be simple and hackable; anything too complex and you won't be able to hold it in your head.

It was about community, and the folks over at the Zig foundation who decided to ban LLMs in order to develop mature, expert contributors were right. I still love the Rust community as a whole, and I don't think any less of people who use LLMs to assist their coding. This was just a reflection on how AI, more than anything, eroded the trust I had in "safety", and made me realize how much closer I am to my fellow humans now that some of them are gone!

If a Haskell function has type


square :: Num a => a -> a
square x = x * x

then we can infer a bunch of nice properties about it, like that it won't do I/O or (ignoring bottoms) fail, but nothing about its type prevents this:


square :: Num a => a -> a
square x = x * x * x

This is where I trust humans not to deceive me, or to do it in predictable, human ways I'm used to. It's still much easier to write memory-safe Rust code (that is the whole point of the language, after all), and the advent of AI doesn't make Rust any less useful.

References and Other Relevant Articles

These quick articles (and threads) expand on and explain, much better than I did, some of the things I covered here, especially the first one!

An almost identical article and main inspiration to write this

bun's fork in Rust passes 99.8% of the existing test suite

Zig's homepage

Zig's AI policy rationale
