From small to enterprise: impressions

Reading time: 5 minute(s).
2025-01-26

For the past 8 months I've been working at an undisclosed company, and I began to use a lot of software I never had the chance or interest to look at, because you only need it at enterprise scale. I've learned to look at software in a lot of new ways that I completely missed as a hobbyist. Maybe these things are quite obvious to everyone else, but they totally were not to me.

Software is not bloated. Sometimes you need Kubernetes, or Java

I don't think anyone ever seriously argued that Google doesn't need Kubernetes. Still, there is a feeling in some unix circles that the modern software stack is bloated. I avoid systemd myself; I admit I still haven't gotten over it in 2025.

Yet I realized that sometimes you do need Kubernetes: what it really does is help with scalability. I work in an industrial, air-gapped environment, and we want the client to be able to just throw more hardware at the problem to distribute the load, possibly to infinity. Kubernetes helps a lot with that.

Sometimes you also need to use Kafka, which is written in the bloated and slow Java, and sometimes you need to use Kafka Streams, which is also a Java library, even if you despise Java.

The good news is that you can actually compile Java to a native binary with GraalVM and use Quarkus with Kafka Streams, which makes it a nice experience even if you're not a hardcore Java fan: it has a modern feel and doesn't make you want to commit suicide.
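
To give a rough idea, this is more or less what a stream processor looks like with the quarkus-kafka-streams extension: you expose a Topology bean and Quarkus runs it for you. The topic names and the filtering logic here are made up, it's just a minimal sketch:

```java
// Minimal sketch of a Kafka Streams topology managed by Quarkus.
// Topic names ("readings", "readings-filtered") are placeholders.
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

@ApplicationScoped
public class FilterTopology {

    @Produces
    public Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("readings")                    // consume raw records
               .filter((key, value) -> value != null) // drop empty payloads
               .to("readings-filtered");              // write to the output topic
        return builder.build();
    }
}
```

Build it with `./mvnw package -Dnative` and you get a native binary that starts much faster and uses far less RAM than the same thing on a plain JVM.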

A lot of the time you don't need to scale like that, and this brings me to the second point:

You can and should go for mad performance, even if you're writing a containerized application meant to be scaled.

If your program is reading and writing data to an infinite Kafka cluster, the load can be shared evenly (up to the number of partitions) just by running more copies of the program. This means that if your program hits a bottleneck you can probably spin up another container or more worker threads. What is nice is that on a single host you might run out of RAM, have too many threads, hit IO bottlenecks, and so on, but you can always "run it on another machine".
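
In plain consumer terms (outside of Kafka Streams), the mechanism is just consumer groups: every copy of the process that joins the same group gets its own slice of the topic's partitions. Broker address, topic, and group id below are placeholders:

```java
// Rough sketch of the "just run more of it" idea: every copy of this process
// that joins the same consumer group gets its own share of the partitions.
// Broker address, topic, and group id are made up.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Worker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "readings-workers"); // same group => shared load
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("readings"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // do the real work here; Kafka rebalances partitions automatically
                    // whenever copies of this process come and go
                    System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
                }
            }
        }
    }
}
```

Start a second copy and Kafka rebalances the partitions between them; kill one and its partitions are handed back. That's the whole "run it on another machine" trick.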

Still, writing programs with small footprints and minimal containers is very rewarding: you get higher container density per node, so you can run a lot of isolated services that scale in a granular way. And if you avoid the complexities of Kubernetes and containers and write a monolith instead, you'd need to worry a lot about performance and scaling anyway.

You can get very, very far without scaling, and sometimes the infrastructural overhead of scaling just isn't worth it. But if you worry about it once and then work on performance as usual, you'll unleash great powers.

I am a lot more aware of the commercial motive behind software

Since I started being paid to write code I understand the "why" of a lot of tools aimed at programmers. I tend to ask myself the fundamental question "who's paying these people?" when I look at an open source project, especially now that I understand how many man-hours and resources it takes to maintain a big project.

When you're coding against a deadline, with unrealistic or changing requirements, your approach to software changes. Sometimes it's better to reach for reusable technologies over performance and elegance. Software that is easy to throw away is just as important as code that's easy to refactor.

Sometimes you have to use the thing that just works, regardless of how ugly it is.

Not everyone uses VS Code and/or is a corporate slave

I have someone on my team using micro as their primary editor, someone using VS Code and a lot of Copilot in a very effective and interesting way (filling in configuration fields, formatting, etc.), and I know people using Emacs who have written big critical systems in C.

These people are all very talented, and each has found the workflow in which they're most effective. I really like the diversity.

The DRY principle is still my favourite thing

I like to phrase Don't Repeat Yourself with the similar slogan "have a single source of truth". Our entire project can be deployed with helmfile, and if I wanted to change a database endpoint I'd only have to modify it in a single location, run a command, and the affected containers would be rolled over with no downtime. It's so powerful, and in my head it solves the problem of having to write the database name twice. This stuff is pretty old in 2025, but I still find it pretty exciting :).
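
As a made-up sketch of what that looks like: the endpoint lives in one values file, every release pulls it in, and `helmfile apply` rolls out the change. Names here are purely illustrative:

```yaml
# values/common.yaml -- the one place the endpoint is written (names are illustrative)
database:
  host: db.internal.example
  port: 5432

# helmfile.yaml -- every release reads the same values file
releases:
  - name: ingest
    chart: ./charts/ingest
    values:
      - values/common.yaml
  - name: api
    chart: ./charts/api
    values:
      - values/common.yaml
```

Change `database.host` once, run `helmfile apply`, and only the releases whose rendered manifests actually changed get rolled.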

At work I've used a lot of protobuf, OpenAPI, and JSON Schema. We can generate documentation out of them, but also code and parsers. They just work, and we can update them on a whim. The benefit is evident when I compare it with the time I spend (waste) debugging terrible industrial binary protocols.
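
For example, a tiny protobuf schema like the (made-up) one below is enough to get generated parsers in whatever language you need, plus documentation, out of a single definition:

```protobuf
// A toy schema; message and field names are made up.
syntax = "proto3";

package telemetry;

message Reading {
  string sensor_id = 1;  // which sensor produced the value
  double value     = 2;  // the measurement itself
  int64  timestamp = 3;  // unix epoch millis
}
```

Running `protoc --java_out=build/gen reading.proto` spits out the Java classes, and the same file feeds the docs. Compare that with reverse-engineering a vendor's binary framing byte by byte.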
