The HPC cluster as a reflection of values

Yesterday while I was cooking dinner, I happened to re-watch Bryan Cantrill’s talk on “Platform as a Reflection of Values”. (I watch a lot of tech talks while cooking or baking — I often have trouble focusing on a video unless I’m doing something with my hands, but if I know a recipe well I can make it on autopilot.)

If you haven’t watched this talk before, I encourage you to check it out. Cantrill gave it in part to talk about why the node.js community and Joyent didn’t work well together, but I thought he had some good insights into how values get built into a technical artifact itself, as well as how the community around that artifact prioritizes certain values.

While I was watching the talk (and chopping some vegetables), I started thinking about what values are most important in the “HPC cluster platform”.

Continue reading

Adam’s weekly update, 2022-12-04

What’s new

This week was really intense from a work perspective. Not “bad intense”, but the kind of week where every day was spent with such a level of focus that at 5 PM or so I found myself staring off into space and forgetting words. I think I got some good things accomplished, but my brain also felt like mush by the time the weekend came.

Continue reading

happy living close (-ish) to the metal

For various reasons, I’ve been doing a little bit of career introspection lately. One of the interesting realizations to come out of this is that, despite doing mostly software work in practice, I’ve been happiest when that work involved a strong awareness of the hardware it was running on.

Continue reading

The web services I self-host

Why self-host anything?

In a lot of ways, self-hosting web services is signing up for extra pain. Most useful web services are available as SaaS these days, and most people don’t want to become a sysadmin just to use chat or email, or to read the news.

In general, I decide to self-host a service if one of two things is true:

Continue reading

Interesting links I clicked this week

I watched several really interesting talks from SRECon22 Americas this week, and in particular I’d like to highlight:

  • Principled Performance Analytics, Narayan Desai and Brent Bryan from Google. Some interesting thoughts on quantitative analysis of live performance data for monitoring and observability purposes, moving past simple percentile analysis.
  • The ‘Success’ in SRE is Silent, Casey Rosenthal from Verica.io. Interesting thoughts here on the visibility of reliability, qualitative analysis of systems, and why regulation and certification might not be the right thing for web systems.
  • Building and Running a Diversity-focused Pre-internship Program for SRE, from Andrew Ryan at Meta. Some good lessons learned from the first year of an early-career, internship-like program.
  • Taking the 737 to the Max, Nickolas Means from Sym. A really interesting analysis of the Boeing 737 Max failures from both a technical and cultural perspective, complete with some graph tracing to understand failure modes.

I also ran across some other articles that I’ve been actively recommending and sharing with friends and colleagues, including:

  • Plato’s Dashboards, Fred Hebert at Honeycomb. This article has some great analysis of how easily measurable metrics are often poor proxies for the information we’re actually interested in, and it discusses qualitative research methods as a way to gain more insight.
  • The End of Roe Will Bring About A Sea Change In The Encryption Debate, Riana Pfefferkorn from the Stanford Internet Observatory. You should absolutely go read this article, but to sum up: law enforcement in states that ban abortion is now absolutely part of the threat model that encrypted messaging defends against. No one claiming to be a progressive should be arguing in favor of “exceptional access” or other law enforcement access to encryption.

An unstructured rant on running long-lived software services

– Be kind to your colleagues. Be kind to your users. Be kind to yourself. This is a long haul and you’ll all fuck up.

– The natural environment for your code is production. It will run there longer than it does anywhere else. Design for prod first, and if possible, make your dev environment act like prod.

– Legacy code is the only code worth caring about.

– Users do weird stuff, but they usually have a very good reason, at least in their context. Learn from them.

– It’s 2022, please do structured logging. (There’s a rough sketch at the end of this post.)

– Contexts and tracing make everyone’s lives easier when it comes time to debug. At minimum, include a unique request ID with every request and plumb it through the system; the logging sketch at the end of this post shows one way to do that.

– Do your logging in a separate thread. It sucks to find a daemon blocked and hanging because of a full disk or a down syslog server. (See the queue-based logging sketch at the end of this post.)

– Don’t page for individual machines going down. Do provide an easy or automated way for bad nodes to get thrown out of the system.

– Be prepared for your automation to be the problem, and include circuit breakers or kill switches to stop it. I’ve seen health checks that started flagging every machine in the fleet as bad, whether it was healthy or not. The only reason we didn’t bring down prod was that the code assumed that if it flagged more than 15% of the fleet as bad, the problem was probably with the check, not the service. (There’s a sketch of this kind of breaker at the end of this post.)

– Make sure you have a way to know who your users are. If you allow anonymous access, you’ll discover in five years that a business-critical team you’ve never heard of is relying on you.

– Make sure you have a way to turn off access for an individual machine, user, etc. If your system does anything more expensive than sending network requests, it will be possible for a single bad client to overwhelm a distributed system with thousands of servers. Turning off their access is easier than begging them to stop. (The deny-list sketch at the end of this post shows the idea.)

– If you don’t implement QoS early on, it will be hellish to add it later, and you will certainly need it if your system lasts long enough.

– If you provide a client library and your system is internal-only, have the library send its logs to the same system as your servers. This will make it so much easier to trace issues back to misbehaving clients.

– Track the build time for every deployed server binary and monitor how old they are. If your CI process deploys daily, week-old binaries are a problem. Month-old binaries are a major incident. (The build-age sketch at the end of this post shows one way to track this.)

– If you can get away with it (internal services): track the age of client library builds and either refuse to support builds older than X, or just cut them off entirely. It sucks to support requests from year-old clients; force them to upgrade!

– Despite all this, you will at some point start getting requests from an ancient software version, or requests that are otherwise malformed. Make sure these requests don’t break anything.

– Backups are a pain, and the tooling is often bad, but I swear they will save you one day. Take the time to invest in them.

– Your CI process should exercise your turnup process, your decommission process, and your backup workflow. Life will suck later if you discover one of these is broken.

– Third-party services go down. Your service goes down too, but the two probably won’t happen at the same time. Be prepared to either operate without them or mirror them yourself.

– Your users will never, ever care if you’re down because of a dependency. Every datacenter owned by AWS could be hit by a meteor at the same time, but your user will only ever ask “why doesn’t my service work?”

– Have good human relationships with your software dependencies. Know the people who develop them, keep in touch with them, make sure you understand each other. This is especially true internally but also important with external deps. In the end, software is made of people.

– If users don’t have personal buy-in to the security policy, they will find ways to work around it and complain about you for making their lives harder. Take the time to educate them, or you’ll be fighting them continuously.
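
A few of the points above are easier to show than to tell, so here are some rough sketches. They’re all in Python, and every name, path, and threshold in them is invented for illustration; treat them as sketches of the ideas, not drop-in code. First, structured logging with a request ID plumbed through a contextvar, so a single request can be traced across log lines:

```python
# Sketch: JSON-structured logs that carry a per-request ID.
# "request_id", "myservice", and the field names are all made up.
import json
import logging
import uuid
from contextvars import ContextVar

request_id: ContextVar[str] = ContextVar("request_id", default="-")

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # One JSON object per log line, including the current request ID.
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "request_id": request_id.get(),
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
log = logging.getLogger("myservice")

def handle_request(payload: dict) -> None:
    # Reuse an upstream request ID if the caller sent one; otherwise mint one.
    # The same ID then shows up in every log line this request produces.
    token = request_id.set(payload.get("request_id", uuid.uuid4().hex))
    try:
        log.info("handling request")
    finally:
        request_id.reset(token)

handle_request({"user": "alice"})
```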
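
Next, the “do your logging in a separate thread” point. The standard library’s QueueHandler/QueueListener pair keeps slow log destinations off the request path; the file name here is a placeholder:

```python
# Sketch: hand log records to an in-memory queue; a listener thread does the
# actual (potentially blocking) I/O. "service.log" is just a placeholder.
import logging
import logging.handlers
import queue

log_queue: queue.Queue = queue.Queue(-1)  # unbounded, as in the stdlib docs

# The handler that actually touches disk lives behind the listener thread.
file_handler = logging.FileHandler("service.log")
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

# Application code only ever talks to the queue, so log calls return quickly
# even when the disk is full or a remote syslog server is down.
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.handlers.QueueHandler(log_queue))

logging.getLogger("myservice").info("this call does not block on disk I/O")

listener.stop()  # flush anything still queued at shutdown
```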
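
For the automation circuit breaker: when a health check suddenly flags a large fraction of the fleet, the check itself is the more likely culprit, so the automation should stop and make a human look instead of acting. The 15% threshold matches the anecdote above but is otherwise arbitrary:

```python
# Sketch: refuse to auto-drain machines if the health check flags "too many"
# of them at once, on the theory that the check itself is probably broken.
MAX_BAD_FRACTION = 0.15  # same threshold as the anecdote above; tune to taste

def machines_to_drain(health: dict[str, bool]) -> list[str]:
    """Return the machines that are safe to drain automatically."""
    bad = [host for host, healthy in health.items() if not healthy]
    if health and len(bad) > MAX_BAD_FRACTION * len(health):
        # Trip the breaker: do nothing automatically and page a human instead.
        raise RuntimeError(
            f"{len(bad)}/{len(health)} machines flagged unhealthy; "
            "suspecting the health check itself, refusing to act"
        )
    return bad

# One bad node out of ten gets drained automatically...
print(machines_to_drain({f"node{i}": i != 3 for i in range(10)}))
# ...but a check that suddenly flags half the fleet raises instead of draining.
```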
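
For the per-client kill switch, the smallest useful version is a deny list that operators can edit without a deploy, consulted on every request. The path and the client-ID scheme are invented for the example:

```python
# Sketch: a per-client kill switch backed by a small file that operators can
# edit at runtime. The path and client-ID scheme are made up; a real system
# would probably cache the list and reload it periodically.
import json
import pathlib

DENYLIST_PATH = pathlib.Path("/etc/myservice/denylist.json")  # e.g. ["team-x-batch"]

def load_denylist() -> set[str]:
    try:
        return set(json.loads(DENYLIST_PATH.read_text()))
    except FileNotFoundError:
        return set()  # no file means nobody is blocked

def check_access(client_id: str) -> None:
    """Reject requests from clients that have been explicitly switched off."""
    if client_id in load_denylist():
        raise PermissionError(f"client {client_id!r} is currently blocked")

# In a request handler, something like:
#   check_access(request.headers["X-Client-Id"])
```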
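
And finally, build-age tracking: stamp a timestamp into every binary at build time, export your own age as a metric, and refuse requests from client builds past a cutoff. The constants and the injection mechanism are assumptions:

```python
# Sketch: monitor how stale the running binary is, and refuse ancient clients.
# BUILD_TIME would be stamped in by CI; the value and thresholds here are
# placeholders, not recommendations.
import time

BUILD_TIME = 1670000000          # unix timestamp injected by the build
WARN_AFTER = 7 * 24 * 3600       # "week-old binaries are a problem"
MAX_CLIENT_AGE = 90 * 24 * 3600  # refuse client builds older than ~3 months

def binary_age_seconds() -> float:
    """Export this as a metric and alert once it crosses WARN_AFTER."""
    return time.time() - BUILD_TIME

def check_client_build(client_build_time: float) -> None:
    """Refuse to serve requests from client builds past the cutoff."""
    if time.time() - client_build_time > MAX_CLIENT_AGE:
        raise PermissionError("client build too old; please upgrade")

if binary_age_seconds() > WARN_AFTER:
    print("warning: this binary is more than a week old")
```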

A supportive job interview story

(adapted from an old lobste.rs comment)

My favorite interview ever was a systems interview that didn’t go as planned. This was for an SRE position, and while I expected the interview to be a distributed systems discussion, the interviewer instead wanted to talk kernel internals.

I was not at all prepared for this, and admitted it up front. The interviewer said something along the lines of, “well, why don’t we see how it goes anyway?”

He then proceeded to teach me a ton about how filesystem drivers work in Linux, in the form of leading me carefully through the interview question he was “asking” me. The interviewer was incredibly encouraging throughout, and we had a good discussion about why certain design decisions worked the way they did.

I ended the interview (a) convinced I had bombed it, but (b) having had an excellent time anyway and having learned a bunch of new things. I later learned the interviewer had recommended hiring me based on how our conversation had gone, though I didn’t end up taking the job for unrelated reasons having to do with relocation.

I’ve given a number of similar interviews since, on system design or general sysadmin skills. I’ve always tried to go into them thinking about both where I could learn and where I could teach, and how either outcome would give the candidate a chance to shine.

Developing managed vs self-hosted software

I’ve done some work lately with teams that deliver their products in very different ways, and it has me thinking about how much our “best practices” depend on a product’s delivery and operations model. I’ve had a bunch of conversations about this tension.

On the one hand, some of the teams I’ve worked with both develop and operate their software services, and their customers (internal or external) directly make use of the operated service. These teams try to follow what I think of as “conventional” SaaS best practices:

  • Their development workflow prioritizes iteration speed above all else
  • They tend to deploy from HEAD, or close to it, in their source repository
    • In almost all cases, branches are short-lived for feature development
  • They’ve built good automated test suites and well-tuned CI/CD pipelines
  • Releases are very frequent
  • They make extensive use of observability tooling, often using third-party SaaS for this
  • Fast roll-back is prioritized over perfect testing ahead of time
  • While their user documentation is mostly good, their operations documentation tends to be “just good enough” to onboard new team members, and a lot of it lives in Slack

On the other hand, we also have plenty of customers who deploy our software to their own systems, whether in the cloud or on-premises. (Some of them don’t even connect to the Internet on a regular basis!) The development workflow for software aimed at these customers looks rather different:

  • Deploys are managed by the customer, and release cycles are longer
  • These teams do still have CI/CD and extensive automated tests… but they may also have explicit QA steps before releases
  • There tend to be lots of longer-lived version branches, and even “LTS” branches with their own roadmaps
  • Logging is prioritized over observability, because they can’t make assumptions about the customer tooling
  • They put a lot more effort into operational documentation, because most operators will not also be developers

From a developer perspective, of course, this all feels much more painful! The managed-service use case is much more comfortable to develop for, and most of the community tooling and best practices for web development seem to optimize for that model.

But from the perspective of a sysadmin used to operating mostly third-party software, the constraints of self-hosted development are all very familiar. And even managed-service teams often depend on third-party software developed under this kind of model, relying on LTS releases of Linux distributions and pinning major versions of dependencies.

The biggest challenge I’ve seen, however, is when a development team tries to target the same software at both use cases. As far as I can tell, it’s very difficult to simultaneously operate a reliable service that is being continuously developed and deployed, and to provide predictable and high-quality releases to self-hosted customers.

So far, I’ve seen this tension resolved in three different ways:

  • The internal service becomes “just another customer”, operating something close to the latest external release, resulting in a slower release cycle for the internal service
  • Fast development for the internal service gets prioritized, with external releases becoming less frequent and including bigger and bigger changes
  • Internal and external diverge completely, with separate development teams taking over (and often a name change for one of them)

I don’t really have a conclusion here, except that I don’t really love any of these results. /sigh

If you’re reading this and have run into similar tensions, how have you seen them resolved? Have you seen any success stories in deploying the same code internally and externally? Or, alternatively, any interesting stories of failure to share? 😉 Feel free to send me an email; I’d be interested to hear from you.