Trying to write more, with less pressure

I’ve been pretty bad at blogging for the past *mumble mumble years*, but it’s not for lack of writing.

The thing is, I like writing. I have a rather large drafts folder filled with work-in-progress posts, not to mention all the various brainstorming docs I have for work, D&D, and other writing tasks. Those WIPs are frequently five or ten pages long, with lots of little notes on extra bits I should add to avoid missing things.

Like plenty of other folks, my problem isn’t writing, it’s finishing things.

However, this blog is titled “thinking out loud”! I don’t need to write the definitive article on a given topic, or at least I don’t need to do it here. Instead, I want this to be a place where I can get thoughts out in front of people (and myself!) so I can make them better.

To that end, I’m setting a goal for 2022 to write here:

  • At least once every two weeks
  • With only light editing
  • At most two pages of text in a post
  • And being willing to delete anything I decide I hate! 😅

To make my life easier, I’m cheating a little: I’ve written four short posts in the past week, and set them to auto-publish every two weeks!

That should get me through February. In the meantime, I'll keep writing — with any new posts either being added to the queue, or potentially posted live if I have a particularly hot take. With luck, this process will help me stick to my goal despite any temporary crises or fits of ennui, and keep my momentum up.

Happy New Year!

SRE to Solutions Architect

It's been about two years since I joined NVIDIA as a Solutions Architect, which was a pretty big job change for me! Most of my previous work was in jobs that could fall under the heading of "site reliability engineering", where I was actively responsible for the operations of computing systems, but my new job mostly has me helping customers design and build their own systems.

I'm finally starting to feel like I know what I'm doing at least 25% of the time, so I thought this would be a good time to reflect on the differences between these roles and what my past experience brings to the table for my (sort of) new job.

Continue reading

Sketching out HPC clusters at different scales

High-performance computing (HPC) clusters come in a variety of shapes and sizes, depending on the scale of the problems you're working on, the number of different people using the cluster, and what kinds of resources they need to use.

However, it's often not clear what kinds of differences separate the kind of cluster you might build for your small research team:

[Image: my toy Raspberry Pi cluster. Note: do not use in production]

From the kind of cluster that might serve a large laboratory with many different researchers:

[Image: the Trinity supercomputer at Los Alamos National Lab, also known as "that goddamn machine" when I used to get paged at 3am]

There are lots of differences between a supercomputer and my toy Raspberry Pi cluster, but also a lot in common. From a management perspective, a big part of the difference is how many different specialized node types you might find in the larger system.

Continue reading

Handy utilities for every HPC cluster

I've built a lot of HPC clusters, and they've often looked very different from each other depending on the particular hardware and target applications. But I almost always find myself installing a few common tools on them to make their management easier, so I thought I'd share the list.

Continue reading

My default technology choices

I've written several partial versions of this post in various emails and Slack posts, and finally decided I should just put it on the blog.

The tech landscape is complex and picking the right tool is hard, but the vast majority of problems can be solved in a "good enough" way using a wide variety of tools. The best choice is usually the one you know well already. So I tend to think most developers should have a "default tech stack" that they use for most things, only switching when the problem constraints or early experience dictate otherwise.

And here's mine! This is the list of tools I usually start with, and use most frequently in production. I will frequently adjust some part of this list for any given project, but I find these are usually useful choices. I don't expect any of these to be very surprising, but I think there's some value in writing them down.

Continue reading

Some thoughts after reading Vincenti’s “What Engineers Know and How They Know It”

A few weeks ago I watched Hillel Wayne’s recent talk “Are we really engineers?”, where he looked at the idea of whether software engineers get to call themselves “engineers” or not. (Spoiler: the answer is yes!)

During the Q&A, Wayne mentioned that while he had seen a lot of “philosophy of science”, there didn’t seem to be much “philosophy of engineering” out there. I remembered noticing the same thing, and on Twitter I asked for book recommendations on the topic. The always-reliable Lorin Hochstein obliged, and a week later I had some reading to do!

Just as a disclaimer: this post is very much in the theme of "thinking out loud", and got a little long. 🙂 This is mostly me discussing my experience of reading the book, along with some thoughts on software engineering it prompted. Very likely nothing here is at all original, and I am not an expert, but I wanted to get my ideas down in text after finishing the book. And having done so, I thought it might be worthwhile to share.

Ok, let’s dive in.

Continue reading

Questions your users will probably ask about the shared cluster

(Not intended to be exhaustive.)

On failures:

  • Why did my job fail?
    • Ok, I saw the error code, but what did that actually mean?
    • Can you make changes on the cluster itself so this will succeed next time?
  • What physical node did my job run on?
    • How can I make sure my retries don't run on that node again? (see the note after this list)
    • Ok, I understand that the machines are identical, but can I just make sure to never run on that node ever again?
  • What can I do to prevent this failure from happening again?
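(A quick aside on the node questions: if the cluster runs Slurm, which I'm assuming here purely for illustration, sacct -j <jobid> --format=JobID,NodeList,ExitCode will tell a user where their job ran and how it exited, and sbatch --exclude=<node> will keep a retry off a suspect node. Other schedulers have their own equivalents.)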

On priorities or "quality of service" mechanisms:

  • Why is my job taking so long to run? (And/or, why are my requests being throttled?)
  • How can I increase my priority to avoid this throttling again?
  • Why is that other user's job running with a higher priority than mine?
    • Can you please throttle their jobs, which are getting an unfair priority?
  • I have a critical deadline! Can you please override the priority score so my job runs immediately?
  • Can you reserve some set of resources for my dedicated use for some amount of time?
  • Can you provide some guarantee that my jobs will always run within a specified time?
  • Can I get an interactive way to run on the machine? I don't want to deal with writing a job script. (see the note after this list)
    • (This may be requested for a single node, or at any scale up to the full cluster!)
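(Another aside, again just using Slurm as the example: salloc, or srun --pty bash, handles the single-node interactive case well enough; it's the "interactive session on the whole cluster" request that gets interesting.)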

On the development environment:

  • Can you please provide package X on the system? (Where X may be something the admin team has never heard of)
  • Why doesn't the cluster provide the newest version of X? (Or an updated version of some API)
    • When can I get access to the newest version?
  • Why did you upgrade the cluster so quickly? Now my workflow is broken.
    • Can you please go back to the version that works for my job?

Now, I'll grant you that I can get a little snarky sometimes, but all of these questions may have some valid business reason behind them!

Even seemingly obnoxious requests, like the user who shows up on a Friday asking for exclusive access to the full cluster, might actually turn out to be the most important thing to do in that context.

And valid or not — most of these are questions the users of any shared resource will eventually ask! I've run across too many cluster stacks that can't actually inspect their priority system, don't provide any tracing for failures, or can't even tell the user which machines they ran on.
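(To pick on Slurm one more time: sprio will break down the factors behind a pending job's priority, which answers a whole class of the questions above. If your stack has no equivalent, each of those questions turns into a manual investigation.)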

One way or another, you should probably be able to answer these kinds of questions, or you'll have a lot of trouble operating your system over time.

Having trouble with fun in the time of COVID

I have a lot of friends who are struggling with focus at work given our current shelter-in-place conditions. I'm running into a bit of this myself — I think my productivity is a little lower than usual, even though I'm used to working at home. But for the most part I'm doing ok getting work done.

Instead, what I'm failing at is relaxing. I'm finding it increasingly hard to focus, or enjoy myself at all, when I don't have a clear "to-do" item in front of me.

  • I'm having trouble with any kind of fiction reading, which is usually not a problem at all.
  • I've been taking more walks, but I spend them either thinking about work or worrying about the state of the world in general.
  • I can occasionally fool myself into baking, but only if I think of it as "I need this loaf of bread" instead of "it's fun to bake!"
  • While Leigh and I are slowly working our way through Star Trek: Deep Space Nine, I've watched it enough times that it's as much background noise as media for me.

Instead, when I have any kind of downtime I just feel anxious. I stare at Twitter or the news, or just sit there and worry.

Needless to say, this is not any good for work-life balance! I've been doing okay at not over-working, mostly thanks to Leigh and the cats, but I'm not sure hours of free-floating anxiety are all that much better.

Anyway. Not sure there's a point to this, but that's my current quarantine experience. If this sounds familiar, feel free to shoot me an email; I'm happy to chat! (Apparently I could use the distraction…)

Invest in operational tooling

When you operate an evolving distributed system in production for a long time, you often accumulate a runbook of weird hacks for responding to rare events.

Three examples at random:

  • A service my team was on-call for would occasionally get into a specific weird state, and start intermittently dropping requests. Getting it healthy again was a complex multi-step process. It was also expensive and had its own production impact, so you didn't want to do it by mistake!
  • Setting up new clusters for a different service required building multiple databases with very specific, environment-dependent configurations.
  • Another system had very complex internal state, and inspecting that state involved some fairly arcane and expensive SQL queries. We didn't have to dig into it often, but this was needed for certain debugging and auditing processes.

Given enough years of operation and a complex enough environment, you can accumulate a long list of these kinds of rare procedures.

Fully automating these procedures is often difficult, because they might require some human input or judgement. This is especially true when the situation is rare and occurs only in production, so the causes are poorly understood. Faced with these problems, I've seen a lot of teams end up with a big pile of wiki pages instead… which are not fun to parse at 3am when prod is broken.

However, I'm a big fan of building partial automation to handle these kinds of procedures. Instead of making someone copy/paste their way through a complex wiki page at 3am, they should have a tool that can guide them through the procedure. This tool can ask for user input in the places it's needed, and build in guard rails and confirmation prompts when you're doing something dangerous.
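As a minimal sketch of the shape I mean (in Python, with every name and procedure invented for illustration):

```python
import sys

def confirm(prompt: str) -> None:
    """Guard rail: require an explicit 'yes' before doing anything risky."""
    answer = input(f"{prompt} Type 'yes' to continue: ")
    if answer.strip().lower() != "yes":
        print("Aborting; nothing was changed.")
        sys.exit(1)

def recover_dropping_node() -> None:
    """Walk an operator through a multi-step recovery, pausing where judgement is needed."""
    node = input("Which node is dropping requests? ")
    confirm(f"This will restart services on {node}, which is expensive and sheds load.")
    print(f"Step 1/3: draining traffic away from {node}...")
    # (calls to your service's real admin API would go here)
    print("Step 2/3: waiting for in-flight requests to finish...")
    print(f"Step 3/3: restarting {node} and re-enabling traffic.")
    print("Done. Remember to note this run in the incident log.")

if __name__ == "__main__":
    recover_dropping_node()
```

Nothing here is specific to any real service; the point is just that the prompts, ordering, and guard rails live in code instead of on a wiki page.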

The downside to building this tooling is that you now have a whole new software project to maintain! Because in my experience, you really do have to treat this as a first-class software project in its own right, maintained alongside your production services.

To put it another way, I'm not advocating for a big pile of scripts. (Though that's better than nothing…) I'm saying you should build something like a kubectl or mysqladmin for your own services.
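In practice, that usually means a single entry point with subcommands (say, mytool drain-node or mytool rebuild-db, names invented here), so that mytool --help doubles as an index of your runbook.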

In the long run, though, I find that this investment really pays off. Having good tooling improves the maintainability of your systems and makes the on-call experience easier. It also translates institutional memory into code, which I've found makes onboarding easier and gets people more comfortable with dealing with prod.

Practices of an intermittent developer

Hillel Wayne published a post yesterday on “The Hard Part of Learning a Language”, about all the little “getting started” challenges of learning a new programming language. It resonated with me so much, because I find myself going through this process pretty frequently.

I sometimes describe myself as an “intermittent” software developer, though really I’ve never worked as a developer: I’ve spent most of my career as either a scientist or in operational and support roles. (SRE, sysadmin, pick your job title…) While I’ve written code nearly every day for over a decade, I’ve rarely spent more than a few weeks at a time working on any given piece of software.

Instead, I’ve mostly worked on operational tooling, low-maintenance microservices, or wrote “one-off” code to support an analysis or duplicate an issue. I also spend a lot of time working on other people’s code, but mostly in the context of “fix the damn thing!” The result of this pattern is that I:

  • Frequently switch languages
  • Spend a lot more time reading and analyzing software than writing it
  • Often go weeks or months without touching a given language or service
  • Rarely get to become deeply immersed in a given language’s idioms or practices

Because of this, I keep finding that the languages I like best are those that are relatively easy to put down for a while, and pick up again without a ton of friction. This isn’t exactly the same as having an easy learning curve, but more that they don’t require reloading a lot of mental context which is unique to them. The languages I like tend to have:

  • Large standard libraries
  • Minimal need for IDE support or editor plugins
  • Consistent community coding styles, and/or widely-used auto-formatting tools
  • Strong backward compatibility
  • Good documentation
  • Decent integration with Linux distro package managers
  • Communities that converge on “one way to do it” solutions, and make it obvious what they are!

So, for example, I’m a pretty big fan of Go. It’s not very interesting, and I find writing it a bit repetitive (if err != nil ...). But I can go six months without writing any Go, sit down to fix a bug in a project I’ve never worked on before, and generally expect to get my bearings fast. I also tend to like Python a lot, despite some messy spots, because I can almost always work within a pretty stable core consisting of the standard library and a few large, stable packages.

The biggest downside, though, is that I frequently bounce off of languages that I think are exciting but feel like they’d require too much consistent attention to keep up with. For example, I think Rust is one of the most interesting languages out there today… but I’ve been challenged by the combination of a small standard library and relatively fast pace of change (in the ecosystem, not the language!). That combination makes me skeptical that I could follow any kind of “intermittent” pattern with Rust; I feel like I would keep getting lost every time I came back!

To be clear, I don’t think this means the languages I have trouble with should change! They’re clearly really successful, and many are doing really interesting things.

But I do think there’s a lot of value in building tools that are “low-maintenance”, and that language stability has a lot going for it. Without doing a real analysis, I suspect that communities with a lot of part-time developers will often gravitate to languages that change slowly. Certainly scientific computing seems to write a lot of Python, C++, and Fortran — and older versions of those languages at that! And the SRE community definitely publishes a lot of Go.

Then again, maybe I'm wrong! Are there any fast-changing languages popular with part-time developers? Feel free to shoot me an email and let me know. 🙂