Ian Duncan has written a great post on CI orchestration called No, Really, Bash Is Not Enough: Why Large-Scale CI Needs an Orchestrator. It does a good job of distinguishing between the simple cases where bash and make really are good enough for CI and the situations where you actually need a full-featured CI system.
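For the simple end of that spectrum, a CI "pipeline" can genuinely be a short bash script: run each stage in order and abort on the first failure. Here is a minimal sketch of that idea; the stage names and the `echo` stand-ins are purely illustrative (in a real repo they would be calls to your linter, test runner, and build tool):

```shell
#!/usr/bin/env bash
# Minimal bash-as-CI sketch: stages run in order, and `set -e`
# aborts the whole run as soon as any stage fails.
set -euo pipefail

run_stage() {
  local name=$1
  shift
  echo "--- ${name}"
  "$@"
}

run_stage lint  echo "lint ok"   # stand-in for e.g. a linter invocation
run_stage test  echo "tests ok"  # stand-in for e.g. `make test`
run_stage build echo "build ok"  # stand-in for e.g. `make build`

status="passed"
echo "pipeline ${status}"
```

When this is all you need — a linear sequence of stages on one machine — an orchestrator adds little. The post's argument is about what happens past that point: fan-out, caching, retries, and coordinating many machines.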
good reads
There are 6 posts filed in good reads (this is page 1 of 1).
Quoting Charity Majors
Charity’s latest post, Bring back ops pride, is an excellent discussion (rant?) on the importance of operations for software systems, and on why it’s a bad idea to pretend it isn’t a real concern, or to make conventional application teams do the work on top of their regular job.
“Operations” is not a dirty word, a synonym for toil, or a title for people who can’t write code. May those who shit on ops get the operational outcomes they deserve.
You should absolutely go read the full piece, as well as Charity’s earlier post on the Honeycomb blog: You had one job: Why twenty years of DevOps has failed to do it.
Below are several pull quotes from the post itself, because there were just too many to choose from.
Quoting Lorin Hochstein
It’s useful to compare Lewis’s book with two other recent ones about Silicon Valley executives: John Carreyrou’s Bad Blood and Sarah Wynn-Williams’s Careless People. Both books focus on the immorality of Silicon Valley executives (Elizabeth Holmes of Theranos in the first book, Mark Zuckerberg, Sheryl Sandberg, and Joel Kaplan of Facebook in the second). These are tales of ambition, hubris, and utter indifference to the human suffering left in their wake. Now, you could tell a similar story about Bankman-Fried. In fact, this is what Zeke Faux did in his book Number Go Up, but that’s not the story that Lewis told. Instead, Lewis told a very different kind of story. His book is more of a character study of a person with an extremely idiosyncratic view of risk. The story Lewis told about Bankman-Fried wasn’t the story that people wanted to hear. They wanted another Bad Blood, and that’s not the book he ended up writing. As a consequence, he told the wrong story.
Telling the wrong story is a particular risk when it comes to explaining a public large-scale incident. We’re inclined to believe that a big incident can only happen because of a big screw-up: that somebody must have done something wrong for that incident to happen. If, on the other hand, you tell a story about how the incident happened despite nobody doing anything wrong, then you are in essence telling an unbelievable story. And, by definition, people don’t believe unbelievable stories.
From Telling the wrong story on Lorin’s excellent blog, Surfing Complexity.
Quoting Nicholas Carlini
Because when the people training these models justify why they’re worth it, they appeal to pretty extreme outcomes. When Dario Amodei wrote his essay Machines of Loving Grace, he wrote that he sees the benefits as being extraordinary: “Reliable prevention and treatment of nearly all natural infectious disease … Elimination of most cancer … Prevention of Alzheimer’s … Improved treatment of most other ailments … Doubling of the human lifespan.” These are the benefits that the CEO of Anthropic uses to justify his belief that LLMs are worth it. If you think that these risks sound fanciful, then I might encourage you to consider what benefits you see LLMs as bringing, and then consider if you think the risks are worth it.
From Carlini’s recent talk/article on Are large language models worth it?
The entire article is well worth reading, but I was struck by this bit near the end. LLM researchers often dismiss (some of) the risks of these models as fanciful. But many of the benefits touted by the labs sound just as fanciful!
When we’re evaluating the worth of this research, it’s a good idea to be consistent about how realistic — or how “galaxy brain” — you want to be, with both risks and benefits.
Robin Sloan: AGI is already here!
In Robin Sloan’s “pop-up newsletter” Winter Garden, he argues that artificial general intelligence has been with us since the development of GPT-3:
It’s not just about shorter sentences
A good long read from Henry Oliver in The Works in Progress, about how English has become easier to read: