Kill Null
Thursday, March 24, 2011 :: Tagged under: pablolife rants engineering essay. ⏰ 4 minutes.
Hey! Thanks for reading! Just a reminder that I wrote this some years ago, and may have much more complicated feelings about this topic than I did when I wrote it. Happy to elaborate, feel free to reach out to me! 😄
The other day, I tweeted:
"null is the billion-dollar design mistake. null breaks abstractions, composability. null eats live babies. null hammered my toe. null sucks."
And it had a lot to do with my frustrations in working with other people's code.
This blog sometimes tries to explain technical concepts to non-technical people, so let me take a moment to explain to you what the hell null is if you're not into programming, and why it's such a buzzkill.
We'll start with a metaphor: writing programs is very much like writing recipes, and the computer is your team of chefs. Recipes and code share a structure with two major parts: a section for the stuff you're going to use (in a program we call these declarations; in a recipe it's the list of ingredients), and a set of instructions telling you what to do with said stuff. The basic gist is that ingredients + instructions = delicious, delicious product.
This isn't a perfect metaphor, but it serves. Now, like recipes, complicated products will often rely on other recipes: you'll need to make guacamole for your burrito, or a complicated cake batter on the way to a wedding cake. As long as you follow ingredients + instructions, you get delicious products, and you can build recipes on top of recipes, grander and more delicious, without a problem. This makes your recipes composable: you can build them up from smaller ones, and that's critical in making anything of scale.

There's a hidden step between the ingredients and the instructions, one that rarely gets written down, because it's the one that sucks the most: preparing the ingredients. When the cooking time is advertised as 40 minutes but you have to mince the garlic and get the potatoes cooked, peeled, and cubed, the prep alone adds about an hour. Cooking shows look effortless, but when you try the same things at home you find it's not so easy when the ingredients don't come expertly prepared in beautiful glass bowls.
So suppose you write a brilliant recipe, and you hand it off to the cooks to make. They see "4 cups shallots in a blue bowl" in the ingredients. Later, holding a bunch of whole, unprepared shallots, they hit the instruction "place the shallots from the blue bowl into the pot..." What do they do?
In programming, we often "return null" when something doesn't go as expected, which is the equivalent of the chef reading that direction and deciding to put dogshit into the pot.

Here's the rationale: if you bite into dogshit, you know the recipe went wrong and will immediately stop eating. Another reason: you can't cover every case, so if we put in some dogshit, we can still keep cooking! Why shut down the entire kitchen because one stupid chef was careless with one of their directions?
Do you see the problem though? There's dogshit in the goddamned food.
Just like the weird shallot example, we have this (very, very frequently) in programming: you can declare that you'll use some data, and you can write instructions using that data, but you might miss the step of saying what that data actually is. Or, by the time the data gets to an instruction, it's no longer what the coder thought it would be, and that instruction is impossible.
null is the Scumbag Steve of programming.
Like many of Computer Science's worst blights (e.g. goto), this was originally a feature! Tony Hoare, who introduced null references in ALGOL W back in 1965, has since called them his "billion-dollar mistake." You can imagine the inventors thinking "we'll let programmers be responsible people, no need to baby them!"
BAD. THIS IS BAD THINKING. While you shouldn't tie the hands of programmers, you also shouldn't give them something that never, ever provides a useful value, like keeping dogshit in the kitchen.
"But Paul!" says the imperative programmer, "I frequently use null checks
when I want to say that something went wrong, such as fopen
not getting a
valid file handle! To which Paul says, "couldn't you use an error
code? Proper exceptions? This is 2011, kids!"
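For the curious, here's a tiny sketch of that alternative, in Haskell (since that's where this post ends up anyway). safeOpen is a name I'm making up, not a library function; the point is that the failure lives in the return type, where the caller can't ignore it:

```haskell
import System.Directory (doesFileExist)
import System.IO (Handle, IOMode (ReadMode), openFile)

-- Hypothetical wrapper around openFile: instead of fopen's NULL,
-- hand back an explicit error value the caller has to look at.
safeOpen :: FilePath -> IO (Either String Handle)
safeOpen path = do
  exists <- doesFileExist path
  if exists
    then Right <$> openFile path ReadMode
    else pure (Left ("couldn't open " ++ path))

-- (A real version would catch the exception from openFile itself,
-- since the file can vanish between the check and the open.)
```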
Sorry. Back to cooking.
Given that most chefs of the '90s were making food in kitchens with dogshit flowing in and out of them, what did we end up doing? We ended up writing recipes that looked like this:
Make guacamole.
*Check if it's dogshit*
If it isn't, make refried beans.
*Check if it's dogshit*
After that, make some rice.
*Check if it's dogshit*
The punchline is, if you didn't check, you'd only know you had dogshit when you took a bite of food. And if you did find dogshit, what would you do? Normally, return more dogshit for someone else to cook with or figure out.
And the frequent checks for dogshit make for uninteresting and sad recipe writing.
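Translated out of the kitchen, it looks like this. Haskell famously has no null, so I'm faking the dogshit with Maybe, and every type and dish here is made up; but the shape of the checking is exactly the thing:

```haskell
-- All of these types and steps are hypothetical, just enough to compile.
data Kitchen = Kitchen
data Guac = Guac
data Beans = Beans
data Rice = Rice
data Plate = Plate Guac Beans Rice

makeGuacamole :: Kitchen -> Maybe Guac
makeGuacamole _ = Just Guac   -- pretend any of these steps can fail

makeBeans :: Kitchen -> Maybe Beans
makeBeans _ = Just Beans

makeRice :: Kitchen -> Maybe Rice
makeRice _ = Just Rice

-- The "check for dogshit after every step" style:
makeDinner :: Kitchen -> Maybe Plate
makeDinner kitchen =
  case makeGuacamole kitchen of
    Nothing -> Nothing                    -- dogshit; pass it along
    Just guac ->
      case makeBeans kitchen of
        Nothing -> Nothing                -- dogshit again
        Just beans ->
          case makeRice kitchen of
            Nothing -> Nothing            -- and again
            Just rice -> Just (Plate guac beans rice)
```

Three steps, three rituals of checking, and the actual recipe is buried in the ceremony.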
All that is minor annoyance; the real issue is this: you can't compose your recipes or programs if there's random dogshit flowing from all sides. You need to be able to trust other code to either work as promised or, if it fails, fail in a way you can prepare for, without your first warning being a bite of dogshit.
The functional languages (unsurprisingly) are on top of this: by eschewing state and favoring the evaluation of expressions over the execution of statements, you rarely end up with uninitialized data. In the rare cases where you want something null-like, even a solution as simple as the Maybe monad provides a device that's statically typed, and any function that receives one knows it has to handle the Nothing case.
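Concretely: here's the same made-up makeDinner from the sketch above, with the Maybe monad doing all the dogshit-checking between the lines:

```haskell
-- Same hypothetical kitchen as before; do-notation over Maybe
-- short-circuits to Nothing the moment any step fails.
makeDinner :: Kitchen -> Maybe Plate
makeDinner kitchen = do
  guac  <- makeGuacamole kitchen
  beans <- makeBeans kitchen
  rice  <- makeRice kitchen
  pure (Plate guac beans rice)
```

Every check from the nested version still happens, but the monad runs them for you, and the Maybe in the type signature means no caller can pretend dinner is guaranteed.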
Blah. Rant over. Just code responsibly, comrades: people break down and cry because of code. I mean, I haven't. But people have, I'm sure.
Thanks for the read! Disagreed? Violent agreement!? Feel free to join my mailing list, drop me a line at , or leave a comment below! I'd love to hear from you 😄