Much ado about null
null has a controversial history. It's been called "the billion-dollar mistake" by its creator. Most languages implement null, but those few that do not (such as Rust) are generally lauded for their smart design, eliminating a class of errors entirely. Many developers (myself included) have argued that code that uses null or nullable parameters/returns is intrinsically a code smell to be avoided, while others (also myself included, naturally) have argued that that is too draconian a stance to take.
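To make the debate concrete, here is a minimal, hypothetical PHP sketch (the User class and findUser function are invented for illustration) of the kind of nullable return the argument is about:

```php
<?php

// A hypothetical example of the pattern under debate:
// the nullable return type (?User) signals "maybe there is no result."
class User
{
    public function __construct(public string $name) {}
}

function findUser(int $id): ?User
{
    // Assume a lookup that may find nothing; null stands in for "absent".
    $users = [1 => new User('Anna'), 2 => new User('Alexis')];

    return $users[$id] ?? null;
}

// Every caller now has to remember to handle the null case.
$user = findUser(42);
if ($user === null) {
    echo "No such user\n";
} else {
    echo $user->name, "\n";
}
```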
Anna Filina has a very good three-part series (properties, parameters, and returns) on how to avoid null in PHP generally. Alexis King has a related post (excuse the Haskell) on how to avoid needing edge cases like null in the first place.
However, I want to go a bit deeper and try to understand null from a different angle, and tease out the nuance of why one would really want to use it, and thus what we should be doing instead. To get there we'll have to go through a tiny little bit of type theory, but it will be quick and painless, I swear.
Continue reading this post on PeakD.