Machine Learning: Programming's Asbestos
Friday, April 26, 2019 :: Tagged under: engineering culture essay. 7 minutes.
Hey! Thanks for reading! Just a reminder that I wrote this some years ago, and may have much more complicated feelings about this topic than I did when I wrote it. Happy to elaborate, feel free to reach out to me!
The song for this post is the Title Screen theme for Pictionary for the NES, by Tim Follin.
I'm reading a book about AI's impact on society. I'll write about it when I'm finished, but I first felt like writing a few things about AI, apart from the book.
Revolution in our field!
Many "unsolvable" problems in our field now have great answers. The big obvious one is image classification: xkcd was joking about how hard this was in 2014, but now a solution is right there on the homepage of startups:
There's no shortage of emotion-inspiring demos: Google's Duplex making reservations is a technical marvel. DeepFakes are creepy and their use in porn makes me think of Yet Another Way women's lives in public will be a special hell. We can change weather conditions in photos. The dangers are well-illustrated with world leaders. This is the end of the "real world" as we know it; at least, it's the end of recorded media having any veracity (though Photoshop killed this for photos long ago). This guy is even calling it "Software 2.0!"
But, lol
At the end of the day, there's no magic here. It's all just machines, used by people, which means the same things that have applied to tech advances of the past will apply to these too. In particular:
- They are software, and many challenges are structurally baked into what neural nets literally are.
- We've had hype cycles before, and we almost always hit a wall, comically.
- It's not about their capabilities, it's about the people using them and their limitations.
Challenges of Software
First, a two-minute explanation of what modern machine learning is, versus software systems coded by people:
Standard software is a set of instructions written by people. Another computer program called a compiler takes those instructions and produces an artifact. The artifact is run on the computer and presumably does what you programmed it to. If you want to change the artifact, you make educated guesses about the initial set of instructions, change the code, and feed it back to the compiler to produce a new artifact. Then you test it to see if it does the right thing. This iteration happens on human timescales (so a change can be made in minutes for small projects, or weeks for larger projects), and it's guided by our judgement.
Machine learning also produces an artifact: it's a program that generates another program that tries to do whatever you aim it at. Its first version absolutely sucks: it might as well be guessing randomly! But then you have a stack of questions and answers for the task at hand (think a huge stack, like, millions of examples). It runs the first example through its shitty program, sees if it got the correct answer (it probably didn't), and if it didn't, like the human programmer, it tweaks its program so it's more likely to get the correct answer. Unlike the human programmer, there are no "educated guesses" based on judgement; the "correction" is a purely mechanical process based on some math, and it only corrects its model given what you just tested it on. Then it pulls another example from the stack and repeats the process. It does this on machine-level timescales: millions of times, every hour.
After hours or days of this "training" on a giant dataset, the final version of the artifact, in theory, does a pretty good job at solving the problem, because its tweaks have been guided by millions of question-and-answer examples.
Said another way:
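Here's a toy sketch of that loop in Python, fitting a straight line instead of classifying images. Everything in it (the data, the learning rate, the variable names) is invented for illustration; the shape of the loop is the point:

```python
import random

# A toy "stack of questions and answers": inputs x and the answers we want.
# Real systems use millions of labeled examples; this is a synthetic line.
examples = [(x, 3.0 * x + 1.0) for x in range(-10, 10)]

# The first version of the "program" is essentially a random guess:
# two arbitrary numbers, w and b, defining the guess w*x + b.
w, b = random.uniform(-1, 1), random.uniform(-1, 1)
learning_rate = 0.01

for _ in range(200):
    random.shuffle(examples)
    for x, answer in examples:
        guess = w * x + b          # run the current program on one example
        error = guess - answer     # see how wrong it was
        # The "tweak" is purely mechanical: nudge each parameter slightly in
        # the direction that would have made this one guess less wrong.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w={w:.3f}, b={b:.3f} (the data came from w=3, b=1)")
```

The "tweak" lines are the whole trick: no judgement, just arithmetic nudging the parameters toward whatever the last example wanted, repeated an absurd number of times.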
There are a few key takeaways about this:
The human program uses code, but ML requires code and data. Lots of it. A corollary is that for Machine Learning, the data often matters way more than the code. If the problem you tried to solve was "identify if there's a person in this image" but you had no images of Black people, the ML model is hopeless.
This happened with my partner and me in 2014! I asked an image-classification service what was in this photo, and it very obviously had a biased data set:
Another fun example:
Google, powering its Knowledge Graph with AI:
I mostly point these out to remind everyone that mistakes like this are structural to this kind of AI. There isn't a simple fix that will eliminate this or make it generalizable; to suggest we'll get over this kind of thing soon is like saying "we'll be able to plant crops on the Moon once we fix that whole 'needing sun, water, and nutrients' thing."
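To make "structural" concrete, here's a deliberately silly sketch (every label and number in it is invented): the classifier's code is fine; its training data just has a hole in it, and no amount of clever code can fill that hole.

```python
# The classifier below is perfectly reasonable code; its training data just
# omits an entire category. Every label and number is invented for illustration.

def nearest_neighbor(training_data, weight):
    """Answer with the label of the training example closest in weight."""
    return min(training_data, key=lambda example: abs(example[0] - weight))[1]

# (weight in grams, label) -- note there are no grapes anywhere in the data.
training_data = [
    (150, "apple"), (170, "apple"), (180, "apple"),
    (130, "orange"), (140, "orange"), (160, "orange"),
]

print(nearest_neighbor(training_data, 175))  # "apple" -- reasonable
print(nearest_neighbor(training_data, 5))    # "orange" -- confidently wrong
```

"Grape" isn't anywhere in its data, so the model can't even represent the right answer; it will always respond with confident nonsense.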
(On a slightly different note: when I asked on Twitter recently about the failures of Open Source/Free Software compared to the late-nineties/early aughts:

"what are the gcc or gdb or Linux or Apache or MySQL of the last 5 years? something that virtually anyone can use, for cheap, to unlock major possibilities previously only enjoyed by megacorps?"
one obvious answer would be things like Tensorflow or PyTorch. But what value are these code frameworks without the data?)
The big difference I don't see mentioned much: the human program is reproducible; the ML one is not. When I said "it tweaks its program so it's more likely to get the correct answer [using math]," this isn't a deterministic process: the initial guesses are random, the examples get shuffled, and parallel floating-point math rounds differently from run to run. If I feed the same ML framework the same data with the same hyperparameters, it will produce a different artifact! The human-written code, by contrast, will always produce the same artifact when fed to the same compiler.1
This means that while you might get lucky, you'll never create an artifact with properties you can build on reliably. One person calls it the high-interest credit card of technical debt. Another calls it "a reproducibility crisis." Maintaining software and managing complexity are already the hardest parts of software development (especially over time), so how do you build a robust, adaptable, socially responsible artifact when you can barely reproduce results you've already achieved?
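To see it in miniature, reuse the line-fitting sketch from earlier (again, every number is invented): identical code, identical data, identical hyperparameters, and two runs still hand back different parameters, because nothing pins down the random starting point or the shuffle order.

```python
import random

def train(examples, epochs=20, learning_rate=0.01):
    """Same code, same data, same hyperparameters -- but nothing pins down
    the random starting point or the shuffle order, so each run differs."""
    w, b = random.uniform(-1, 1), random.uniform(-1, 1)
    for _ in range(epochs):
        random.shuffle(examples)
        for x, answer in examples:
            error = (w * x + b) - answer
            w -= learning_rate * error * x
            b -= learning_rate * error
    return w, b

examples = [(x, 3.0 * x + 1.0) for x in range(-10, 10)]
print(train(list(examples)))  # two runs of identical code on identical data...
print(train(list(examples)))  # ...print two (slightly) different artifacts
```

On a toy convex problem the two runs land close together; a real network with millions of parameters, trained on hardware whose parallel floating-point reductions aren't even deterministic, can land somewhere meaningfully different.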
Others in the field note how brittle models can be, and that their quality degrades over time.
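Here's one flavor of that degradation, sketched with the same toy line-fitter (the "drift" below is invented): the artifact is frozen at training time, but the world it models keeps moving.

```python
# Both "worlds" below are invented: the point is only that a frozen model's
# error grows as reality drifts away from the data it was trained on.

def average_error(w, b, examples):
    return sum(abs((w * x + b) - y) for x, y in examples) / len(examples)

w, b = 3.0, 1.0  # pretend these came out of the training sketch above

training_era = [(x, 3.0 * x + 1.0) for x in range(-10, 10)]  # what training saw
two_years_on = [(x, 3.5 * x + 4.0) for x in range(-10, 10)]  # the drifted world

print(average_error(w, b, training_era))  # ~0.0: the model matches its era
print(average_error(w, b, two_years_on))  # much larger: nobody retrained it
```

Nothing in the artifact breaks; the world just stops looking like its training data.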
My impression of AI is that it's like discovering what an amazing insulator asbestos is, and now companies are putting it everywhere. It is genuinely good at what it's advertised for! And we will definitely find many uses for it. But I don't think it's a great general-purpose tool, nor have we figured out the various ways its failures will impact the products containing it more broadly.
Hype cycles
Here's an article from 2014 called "The Blockchain is the New Database, Get Ready to Rewrite Everything." It was correct! Nobody writes to traditional datastores anymore; everyone rewrote, and is rewriting, it all on the blockchain these days.2
Here's a retrospective on Windows Vista. It's a very long read that I find pretty interesting for a myriad of other reasons, but the main thing for this discussion is WinFS: a radically different vision of what computing would look like if Microsoft had successfully shipped it and succeeded in its early efforts to stymie the web. Many very intelligent people were planning for a future where all your software was still on your desktop, but programs could declaratively query and share data, as in SQL, and computers speaking over TCP/IP + HTTP + browsers would be as obscure as the Gopher protocol is today.
Here's a delightful talk by Bret Victor, in full 1970s costume and using transparencies on an overhead projector instead of presentation tech like Keynote, with a better idea of where computing could have gone based on predictions of the time.
When I was at Google, scuttlebutt was that self-driving cars were "almost ready" (like, within a year) to ship out, once we ironed out the kinks. That was 2013.
Almost nobody in the 80's predicted the impact the Internet would have. Almost nobody from the early aughts predicted smartphones.
All this to say: we should be wary of hype cycles. We've even had an AI Winter before. It's worth having a bit of healthy skepticism about AI, despite its early promises in narrow fields.
It is still a big deal. Just… verify. If you can't, wait a bit.
Remember the guy I linked above calling this "Software 2.0"? He's the head of AI at Tesla. It's great for his brand if we all believe AI is the most critical thing to invest in right now and any other way of building software systems is "dead." Many storytellers are selling something.
To be clear: AI is important, and transformational! We will see it in more places! I just suspect the easiest wins are already behind us. We'll see a lot more applications of what we've just conquered, and we'll also see the first sets of models begin to collapse without appropriate reproducibility.
1. ^ If we're being pedantic, the artifact might not be exactly the same if the compiler uses probabilistic optimization passes, and you can argue the chipset won't execute the same binary the same way if its prefetching + multicore optimizations do funny things. Those cases don't really matter for what I'm after given the semantic equivalence, but I can hear a little voice bugging me about it, so maybe you've got that voice too.
2. ^ I wrote about blockchain and its continued irrelevance in the face of hype here.
Thanks for the read! Disagreed? Violent agreement!? Feel free to join my mailing list, drop me a line at , or leave a comment below! I'd love to hear from you