💻 That Consumption life: The Big Nine 9️⃣
Friday, May 10, 2019 :: Tagged under: culture. ⏰ 14 minutes.
Hey! Thanks for reading! Just a reminder that I wrote this some years ago, and may have much more complicated feelings about this topic than I did when I wrote it. Happy to elaborate, feel free to reach out to me! 😄
🎵 The song for this post is Frog Fractions, by Khotin. 🎵
The Flash Forward book club continues! (I wrote up the last one here). Two books in, it's now lasted twice as long as my last distributed book club!
April's book was The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity, by Amy Webb. This was more of a challenge for me. I think most book-reading people would enjoy it but I personally have trouble feeling great about it, mostly given where I am in my life (being a working software engineer, angry at the lack of accountability of those in power, and an insufferable pedant). Apart from this book and its arguments, I previously wrote about the hype in AI with an eye to what it actually is, and why I'm skeptical about it.
After some talk of the current state of AI, Webb presents three possible futures: an optimistic one, a middle-ground one, and a catastrophic one, and AI's role in each. She concludes with some prescriptions to steer us the Good Way and not the Bad Way. As much as there can be spoilers in a non-fiction book, they'll be included going forward!
Basic Correctness
There's a lot in this book that ranges from "not how I've seen it described" to "flat-out wrong."
Benign: many uses of "AI" or "algorithm" suggest a fuzzy understanding of what they are and where their boundaries lie. Small examples include suggesting a company's values are "its algorithm" (an algorithm is a finite set of repeatable steps to achieve a concrete result; "company values" have fuzzier inputs and fuzzier outputs, and aren't well-described as a "finite set of executable steps") or suggesting that AI-enabled dating would let us screen partners for certain properties ("Jewish, lives within 50 miles of Cleveland"; these are just search filters). It's not that these two very specific examples undermine the whole work for me, it's that my notes contain a couple dozen more like them, spanning a lot of other computing concepts. Reporting on something you don't work on every day + having readers who do is hard.
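To make the pedantry concrete: that "screening" is two boolean predicates, the kind of filter dating sites have shipped since the dial-up era. A minimal sketch (Python 3.9+; the `Profile` shape and field names are hypothetical, invented for illustration):

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

# Hypothetical profile shape; these fields are illustrative, not from any real app.
@dataclass
class Profile:
    name: str
    religion: str
    lat: float
    lon: float

def miles_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points via the haversine formula, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # Earth's radius is roughly 3959 miles

CLEVELAND = (41.4993, -81.6944)

def screen(profiles: list[Profile]) -> list[Profile]:
    # "Jewish, lives within 50 miles of Cleveland": two predicates, zero AI.
    return [p for p in profiles
            if p.religion == "Jewish"
            and miles_between(p.lat, p.lon, *CLEVELAND) <= 50]
```

No model, no training, no inference; just a `WHERE` clause wearing a trench coat.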
The futures chapters presume the idea of a "Personal Data Record": a record of all the data a person generates (their clicks, pageviews, medical records, photos, etc.) which, in the optimistic scenarios, is something a citizen has control over, gives companies permission to use, can revoke permissions for, etc. This… can't exist as described: you can't share this data with companies without them being able to copy it (this is what Cambridge Analytica did). Even if they copied "anonymized" versions, data anonymization is really, really hard, and usually fails against even unsophisticated attacks. She handwaves a bit with the word "blockchain" in the chapter that introduces it, but not only would that be mad inefficient (look at how much energy Bitcoin uses for a relatively niche game for speculators; compare that to all of the data hundreds of millions of people would generate), it would also make all that data immutable and undeleteable, which goes against other requirements of the PDR.
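On the anonymization point: the classic failure mode needs nothing fancier than a join. A toy sketch of a linkage attack (all the data here is invented; the technique is the one Latanya Sweeney famously used to re-identify "anonymous" medical records against voter rolls):

```python
# "Anonymized" release: names dropped, but quasi-identifiers survive.
anonymized_medical = [
    {"zip": "44106", "birthdate": "1954-07-31", "sex": "F", "diagnosis": "..."},
    {"zip": "44113", "birthdate": "1988-02-14", "sex": "M", "diagnosis": "..."},
]

# Any public dataset with names and the same fields, e.g. voter rolls.
public_voter_rolls = [
    {"name": "Jane Roe", "zip": "44106", "birthdate": "1954-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "44113", "birthdate": "1988-02-14", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "birthdate", "sex")):
    """Join the two datasets on quasi-identifiers. That's it. That's the attack."""
    index = {tuple(row[k] for k in keys): row["name"] for row in public_rows}
    return [(index.get(tuple(row[k] for k in keys)), row["diagnosis"])
            for row in anon_rows]

# Every "anonymous" medical record now has a name attached.
print(reidentify(anonymized_medical, public_voter_rolls))
```

Zip code + birthdate + sex alone uniquely identify the large majority of Americans, so "we stripped the names" is barely anonymization at all.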
While someone can say the above is splitting hairs, her uses of Moore's Law and Conway's Law are more clear-cut examples of incorrectness. Moore's Law hasn't been true for years. She uses Conway's Law to mean "the values of a company's employees are reflected in the product," which I absolutely agree with, but that's not Conway's Law, which is a much more boring and limited statement about how a system's structure mirrors the communication structure of the organization that built it. This mistaken use of Conway's Law and its implications gets several pages of treatment.
There are major practical concerns about any of the AI futures she speculates on: the sheer amount of computing resources and energy it would take to power them would conservatively require building dozens of datacenters and several millions of dollars of networking infrastructure, and doing this while a younger generation looks very critically at energy use on a boiling planet is a complicated prospect. Almost everyone I know in the Big Data game says it's not so much about collecting data or training models, it's about cleaning data, and that's not immediately automatable or scalable. Our early efforts there are demonstrating massive issues of bias and accuracy, to say nothing of the philosophical questions around how models based on existing data are mostly built to reinforce their own structures.
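As a tiny illustration of why cleaning resists automation, here's a hedged sketch of normalizing a single column (Python 3.9+; the data and the rules are invented, and every branch is a human judgment call):

```python
from typing import Optional

def clean_weight_kg(raw: str) -> Optional[float]:
    """Normalize a messy weight field. Every branch encodes a human decision."""
    value = raw.strip().lower().removesuffix("kg").strip()
    try:
        kg = float(value)
    except ValueError:
        return None        # unparseable: drop it? flag it? impute it?
    if kg <= 0:
        return None        # is -1 a "missing" sentinel, or a data-entry bug?
    if kg > 140:
        return kg / 2.205  # guess it was pounds; a guess that's sometimes wrong
    return kg

# "72kg" -> 72.0, "159" -> ~72.1 (assumed pounds), "-1" -> None.
print([clean_weight_kg(raw) for raw in ("72kg", "159", "-1")])
```

Whatever policy a function like this encodes gets baked into every model trained on its output; scale that to millions of rows and hundreds of columns, and "just automate the cleaning" stops sounding immediate.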
China
China gets a lot of treatment in the book. A lot of the "horror scenarios" are of China becoming a global colonial power doing many of the same things the West did during its expansionist colonial days. While I don't want to be ruled by a colonial, authoritarian state, talking breathlessly about how horrible that would be without acknowledging that it's just following the Western playbook (one many would argue we still largely follow) isn't my favorite way to read about a country's politics.
Additionally, there's a lot of confidence that China (through its authoritarianism) will succeed at many of its projects. I feel this is optimistic, if only because half of all software projects fail, and like I said in my last post on AI, you can see the cracks in the surface showing how AI makes long-term building and maintenance even harder. Even if they have things the West lacks (political will and investment), meaningful software construction is still incredibly hard, AI presents its own specific challenges, and I'm not sure it's something they'll just cruise past.
There's a lot less substantiating this, but I got a whiff of "yellow panic" from the treatment of China. Yes, it's an authoritarian state, and maybe my brain is poisoned and beaten-down by too much Twitter, but is that really the bigger threat to our democracy and "way of life," as it were? Is China becoming a giant global empire within 30-40 years a bigger threat than how our current companies enable fascism and are susceptible to state attacks? Consider companies like CloudFlare harboring white supremacists. Consider that Jack Dorsey can't recognize right-wing grift for what it is. Consider that Facebook responded to understandable civilian criticism by going "what about the Jews, like Soros?!"
Additionally, if AI is the next big thing and they become a global superpower, we presumably don't stay one, and… is that so bad? The US can be just another place that used to have gobs of power but now has people mostly doing their thing, like France? How horrible!
Again, I appreciated a lot of ideas I hadn't given much thought to, and I don't think about China and the trends there as much as I should. But in the context of the book's treatment of its other subjects, I didn't view China's ascendance as a threat, given how the US itself is dismantling its own government and sliding into fascism.
The Benevolence of American Tech Companies

From the delightful and creepy This Person Does Not Exist, where you can refresh the page and see a new face of someone who doesn't exist, completely generated by AI.
Where I mostly clawed at my face was the treatment of Google, Microsoft, Apple, Facebook, IBM, and Amazon (she shortens them to "G-MAFIA"). From reading this book, these companies can do only one thing wrong, which is be bad at diversity. But anything else? No, that's because someone else is making them that way. They'd totally be willing to compete with China if only Wall Street didn't make them accountable to quarterly returns. How can they keep doing any long-term work if the people who literally own them demand they make money in the short-term?
This is also why they won't collaborate and share AI resources or data: Mean Old Wall Street and returns on revenue! The leaders of those companies hate being billionaires who get richer every quarter; they'd rather work on hard AI problems for the benefit of society.
That phrase, by the way, is maddeningly underspecified: there's a lot of talk about how we must steer AI to the "betterment of humanity" and not get too lost in the politics, without acknowledging how intrinsically political any interpretation of that phrase would be. We can't even get people to agree that "fascism is wrong" anymore, so I don't know what hope we have for "is this good for humanity?"
The truth is, the leadership of these companies can do a lot of things: take a stand and tell shareholders to hold tight for a few quarters while they fix their culture or start long-term research. They could work to change the system that leads to so many corrupt boards and/or makes growth hacking and IPOs so attractive (SV has no problem getting into lobbying). They could start new companies that aren't publicly-traded or VC-funded. There are a ton of extremely straightforward things tech leadership could do that they take off the table because it would mean dropping share price in any way (in other words, if they were willing to give literally anything up); they'd rather be rich and/or friendly with the people they make rich (I go further into this here and here).
Another theme: the lack of focus on Bigger Problems in AI is because of consumers (especially those damn millennials!), with their Face Books and their Snap Chats and their selfies. Page 180, explaining the causes of a less-good future, emphasis mine:
None of us—not individual consumers, journalists, or analysts—give the Big Nine any room for error. We demand new products, services, patents, and research breakthroughs on a regular cycle, or we register our complaints publicly. It doesn't matter to us that our demands are distracting AI's tribes from doing better work.
Page 190, same chapter:
Around the world, everyone is talking about our learned helplessness in the age of AI. We can't seem to function without our various automated systems, which constantly nudge us with positive or negative feedback. We try to blame the Big Nine, but really, we're the ones to blame.
It's been especially hard on Millennials, who thirsted for feedback and praise when they were kids and initially loved our varied AI systems—but developed a psychological tic that's been hard to shake. When the battery in our AI-powered toothbrush dies, a Millennial (now in her 40s) must resort to brushing her teeth the old-fashioned way, which provides no affirming feedback. An analog toothbrush gives no feedback, which means she can't get her expected hit of dopamine, leaving her anxious and blue.
🙄
By 2023, we have closed our eyes to artificial intelligence's developmental track [...] We helped the Big Nine compete against itself as we indulged in our consumerist desires, buying the latest gadgets and devices, celebrating every new opportunity to record our voices and faces, and submitting to an open pipeline that continually siphoned off our data.
And remember: when employees at Google get free lunches (one memorable one from my time there included champagne gelato, during a "get scrappy" initiative asking employees to find ways to cut costs), we shouldn't berate their perks!
[...] The food on the G-MAFIA's campuses isn't remotely comparable: organic poke bowls at Google in New York, and seared diver scallops with maitake mushrooms and squid-ink rice at Google's office in L.A. For Free.
[...]
My point is this: it's really hard to make the case for a talented computer scientist to join the government or military, given what G-MAFIA has to offer. We've been busy funding and building aircraft carriers rather than spending money on talented people. Rather than learning from G-MAFIA, we instead mock or chastise their perks.
I actually strongly agree with the larger point about investing in people for public programs (though, again, context: it's hard when we shut down the government regularly over partisan battles to assuage old racists in the Dakotas); the bolded part is what irked me tonally. You can think government should invest in hiring and developing people and also think that tech's perks are more fit for a crèche than a workplace, and a sign of some kind of market failure when these companies make enough money to provide them while SF is covered in feces and they pay no taxes.

Little Deep Dream break. Found the image here.
She does frequently criticize the lack of representation at AI's key shops and encourages them to do better. But where does it figure in that they absolutely haven't done better in the decades we've been having this conversation? Is there any awareness of the zero improvement in the numbers they were publishing in the mid-2010s, each with a "commitment to improve"? That you could fit all the Black people Facebook hired in 2015 on a bus? That Twitter's (at the time) only Black Engineering Manager left, specifically naming this as something they were hopelessly bad at? That a Black Facebook executive wrote "Facebook is failing its black employees and its black users"?
We should trust the people who paid Andy Rubin $90m to leave after abusing his power? The people who colluded to keep engineer wages down? The culture that produced the James Damore memo (which, as a Xoogler, I can tell you was only remarkable in that it leaked; eng-misc was full of opinions like this)?
There are Bad Actors in her world, but they're oddly never these people. Living in this world and paying attention, I promise you: if you look long enough from pig to man and man to pig, you soon see there isn't a difference. Many of the people on the boards and in the senior leadership of tech are responsible for why it's bad, and can't be trusted to make ethical decisions on their own unless another monster with some teeth arrives.
I think that monster would be regulation, but she's solidly in the camp of NEVER EVER REGULATE. The "don't regulate" note gets hit many times in the book, but this, at the very end, is literally the deepest dive into why it would be bad that I could find:
Lastly, regulations, which might seem like the best solution, are absolutely the wrong choice. Regardless of whether they're written independently by lawmakers or influenced by lobbyists, a regulatory pursuit will shortchange our future. Politicians and government officials like regulations because they tend to be single, executable plans that are clearly defined. In order for regulations to work, they have to be specific. At the moment, AI progress is happening weekly—which means that any meaningful regulations would be too restrictive and exacting to allow for innovation and progress. We're in the midst of a very long transition, from artificial narrow intelligence to artificial general intelligence and, very possibly, superintelligent machines. Any regulations created in 2019 would be outdated by the time they went into effect. They might alleviate our concerns for a short while, but ultimately regulations would cause greater damage in the future.
The above paragraph never specifies why government is unsuited to tackle these problems while the lumbering giants of the G-MAFIA are (giants who, by the way, frequently fight within themselves: look at Google's failed chat apps and payment products over the last decade). Decisions are slow to come out of tech giants too: see the standardization of new web tech via the W3C, where specs take years to develop. Tech companies also like plans to be specific and actionable. And limiting "innovation and progress" might also limit "damage," which Facebook and Twitter and Google have more than demonstrated they're capable of producing.
The main argument, as best as I can read it, is that AI moves too fast to regulate, and that regulations would slow the companies down. But slowing them down to limit damage is something she not only understands well but herself advocates for, at least if it's done by an organization she wants to exist but which currently doesn't (she calls it GAIA; it's like the W3C but for AI, with more interdisciplinary folks). Emphasis mine:
GAIA members should voluntarily submit to random inspections by other members or by an agency within GAIA to ensure [a values framework the author proposes] is being fully observed. [...] This process would most assuredly slow down some progress, and that's by design.
And when she talks about freeing these Nice Tech Companies from Wall Street, she once again says slowing them down is not only Just Fine, but Necessary (emphasis mine) if it comes from market-based solutions via controls on investment:
[...] any financial investment accepted or made by the Big Nine should include funding for beneficial use and risk mapping. For example, if Google pursues generative adversarial network research, it should spend a reasonable amount of time, staff resources, and money investigating, mapping, and testing the negative consequences. A requirement like this would also serve to curb the expectations of fast profits. Intentionally slowing the development cycle of AI is not a popular recommendation, but it's a vital one.
There's no acknowledgement of the various kinds of regulation one could impose (it's just "regulation: no"). There are no comparisons to industries we've already regulated (e.g. has HIPAA hurt the medical tech sector?). There's no mention of how regulation can shape behavior independent of how well or poorly it's enforced (tax laws aren't enforced very well; does that mean it was bad to draft them?). There's no mention of how moral suasion has failed.
It's disappointing to see such a dismissal of the only mechanism with teeth that exists specifically to advocate for the public good, especially after seeing government mobilization and civic engagement hit record highs after Trump's election.
Another little break. Sound on. Couldn't find the original author of this viral video; I pulled it from here.
What is a "futurist"?
This all got me thinking about what a "futurist" even is, and what expectations one should have of them. Like treating Fox News as a reputable journalism outlet, or treating the mid-aughts Daily Show/Colbert Report as "just comedians," the "futurist" label lets someone do the fun parts of journalism and fiction while evading the downsides of either.
The interviews, the supporting evidence, and the anecdotes are all of Earth, today, but as shown above they can be pretty wrong or missing important context. In a reporting context, I feel like there'd be more care to avoid overreaching, and some of these errors would be considered misleading or embarrassing. I don't know if we hold "futurism" to the same standards.
Similarly, the futures presented at the end take pretty great liberties with what's possible given what we already know, and also fail to demonstrate a solid understanding of how volatile and multifaceted the future really is (do we really think nobody new will disrupt this space and it'll be These Nine the whole time? Remember that in the 90's nobody had heard of Google or Facebook, and even in the mid-aughts we'd not heard of Uber, Spotify, or Slack. Do we think people will really be so uniform in their adoption and acceptance? How will the rest of global politics play out?).
In the context of pure fiction, this is fine, but part of our brains is supposed to presume "this could really happen, on our real Earth!" and the power given to that presumption irks me. It's like taking Malcolm Gladwell too seriously.
You might still like it
Broadly speaking, my beefs with the book fit into two categories:
- Its politics don't align with mine, particularly on accountability for what sucks about the world today and what we could do to make it better.
- It gets a lot of particulars wrong.
But, broadly speaking, I feel that:
- Most people's politics don't align with mine and are more centrist or classically liberal, so this probably won't bother them at all.
- Most people are content to call the monster "Frankenstein," and find the guy telling you Frankenstein was actually the doctor to be a killjoy they're just fine without.
I'm personally inclined to believe self-described futurists who produce work like this aren't doing us a favor: it feels like cheerleading the powerful with vague, non-collectable promises of candy. I'd rather go to the candy that exists today, in front of me (the present, with lots of cool shit happening), or the fun promises that know they're just fun promises (fiction).
But there's a good chance I'm just being an asshole, and I feel like if you do enjoy consuming this kind of thing, you should absolutely go with Amy Webb. The book is stimulating and covers an impressive amount of breadth. It's clear a ton of research went into it; I was pleased to see my favorite adversarial example namechecked. It obviously provoked a lot of thinking and will probably steer how I follow some topics going forward.
When I was a teenager I read books like The Code Book and Mutants: On Genetic Variety and the Human Body, which moved and inspired me, showing me how large and interesting the world is, filling my head with possibilities, and reminding me I could have agency in it. I'm currently in this industry and very mad at it, but I'm 1000% sure there are people who'd feel the same joy and inspiration after reading this. I'd give it to a hungry person in my life, then have a wine + cheese night debating all ☝️ with them.
Thanks for the read! Disagreed? Violent agreement!? Feel free to join my mailing list, drop me a line at , or leave a comment below! I'd love to hear from you 😄