Reading Update - February 25, 2026
Superintelligence (Nick Bostrom)
I read part of Superintelligence by Nick Bostrom, one of the earliest philosophers to take the risks of AI seriously. At the start, I conflated superintelligence with artificial intelligence. It turns out AI is just one of many roads to superintelligence — which Bostrom defines as any intellect that greatly exceeds human cognitive performance. Biological evolution could get us there too. Whole-brain emulation. Genetic selection. The definition is broader than I expected.
On the surface, Bostrom's conceptual framing is grounded and clear. He was writing at a time when there was no dominant path toward superintelligence, so he enumerates many scenarios: powerful models and data centers on one end, biological evolution on the other. Obviously we now know which path pulled ahead. My book group found the chapters on alternative routes redundant — there's a certain irony in meticulously mapping roads nobody ended up taking.
The book narrows into doomsday territory surprisingly fast. By chapter 8 we're already discussing how to prevent total human annihilation. I remember the chapter on perverse instantiations: give an AI a terminal goal, and it may satisfy that goal — along with the intermediate subgoals it generates — in ways totally contradictory to human flourishing. Want to maximize human happiness? The AI might just perpetually stimulate dopamine production while keeping you technically alive. Eerie.
But here's where I push back. These scenarios are too logical for this messy world. They assume a singleton — a single all-powerful AI system that achieves superintelligence. I don't think that's the right picture yet. We can already see how widely available this technology is to humanity. In the early takeoff phase, incredibly diverse use cases will emerge, driven by human motivations that steer it in different directions. The AI won't follow one misaligned value... it'll follow thousands. There is no single intention behind, or clear way to steer, this technology, and I'm doubtful human institutions will coordinate in time. We should have started years ago. Given how ill-prepared we are, we need to be as dynamic in our thinking as the models are in their capabilities, thinking in first- and second-order effects. An increasingly steep hill to climb.
Eventually, my book club gave up on Superintelligence and pivoted to philosophy, which I'll talk about briefly in the next section.
The Story of Philosophy (Will Durant)
We picked up The Story of Philosophy by Will Durant. I won't say too much because we've barely scratched the surface. But chapter 3 covers Francis Bacon — one of the first philosophers to bridge religion with science.
In the final years of his life, Bacon wrote New Atlantis, a fictional story weaving together his ideas of what an ideal society would look like. A crew of sailors stumbles upon a hidden island civilization. The natives are technologically superior and yet surprisingly generous — they give the sailors an entire building to stay in, tend to their sick, and ask nothing in return.
I immediately thought of Wakanda in Black Panther. A technologically advanced civilization hidden from the world, guarding its secrets because the outside is morally damaged. Knowledge is power, and the leaders of this nation understood that deeply. Bacon instills Christianity as the moral authority of this fictional country, establishing a relationship between science and religion rather than a conflict. This was like an early blueprint for technocracy.
The Interpretation of Dreams (Sigmund Freud)
Where do dreams come from? Freud's central claim: dreams are distorted wishes and desires that we bury in our unconscious.
I find myself disagreeing with a lot of it — partly on logical grounds, partly from personal experience. I tend to think of myself as not having a large subconscious (though I suppose I'll never know). I'm aware of most of my experiences and replay them throughout the day, not while I'm sleeping.
My main concern is how difficult it is to call this a theory, let alone a science. It's brutally hard to test empirically, because the act of observation is biased by your awareness of, and intention in, performing the experiment. But the biggest issue is that it's unfalsifiable. If a patient says "I don't notice any wish tied to my dream," seemingly disproving Freud, he would simply respond that the mind is actively hiding it — that the dream is so distorted you can't see through it. There needs to be an act of attention and awareness that converges on a path from wish to dream. If the patient finds a connection, Freud is right. If the patient doesn't find one, Freud is also right. Haha, you can't win.
Still, I think his mental models are useful tools for psychoanalysis. I understand why his bold claims persist to this day. It's a lens of sense-making for treating the particular individual, not a general theory applicable to all human beings.