In 1937, the Washington State Legislature allocated funds to construct a new bridge spanning the Puget Sound.
The initial design proposal came from Clark Eldridge, a local civil engineer. He suggested a tried-and-true conventional suspension bridge design, with a set of 25-foot-deep trusses to sit beneath the roadway and stiffen it against the wind.
Eldridge’s proposal would have looked like this:
But the State of Washington thought Eldridge’s design was too expensive (all those trusses), so they went bridge shopping.
Leon Moisseiff (of Golden Gate Bridge fame) had recently published a groundbreaking theory called “elastic distribution”, which proved mathematically that you could build a bridge that was thin and flexible rather than wide and rigid, allowing the bridge to move with the wind rather than oppose it.
Moisseiff proposed a design with no trusses at all, only a very skinny, very flexible bridge deck. The math checked out, and it was a lot cheaper than Eldridge’s proposal, so they went ahead and built it. This is what Moisseiff’s bridge looked like:
The Tacoma Narrows Bridge was elegant, economical, and mathematically perfect. Moisseiff called it “the most beautiful bridge in the world.”
There was a slight problem, though. When the wind picked up, the bridge would start doing a kind of wave that made the cars bounce up and down. The locals were a bit concerned, but also amused. They nicknamed the bridge “Galloping Gertie”.
This wave motion is called resonance. Every object has natural frequencies at which it will vibrate. If you hit a wine glass with sound at exactly the right frequency, it’ll vibrate, and if you make the sound loud enough, the glass will shatter. Same thing with bridges: if you hit the bridge deck with a steady wind from the perfect angle, it’ll start to oscillate in a waveform.
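For the mechanically inclined, here’s a minimal sketch of that idea: a textbook driven, damped oscillator with made-up numbers, not a model of the actual bridge. Drive the system near its natural frequency and the steady-state amplitude spikes:

```python
# Minimal sketch of resonance: a damped oscillator driven by a periodic
# force. All numbers are illustrative, not measurements of any bridge.
import math

def steady_state_amplitude(drive_freq: float,
                           natural_freq: float = 1.0,
                           damping_ratio: float = 0.05,
                           force: float = 1.0) -> float:
    """Steady-state amplitude of a driven, damped harmonic oscillator.

    Closed-form result for x'' + 2*z*w0*x' + w0^2*x = F*cos(w*t):
        A = F / sqrt((w0^2 - w^2)^2 + (2*z*w0*w)^2)
    """
    w, w0, z = drive_freq, natural_freq, damping_ratio
    return force / math.sqrt((w0**2 - w**2) ** 2 + (2 * z * w0 * w) ** 2)

# Sweep the driving frequency: the amplitude spikes where it matches
# the natural frequency (1.0 here). That spike is the galloping.
for w in [0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 2.0]:
    print(f"drive frequency {w:.2f} -> amplitude {steady_state_amplitude(w):6.1f}")
```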
The problem of resonance is well understood in bridge design, so Moisseiff was not that worried. His engineers added tie-down cables, moved weights around, installed hydraulic dampers, and assured the public that everything was under control: they knew what the problem was and they were solving it.
On November 7, 1940, at 10 a.m., the Puget Sound experienced a fairly typical 40 mph windstorm, nothing the bridge had not already handled. Moisseiff’s bridge began its usual galloping. And then, for no clear reason, it began to display a form of oscillation never before seen in this bridge or in any bridge ever built: it started twisting.
Instead of moving up and down in a waveform, the bridge deck began rotating about its central axis, twisting as much as 45 degrees back and forth. There’s incredible footage here, absolutely worth watching. Unable to handle the rotational stress, the bridge tore in two.
What happened?
It turns out Moisseiff’s novel bridge had uncovered a novel mode of structural failure for bridges: what became known as aeroelastic flutter. Wind passing over the bridge created small vortices, which caused the deck to twist in just the right way, creating more vortices, which caused the deck to twist more, and so on, in an explosive feedback loop.
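The difference from plain resonance is worth dwelling on. Resonance needs a rhythmic push from outside; flutter pushes itself. A crude way to see it (an illustrative sketch, not an aerodynamic model) is as negative damping: normal damping bleeds energy out of each swing, while flutter feeds energy in proportion to the motion itself, so every twist is bigger than the last:

```python
# Illustrative sketch of why flutter explodes: treat the wind's feedback
# as negative damping. This is a cartoon, not an aerodynamic model.

def peak_twist(damping: float, steps: int = 6000, dt: float = 0.01) -> float:
    """Integrate x'' + 2*damping*x' + x = 0 with semi-implicit Euler
    and return the largest twist seen along the way."""
    x, v, peak = 0.01, 0.0, 0.0  # small initial twist, at rest
    for _ in range(steps):
        a = -2.0 * damping * v - x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Positive damping: the structure resists motion, the twist dies out.
print(f"normal damping: peak twist {peak_twist(+0.05):.3f}")
# Negative damping: motion feeds motion, the twist grows exponentially --
# the feedback loop that tore the deck apart.
print(f"flutter regime: peak twist {peak_twist(-0.05):.3f}")
```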
Moisseiff had modeled the elastic distribution math correctly, but what he didn’t count on was the unique intersection of structural resonance and aerodynamics creating a never-before-seen twisting feedback loop that would destroy the bridge (and his career) in a matter of minutes. Classic Black Swan.
Won’t Happen Again
Just days after the bridge fell, a whole army of experts, researchers, physicists, engineers, and emergency commissions descended to figure out what had happened: the who, what, when, where, and why.
They realized fairly quickly that the twisting was likely related to aeroelastic flutter, which had previously been seen only in aircraft. They debated it, built little wind-tunnel models, wrote dozens of papers, and argued about whether it was truly an example of resonance (and continue to argue about it to this day). They published articles and added aeroelastic flutter to the checklists of bridge engineers everywhere.
Again, it’s this pathological need to sap the Black Swan of its horrifying unknowability by explaining it, defining it, naming it, and making all kinds of specific preparations to assure ourselves it will never happen again.
But this kind of one-off thinking totally misunderstands the nature and sheer volume of Black Swans out there. The next Tacoma Narrows Bridge won’t be a bridge. It’ll be a Boeing 737 crashing into the runway, or Silicon Valley Bank collapsing in 24 hours, or Accutane getting recalled for causing birth defects, or Democrats assuming Donald Trump could never actually win, or top-tier investors shoveling billions into FTX months before it was exposed as a fraud, or Biden playing NATO brinkmanship with Putin right up to the invasion of Ukraine.
It’s technically true that Moisseiff’s bridge failed due to aeroelastic flutter. But the real reason the bridge failed is that he assumed his model of reality was an accurate depiction of reality. He mistook the map for the territory.
Another kind of bridge
The second bridge we’re going to talk about isn’t actually a bridge. It’s an investment philosophy from a really smart, really old guy called Benjamin Graham.
Benjamin Graham was a professional investor and finance lecturer at Columbia University in the 1930s. Today, he’s known as the “father of value investing”. Ben wrote two of the most important investment books of the twentieth century: “Security Analysis” (1934) and “The Intelligent Investor” (1949). The central thesis of his work is a concept that he called the “margin of safety”.
Margin of Safety: Let’s say you’re considering buying a company you think is worth $100 million. Graham says: only buy the company if you can get it at a price of $70 million or lower. Why? Because you might be wrong! The market might move in a way you don’t expect. There might be factors in the business you don’t fully understand.
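As a back-of-the-envelope sketch, the rule is one line of arithmetic. (The 30% discount here is just Graham’s example above, not a universal constant.)

```python
def max_purchase_price(estimated_value: float, margin_of_safety: float = 0.30) -> float:
    """Highest price worth paying, given a fallible estimate of value."""
    return estimated_value * (1.0 - margin_of_safety)

# Graham's example: a $100M valuation justifies paying at most $70M.
print(max_purchase_price(100_000_000))  # 70000000.0

# The cushion at work: even if your estimate was 20% too optimistic
# (true value $80M), you still paid less than the business is worth.
```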
The central teaching of the most brilliant investment mind of the 20th century is that you might be wrong.
This is a very good way to understand reality:
A margin of safety is achieved when a security is purchased at a price sufficiently below underlying value to allow for human error, bad luck, or extreme volatility in a complex, unpredictable and rapidly changing world.
Investing with a margin of safety is akin to driving with a seatbelt on; it does not prevent accidents, but it can significantly reduce the severity of their consequences.
The need for a margin of safety is due to the future being unknown and unknowable. It acts as a cushion against errors and unforeseen negative developments.
— Seth Klarman
This Klarman guy understands exactly what we’ve been talking about here: the future is unknown and unknowable. Human error, bad luck, and extreme volatility are facts of life.
And this is the central idea of the margin of safety: the only way to survive in a Black Swan world is to make plans that assume you don’t know everything, that assume you’re missing all kinds of important information, that assume things could go catastrophically wrong — and still make room for your survival.
In the wise words of Warren Buffett:
You don’t try to buy businesses you think are worth $83 million for $80 million. You leave yourself an enormous margin of safety. You build a bridge that 30,000-pound trucks can go across and then you drive 10,000-pound trucks across it. That is the way I like to go across bridges.
Now that’s how you build a bridge.
Margin of Safety
This margin of safety idea is a wonderful starting place for developing a Black Swan strategy.
In some sense, margin of safety is a very familiar concept, a kind of folk wisdom that all grandmas and grandpas know.
When you travel internationally, bring some extra cash, just in case. Also get to the airport a few hours early. You never know.
Have a rainy day fund in case of unforeseen accidents or expenses.
When you’re hosting a dinner, cook more food than you think people will eat — better to have too much than too little.
When you’re completing an assignment, aim to get it done a few days early, so that you don’t have to cram at the last minute or run out of time if something unexpected happens.
Have a spare key. Back up your files. Print an extra, just in case. Wear a seatbelt. Buy insurance. Bring a water bottle, even if you think you won’t need it.
We have so many little sayings about margin of safety — “better safe than sorry”, “don’t cut it close”, “have a plan B”, “leave room for error”.
But we should also learn to think about margin of safety more broadly, beyond these concrete, everyday contexts:
In running a company, keep much more cash on hand than you think you’ll need, so you can survive an economic downturn.
In a relationship, invest in more communication and bonding than you think you’ll need.
When you build a skyscraper, have two staircases on opposite sides of the building, even if you think your building is fireproof (the World Trade Center was not built like this).
Another way of framing margin of safety is to not trust your models of the world and the future. I assume that I am wrong, and therefore I build in extra capacity even when I don’t think I need it.
Note to reader: Ok I am stuck here. In some sense, I don’t feel that the direction I’m taking is very compelling. I really want to get into this idea about a different way of seeing the world — where instead of trying to predict it, you learn to see patterns of fragility. I think that’s the key idea. In linear systems (like school, work, games), you can basically figure out how the system works. Turn in homework and the sum of all your homework assignments = your grade. In work, do good work = your pay. In football, score points = whether you win or lose. So the way to play is to calculate a plan and execute it.
But then we have all these non-linear systems (the economy, startups, companies, geopolitics, a lifetime) that are actually incredibly difficult to understand. You do something and it may have all kinds of effects you don’t anticipate because there are feedback loops all over the place and non-linearities. E.g. the guy who kept running into the skyscraper window and then suddenly it gave way.
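A toy contrast (my illustration, with made-up numbers): in the linear system, output tracks effort step by step; in the non-linear one, nothing visible happens until a hidden threshold, and then everything happens at once:

```python
# Toy contrast between a linear and a non-linear system. The threshold
# value is invented for illustration -- the point is that it's invisible
# from the outside until it's crossed.

def grade(assignments_done: int) -> int:
    # Linear: every unit of effort shows up in the output.
    return 10 * assignments_done

def window(hits: int, hidden_threshold: int = 7) -> str:
    # Non-linear: damage accumulates invisibly, then releases all at once.
    return "intact" if hits < hidden_threshold else "shattered"

for n in range(1, 9):
    print(f"{n} assignments -> grade {grade(n):2d} | {n} hits -> window {window(n)}")
```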
Theories, games, and human-invented processes tend to be linear. The physical world, the social world, the economic world, and the geopolitical world tend to be non-linear. In the linear world, the smarter you are and the more you know, the more you know what to do. The best plan is to be smart, predict the future, and execute on it. In the non-linear world, you can’t ever really calculate the future; it’s too hard. And there are all kinds of monsters and magic waiting out there for you. It’s a magical and horrifying place. So what’s the proper way to be in such a world?
One thing to point out is that we often trick ourselves into thinking we are in a linear system when it’s actually non-linear. Sometimes it’s hard to tell for a while; that’s the nature of non-linearity. Again, the guy running into the skyscraper window. In some sense, this is the broader lesson here: the map is not the territory. But even more so for complex systems, your model of reality will never come close to the fidelity needed to predict reality. And when we mistake a non-linear system for a linear one, we take this planning approach and (1) we make predictions again and again and get them wrong, (2) we put ourselves in excessively risky positions without realizing it, and (3) we don’t understand how to increase our number of surprise positive outcomes (there is a way to do this) because we are narrowly focused on only the positive outcomes we can imagine in our heads.
I think that’s a better approach to doing this.