Disrupting Hell!

 

 


 

Disrupting Hell:  Accountability for the Non-believer

by Professor Clayton M. Christensen, Craig Hatkoff and Rabbi Irwin Kula  
For millennia, religion has been one of civilization's primary distribution channels for moral and ethical development, helping us determine what is right and wrong and creating accountability for our actions. Yet for an increasing number of people, religion is no longer getting these two critical jobs done. We argue that without a strong moral and ethical foundation, civilization's prospects for peace and prosperity will remain elusive and at some point simply founder. Perhaps we need innovation in "moral and ethical" products, services and delivery systems for non-consumers of traditional religion.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Seeing Eye of God and the inscription "Annuit Coeptis" (Latin trans: He approves of our undertakings) first appeared on the dollar bill in 1935.

The effective functioning of capitalism and democracy depends not only upon clear rules and mechanisms of accountability for obvious transgressions but also upon people voluntarily obeying "unenforceable" rules– i.e. behaving well when no one is watching.  America's market economy has worked largely because its people have chosen to obey the rules even when they can't be caught or are highly unlikely to be.  Why is that?  In the mid-nineteenth century, de Tocqueville offered one explanation: an individual's fear of being eternally relegated to the netherworld strongly discouraged bad behavior. He warns, "when men have once allowed themselves to think no more of what is to befall them after life, they readily lapse into that complete and brutal indifference to futurity."

On the Great Seal of the United States, our Founders included the Divine Seeing Eye that determined whether we would be rewarded or punished in the next life.  But what happens when more and more people stop believing in that Divine Seeing Eye? A temporal Seeing Eye, albeit in various forms, starts to emerge: Big Brother is watching. Almost all successful market economies have historically had strong religious underpinnings, with the notable exceptions of Singapore and China.  Those two particular regimes rely upon command-and-control societies that function very effectively even without religion.  Everyone is watched and everything is seen; justice for transgressions is meted out swiftly and severely. With limited due process, bad behavior carries serious consequences– not exactly the environment where most Americans would want to live.  But what happens when people neither believe in the Divine Seeing Eye nor want to live under a temporal Seeing Eye?  Who or what will hold us accountable? Perhaps we need another kind of Seeing Eye– the Seeing I.

Religion has wielded a powerful set of accountability technologies, such as the promise of reward (Heaven) and the threat of punishment (Hell).  Yet today, religion's role in fostering personal accountability is no longer getting the job done for a rapidly increasing number of people; nearly one-third of the population under 35 sees no meaningful role for religion in their lives.  So what set of moral and ethical principles, codes, rules and practices will hold behavior in check?  And how will these be delivered if religion is no longer the primary distribution channel for moral and ethical conduct for a wide swath of civil society?

Disruptive innovation in religion can help find alternative products and delivery systems that cultivate moral and ethical behavior.  These new systems will emerge via widely distributed, bottom-up movements and processes, starting with the intuitions of a new breed of early moral adopters. These early adopters will need to reach non-consumers of our traditional moral and ethical products and services in unconventional ways.

Religions are facing serious product and distribution issues. Just as businesses are subject to the forces of disruptive innovation, religions' "business models" are also subject to threats to their sustainability, along with familiar business problems: low utilization of total capacity (e.g. what to do about those empty pews) and marginal costs exceeding marginal revenues (where profitable activities prop up the less profitable or outright money losers).  Like the cable companies that face cord cutting, religions are also facing pressure from product unbundling and disaggregation, along with an overall declining market share from changing demographics.  Their expensive brick-and-mortar distribution models create exorbitantly high fixed operating costs.

The Peanut Butter Paradox adds a whole new framework to the moral dimensions of accountability in an interconnected world.

Religions have become subject to the condition we call "creeping feature-itis" (too many features, too inaccessible, too much expertise required, too complicated, too expensive for many); religions might be well advised to revisit the jobs that need to get done and to find new ways to reach the non-consumer.  Disruptive Innovation Theory can help them understand the market dynamics and develop new strategies and approaches. The incumbents in religion need to disrupt themselves; otherwise, they too will face the music and be disrupted by external forces.  The onslaught will likely come from low-end, user-friendly products and more efficient delivery systems that are good enough to get the job done: to help us create a more ethical and moral society with real personal accountability.

The risk of unbundling in the cable industry provides a cautionary tale.  Will religions face cord cutting as well?

But even if we do devise new modes of personal accountability, another, perhaps more vexing, challenge is the sheer complexity of today's financial, political, and cultural systems, which unintentionally undermines personal accountability.  Conflicting laws and regulations, arcane accounting, and byzantine systems of taxation reward those with highly specialized expertise and the knowledge of how to circumvent the clumsily cobbled-together best intentions of lawmakers and regulators.  In a post-industrial society with hyper-connected, highly complex webs of interactions, the chain of personal responsibility gets hidden from view, absorbed into a larger networked set of activities and players. Bad behavior (both intentional and unintentional) is easy to commit and increasingly hard to identify. Responsibility becomes like peanut butter thinly spread across a piece of toast.

As the saying goes, one bad apple spoils the bunch.  In like manner, it's hard to identify which bad peanut caused the batch of peanut butter to spoil. If it's difficult to even know when we are doing something wrong, and equally hard to find the transgressor, then we are on the cusp of an accountability deficit that limits the progress of civilization. We refer to this phenomenon as the Peanut Butter Paradox: all individuals are acting morally, yet the collective behavior leads to utterly immoral results.  The individual peanuts are all doing what they are supposed to do, but the entirety of the effort produces rancid peanut butter. Which peanut should be held accountable?

Make versus buy? Easy!
When was the last time you made peanut butter at home?

If you want peanut butter desperately, running out to the local supermarket to buy a jar of your favorite brand is more cost-effective than buying raw peanuts and making peanut butter at home. It's the classic "make versus buy" decision that we take for granted these days. Admittedly, in a future filled with 3D printers things might change. But for now, with peanut butter, as with most necessities and conveniences of modern life, the "buy" usually beats out the "make" decision handily.  The obvious explanation is Adam Smith's division of labor.

Pinheads: Capitalism amoral? Greed is good? Invisible Hand? Most "capitalists" have never read Adam Smith's magnum opus– "The Theory of Moral Sentiments." They routinely stick to "Wealth of Nations."

At first blush, Smith seemed to believe the division of labor was the greatest thing since sliced bread.  Or at least it was the best way to make sliced bread. If you have managed to get about halfway through the 800-odd pages of Wealth of Nations, you probably stumbled upon Smith's harsh indictment of the societal consequences of the division of labor.  Smith warns: "The man whose whole life is spent in performing a few simple operations… loses the habit of exertion [of thought] and generally becomes as stupid and ignorant as it is possible for a human creature to become."

In a world of mass-market consumerism, the production of almost all goods and services entails more and more specialization. The agrarian peanut farmer has yielded to the meta-corporate-military-industrial-peanut-butter complex: growing, harvesting, processing, shmushing, packaging, marketing, and distributing this throat-clogging delicacy that touches all aspects of human existence.

Smith broke the division of labor down to ten or so men performing 18 distinct operations to make pins. Together in one factory, they could bang out 48,000 pins in a day, versus one man trying to make the entire pin by his lonesome, who could scarcely make one pin in a day.  In today's world, the number of tasks, sub-tasks, and sub-sub-tasks, along with the matrix of "specialists" needed to turn peanuts into peanut butter, is almost unfathomable.  So will Smith's caveat come home to roost? Will the division of labor lead us down a parallel path where we become morally "stupid and ignorant" as well?  Through no fault of our own, this might simply be due to the collective loss of thought.

Smith's example of making pins just isn't what it used to be. Human activity across the spectrum has become rather more complicated. It feels like everything spews forth from global networks and systems using advanced technology, involving many parties and many processes in far-flung places. There's a department for everything.   Our minds begin to adopt a job-shop mentality, departmentalizing all aspects of humanity as well– home, work, family, school, charity, taxes, religion, sports, politics, etc. We move from one disconnected life department to the next, rarely taking the time, or having the inclination, for reflection. Without ongoing reflection, we begin to lose perspective of the greater whole. We deconstruct everything and tend to look at our actions in atomized pieces.

There is a real danger when behavior gets viewed as a deconstructed series of discrete actions. It's as if we are only examining the actions and intentions of the individual peanut and missing the entire batch of rancid peanut butter.  It allows the individual peanut to escape overall societal accountability by reassuringly whispering to himself: "but I was only doing my job and I did nothing wrong!"   For those with even a rudimentary understanding of quantum physics, the wave-particle duality might be helpful: the peanut is the particle and the peanut butter is the wave.  And yes, it's both at the same time.

With the onslaught of globalization and modernity, we construct webs of interconnected nodes, spheres, and linkages comprised of fluid complexes of activity. These layered complexes stretch across endless processes and the entire principal-agent continuum– whether individuals, institutions, companies, or countries– each with their own cultures, languages, contexts, and stakeholders.  But as the world just keeps on making more and more peanut butter, no one seems to stop to see if the end product is edible.  Everyone tends to keep their eyes only on their own silo of activity. But when things go wrong, due to bad behavior, it starts to get pretty sticky. Finding the culprit and bringing him to justice is no easy task. Usually the accusations of an early moral adopter– or whistleblower– against corporate wrongdoing result in the predictably canned response: "these fanciful charges are without merit and we intend to vigorously defend ourselves against them!"

Governments, businesses and civil society operate through vast webs of organizations and institutions, global delivery networks and value chains where it is increasingly difficult to pinpoint responsibility for good behavior or bad. Historically, bad behavior has been held in check in part by traditional accountability technologies (the Seeing Eye, Heaven, Hell, Purgatory, etc.). While admittedly a bit quirky (hey, it's an off-white paper), it can be helpful to think of these accountability technologies as "moral apps."  Until recently, moral and ethical behavior has been rather effectively delivered through religions and their respective portfolios of moral apps. While a bit of a stretch, a business analogy might be that the major console video game platforms (PlayStation, Xbox and the Wii) have been disrupted by casual games available for free or almost free from an endless selection on iTunes or Facebook. Remember Angry Birds and Farmville? There was even the Confession App cautiously embraced by the Vatican (albeit non-sacramentally) as a "useful tool."  But the numbers undeniably show that religion has seriously begun to lose its grip.   Almost 60% of all Christians believe Hell is not a place but merely a symbol of evil. Forty percent of America's millennials eschew religious accountability altogether. If accountability has broken down, what new set of moral products, services and delivery systems can help foster good behavior and/or keep people in line?

New accountability technologies will be necessary to fill the gap where religion has left off, particularly for non-believers– "non-consumers" in disruptive innovation parlance.   But first we must help people recognize what they are doing that is morally wrong, and when.  This will not be easy. It is time to introduce disruption into the domains of religion and ethics to restore Adam Smith's notion of "moral sentiments." This requires a new toolkit of disruptive innovations initiated by fearless early moral adopters.

Pope Francis might be a serious early adopter in the field of moral innovation.

Could Pope Francis be an early moral adopter, and be on to something really big?  Recently in his catechesis on World Environment Day (and reinforced in his epic Evangelii Gaudium), Pope Francis offered an ethical innovation. Wasting food is stealing: "Remember, however, that the food that is thrown away is as if we had stolen it from the table of the poor, from those who are hungry!" (June 5, 2013)    OMG!  "Thou shalt not steal" suddenly has a new, expanded meaning– and one that surely not everyone will agree with. Is wasting food really theft? For most of human history, where subsistence was the norm, waste wasn't even an issue. But in a world of abundance, will "Thou shalt not waste" emerge, as Francis suggests, as a modification or a whole new commandment altogether, for which we will be accountable?  Wasting food as theft just makes the discussion about "what is moral behavior?" a hell of a lot more serious in a culture absorbed with consumerism and materialism.  What the justifications are for this new definition of stealing, how we would monitor such behavior, and what punishments would be imposed (in this world or the next) for wasting food, we cannot say.  But it will be damn interesting to find out. And do I really go to Hell for wasting food?

Taco Bell might just have an interesting take on this new definition of stealing.  Recently one of their mischievous trainees was photographed licking a stack of taco shells. The photo was posted on the Internet and then went viral. This amused neither the management of Taco Bell nor its customers. Another version of the story is that the 'playful' employee was simply licking leftover taco shells used for training that were to be thrown out afterward.  The Pope's new missive raises the bar: was Taco Bell really wasting real food for training that should otherwise be shared with the hungry?  Whose moral behavior would Pope Francis be more exercised about– the employee having fun or Taco Bell using real food for training?

Once upon a time life was simple. De Tocqueville keenly observed that it was reward and punishment, augmented by a healthy fear of what would happen in the "afterlife," that kept the moral compasses pointing north. The American entrepreneur would otherwise exhibit a "brutal indifference to futurity." Being productive might help you get to Heaven, but you had better be good or else you'll go to Hell. God-fearing was deeply infused into the American psyche. Fear of consequences and personal accountability were a gravitational force that complemented Adam Smith's Invisible Hand.  Smith knew well the danger of liberating greed, ambition, and materialism as the necessary, animating energy of capitalism.   Smith's Theory of Moral Sentiments was his more important book– and the one that most people haven't read. They tend only to read the first few pages of Wealth of Nations and leave it at that.  In TMS Smith proffered a moral philosophy to hold in check the personal transgressions and egregious behavior inevitably unleashed by capitalism.   In a nutshell, Smith's moral philosophy of the "impartial spectator" is that empathy for others ensures both the production of wealth and a good society.

Over time people have become less God-fearing.  The terrifying images of Hell painted by artists such as the 16th-century Dutch painter Hieronymus Bosch once spoke to the imaginations, fearful souls and consciences of the masses;  today those works of art seem almost quaint.  Nearly 30% of Americans under 35, and 20% of all Americans, do not believe in the traditional notion of religious accountability– a God who sees all and rewards and punishes us accordingly. Even among Catholics in the U.S., 99% of whom believe in God, only two-thirds strongly believe in Hell (and it is more likely a symbol, not a place).  Curiously, 85% believe there is a heaven (and it's a place).  Go figure. Apparently, moral hazard has evaporated not only for the U.S. banking system but also for a significant percentage of U.S. Catholics.  Be it God or the Federal Reserve, bailouts are available.

Hieronymus Bosch: a tad quaint but effective in its day

This is more important than many of us imagine, because a successful market-based economy requires a high degree of trust– trust that is undermined when there is little or no accountability.  Studies show that simply hanging a picture of a pair of eyes on a wall will actually change behavior in a room.  It's no accident that "In God We Trust" is emblazoned on our currency.  Remarkably, though probably unnoticed by most, the Founders built this moral app right into the Great Seal of the United States– they plastered the Divine Seeing Eye right above an unfinished pyramid on the seal.  In 1935 FDR embedded the Great Seal on the back of the one-dollar bill.  One of those funny Latin phrases, Novus Ordo Seclorum (trans: New Order of the Ages), possibly helped him market the New Deal.  But that other funny phrase, Annuit Coeptis (trans: He approves of our undertakings), might serve as a contemporary moral app.  Next time you pull out a dollar bill, take a moment, look at the eye on top of the pyramid and ask yourself: "Am I spending this dollar wisely?"     Let us know how that works out. This little ritual might help us move from the Divine Seeing Eye to the internalized Seeing I of self-regulation for those with little or no interest in the providential.

There are few models of successful market-based economies without strong religious underpinnings. There are the Singapore and China (still a work-in-progress) models, meting out justice swiftly and severely.   The Divine Seeing Eye is replaced by a Temporal Seeing Eye that sees everyone and everything, conjuring up a brutal Big Brother that most Americans would find unacceptable– the NSA notwithstanding.  In Singapore, the apocryphal punishment for chewing gum is caning. In reality, Singapore does permit chewing gum for medical purposes, by prescription only.  But caning is alive and well.

In 1994 an American student and prankster, Michael Fay, was enjoying a semester in Singapore; he was convicted of vandalism for stealing road signs and spraying graffiti. The authorities sentenced him to 83 days in jail. More infamously, he also received four strokes of a rattan cane on the buttocks. Caning, whether for chewing gum or spray painting, would hardly fare well on the scales of political correctness in this country.  Fay went on to lead a fairly troubled life but has most likely stopped chewing gum altogether. But in America, it seems neither God the Father nor Government the Big Brother is enough to keep us in line.

But there seems to be a more vexing problem that necessarily precedes the issue of accountability: do people clearly understand if or when they are doing something wrong?  In simpler times a good place to start learning how to behave was the Ten Commandments.  It seems we’ve gotten a little rusty on the content of those ten imperatives and prohibitions. While 62% of Americans know that pickles are one of the ingredients of a Big Mac, less than 60% know that “Thou shalt not kill” is one of the Ten Commandments.  (Then again, ten or so percent of the adults surveyed also believe that Joan of Arc was Noah’s wife.)  In fact, the majority of those surveyed could identify the seven individual ingredients in a Big Mac: two all-beef patties (80%), special sauce (66%), lettuce (76%), cheese (60%), pickles (62%), onions (54%) and sesame seed bun (75%).   “Thou shalt not kill” only beats out onions.  It’s enough to make you cry. “Two all-beef patties, special sauce, lettuce, cheese, pickles, and onions on a sesame seed bun.” In one Big Mac promotion, the company gave customers a free soda if they could repeat the jingle in less than 4 seconds.

"Two all-beef patties, special sauce, lettuce, cheese, pickles, onions on a sesame seed bun!"

A catchy jingle undoubtedly made the Big Mac part of pop culture. Who could forget the 1975 launch of the McDonald's Big Mac jingle?  That jingle-laden effort takes second place for "most clever marketing campaign" in advertising history. Yet it was the unsurpassed marketing genius of film producer extraordinaire Cecil B. De Mille that surely takes the cake.  De Mille gave the Ten Commandments a special place in contemporary American culture decades before they became fodder for the "culture wars" over separation of church and state.

In perhaps the greatest marketing coup in American history, De Mille ingeniously tapped into the angst of Judge E. J. Ruegemer, a Minnesota juvenile court judge who was deeply concerned with the moral foundation of the younger generation. Starting in the 1940s, Ruegemer, along with the Fraternal Order of Eagles, had initiated a program to post the Ten Commandments in courtrooms and classrooms across the country as a non-sectarian "code of conduct" for troubled youth. But De Mille took us to the next level. In 1956 De Mille seized the opportunity to plug his costly epic film, The Ten Commandments.  He proposed to Ruegemer that they collectively erect granite monuments of the actual Ten Commandments in the public squares of 150 cities across the nation.  The unveiling of each monument was a press event, typically featuring one of the film's stars. "OMG!!! Isn't that Yul Brynner?" or "that's Charlton Heston!"  Absolutely brilliant!   Things remained copacetic until the 1970s, as this public expression of a code of conduct was apparently non-controversial.  Then all hell broke loose, leading to a firestorm that still burns to this day on the issue of separation of church and state.

Charlton Heston (aka Moses) with Cecil B. De Mille at the unveiling of a Ten Commandments monument. Film P & A budgets have been growing ever since.

"Ten Commandments" as an expression didn't even exist until the middle of the 16th century, when it first appeared in the Geneva Bible, which preceded by half a century the King James Bible, first published in 1611.  The Ten Commandments have the attributes of a disruptive innovation. They take the hundreds of laws in the Bible and boil them down to ten fundamental pearls that are more accessible and usable for the masses. Obviously not covering all the bases, they were clearly good enough to get the job done: to create a simple, accessible bedrock for moral and ethical behavior.  Serious biblical scholars view the vastly oversimplified code as a synecdoche– a part standing in for the whole– for the much more complex set of laws and regulations.

With or without the Ten Commandments, it seems that most people have internalized "Thou shalt not kill." Cognitive scientist Steven Pinker points out that, as recently as 10,000 years ago, a hunter-gatherer had a 60% probability of being killed by another hunter-gatherer. At some point in history, the prohibition against murder became a conscious ethical innovation.  By the eighteenth century BCE, the Code of Hammurabi, among our earliest surviving codes of law, prohibited murder, yet the prescribed punishment was based on the social status of the victim.  Over a period of several centuries, our moral horizon expanded once again.  The biblical commandment "Thou shalt not kill" evolved to eliminate any class distinction— clearly an upgrade.

Yul Brynner (aka Pharaoh) and Cecil B. De Mille unveil another slab of granite with the Ten Commandments at a press conference.

Ethical innovations come in fits and starts.  Yet over the long run the ethical and moral horizons of civilization widen. Today, according to Pinker, murder rates are at the lowest in history.   In the U.S. there were about 13,000 murders in 2010. If our math is correct, that represents roughly four one-thousandths of one percent of the population (assuming no multiple murders and excluding suicides); about one person in 25,000 actually kills someone. While the numbers are less clear on adultery due to methodological challenges and interpretations (single incident versus multiple incidents, it takes two to tango, etc., etc., etc.), lowball estimates suggest 20-25% of adults are guilty.  Assuming 150 million adults, some tens of millions of Americans have committed adultery, give or take a few.   It would seem not all commandments are created equal– or at least obeyed equally.
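A quick back-of-the-envelope check of that arithmetic (ours, assuming a 2010 U.S. population of roughly 309 million):

\[ \frac{13{,}000}{309{,}000{,}000} \approx 4.2 \times 10^{-5} \approx 0.0042\% \approx \frac{1}{23{,}800} \]

which is roughly one murderer for every 25,000 people, consistent with the figure above.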

Morality has become more complicated.  In simple situations with a one-to-one correlation– one person murders another, or someone steals a neighbor's horse, or someone lies in court– we know what they have done is wrong and we hold them accountable. But we live in a society comprised of perpetual power imbalances, multiple stakeholders, ambiguous situations, and complex interdependent systems.  Is releasing classified documents in response to questionable or inappropriate government surveillance heroic or traitorous? In a July 10, 2013, Quinnipiac poll, 55% of Americans considered Edward Snowden a whistleblower/hero while 34% believed he is a traitor– a 20-point shift in favor of Snowden since the earliest polls. Interestingly, normally polarized Americans are uniting against the unified view of the nation's political establishment.  The situation has become even murkier now that Snowden's activities have been indirectly validated through the Pulitzer Prize.   Determining what is right and wrong gets pretty dicey, especially when we don't know what we don't know. What we do know is: a hell of a lot of Americans really don't want Big Brother reading their emails. But what are they afraid of?

Does a banker's taking advantage of asymmetric information, or of the sheer complexity of regulations, accounting and tax codes– let alone obtuse multi-billion-dollar securities– constitute unconscionable behavior that deserves punishment? Who is responsible for lung cancer deaths– the smoker, the tobacco company, the tobacco worker or the FDA? Who is accountable for obesity-related deaths– McDonald's, corn subsidies, the overweight consumer or all of the above? Who is accountable for unsafe working conditions in the developing world– Apple, the Gap, Nike, Walmart, the local politicians, the builder– or the consumer who benefits from cheaper prices? Or are these just acceptable "costs of doing business?"

Perhaps we are undergoing the atomization of morality. Has morality become subject to the "distributed network effect," such that every action can find a morally acceptable pathway of justification? When systems, stakeholders, and situations become complexes replete with interdependencies, conditionality and incalculable unintended consequences, moral responsibility gets spread like peanut butter across a chain of activities, persons, corporations, partnerships, not-for-profit organizations, and government agencies. Complicating things further, every agent has an obligation of loyalty to their principal. Peanut butter consists of a whole lot of peanuts, so which peanut or employee do we hold accountable for screwing up a batch of peanut butter? In like manner, one bad apple spoils the bunch, but it's hard to know which one once it becomes applesauce. How should we parse accountability in a complex chain of activities where this atomization of morality allows every individual to avoid taking responsibility for the whole?  Oftentimes we have only an intuition that something is amiss or doesn't feel right, even if we can't exactly put our finger on it. Executive compensation feels like stealing; the selling of tobacco products feels like killing, as does serving unhealthy food to the masses.  Are these moral intuitions, like Pope Francis's about wasting food, actually expansions of our moral horizons or just plain silly?

A new awareness of what constitutes a moral transgression must come before accountability can be effectively scaled to help ensure enduring peace and prosperity.  This new moral awareness cannot arise without a great deal of civil discourse– and yes, even uncivil discourse.  Slavery was okay until it wasn't. Gay marriage was not okay until it was. Smoking in public places was permitted until it wasn't.  Corporate polluters were acceptable until they weren't. Changing the moral baseline is always painful.  Issues of accountability at scale are somewhat premature until we can collectively agree on what constitutes right and wrong in an extended or networked chain of "values."  It's hard to build consensus when everything is so damn complicated, with multiple stakeholders, complex systems, and ambiguous situations. Until there is an inflection point– that quantum moment– where enough people can be energized by the intuitions of early moral adopters to see that something is terribly wrong and to take action, there can be no widespread accountability.

All this calls for a new breed of moral innovators.  It will be these early adopters– gifted with unusual charisma, empathy, and humility– who expand our moral horizon. They might even have to resort to civil disobedience. They will invariably face criticism and grave risks– financial, reputational, even bodily harm– as they excite a new moral majority, many of whom will not believe in Hell but will embrace the possibility of a better "Heaven" on earth.


Keith Richards’ Second Wind: Finding the Lost Chords of Rock and Roll

"Ah! Five strings, three notes, two hands and one asshole." Keith Richards, asked to explain Open G tuning

In the late 1960s, as best Rolling Stones guitarist Keith Richards can remember, it happened during a break in a recording session for the album Sticky Fingers (released in 1971). The session was attended by guitar virtuoso Ry Cooder, who played slide guitar and mandolin on several Stones recordings (Cooder played mandolin on "Love in Vain" on the Let It Bleed album and slide guitar on "Sister Morphine" on the Sticky Fingers album). Cooder was warming up in a corner and Richards unexpectedly heard something he couldn't identify. He suspected Cooder had tuned his guitar differently than the usual "Standard E" tuning used by almost all rock guitarists. Richards asked Cooder why it sounded so different. Cooder's response would change Keith Richards' life and transform the sound of the Rolling Stones. "It's Open G tuning." Cooder had tuned his guitar like a banjo, a tuning sometimes used to play traditional Mississippi blues, almost exclusively with a slide.

The vast majority of guitarists play in Standard E: six strings tuned to five different notes (E-A-D-G-B-E). But there are other ways to tune a guitar, known as "alternate tunings." Open G uses only three different notes (D-G-D-G-B-D) across the guitar's six strings. Richards had rarely used alternate tunings and had never used Open G. But there was something Richards heard with his discerning ear that was an epiphany of biblical proportions. In short order he started to experiment with Open G to play rock and roll instead of the blues. He started to create his own chords and riffs in Open G that enabled him to play rhythm and lead guitar at the same time. This new "vocabulary" would come to define the sound of the Rolling Stones. In his epic memoir Life (2010), Richards would reflect on Open G, stating that "everything you play sounds like a god damn symphony." It was a case where less is more.

But our story really begins in the early 1960s. It was the era of the British Invasion, led by the Beatles and the Rolling Stones. When the Stones first formed in 1962, they built their reputation on electrifying performances of rhythm and blues covers of the greats– Robert Johnson, Muddy Waters, Howlin' Wolf and Chuck Berry, to name a few. However, it quickly became clear that to truly succeed and endure in the music industry, they needed to create their own material. The pressure to write original material was amplified by the phenomenal success of their contemporaries and rivals, the Beatles, and the magical songwriting of John Lennon and Paul McCartney.

The band's manager, Andrew Oldham, recognized the financial and artistic importance of original compositions and pushed Richards and Mick Jagger to start writing. As Richards later recalled, "We were told, 'You're going to be songwriters. Because that's where the money is.'" This directive set the stage for one of the most successful songwriting partnerships in music history, arguably second only to the Lennon-McCartney juggernaut.  The songwriting partnership of John Lennon and Paul McCartney had set a new standard in pop music, producing an astonishing catalog of original songs, though the studio direction of producer George Martin (the "fifth Beatle") added a great deal to the Beatles' unique sound. By the time the Beatles disbanded in 1970, Lennon and McCartney had written nearly 200 songs together, many of them international hits. Lennon and McCartney had a transcendent ability to create hooks– most Beatles fans can recognize their songs within the first few bars.

This prolific output from the Beatles created a highly competitive atmosphere in the British rock scene. The Rolling Stones, along with other bands of the British Invasion, were not just competing for chart positions and album sales; they were in a race to define the sound and direction of rock and roll itself. The Stones needed to prove that they could match the Beatles not just in popularity, but in creativity and musical innovation.

Initially, Richards and Jagger rose to the challenge admirably. Over a five-year period in the mid-1960s, they penned an impressive string of hits including "Satisfaction," "Get Off of My Cloud," "19th Nervous Breakdown," "Paint It Black," and "Ruby Tuesday." These songs showcased the Stones' distinctive blend of blues-infused rock and provocative lyrics, establishing them as worthy rivals to the Beatles. The Beatles were clean cut, while the Rolling Stones cultivated a grittier, more rebellious aesthetic. They were even forced to change the lyrics of their edgy song "Let's Spend the Night Together" to "Let's Spend Some Time Together" for their infamous performance on the Victorian-esque Ed Sullivan Show.

By the end of the 1960s, the well of compositional inspiration was beginning to run dry for Richards, the man responsible for crafting the band's iconic guitar sound. The Stones' blues-based rock was creatively restrictive; their sound was inspired by, and generally adhered to, the Mississippi blues' standard repeating 8-, 12-, and 16-bar structures.

This method of tuning the guitar in Open G, where the strings are tuned to form a G major chord when strummed open, was not entirely new. In fact, it had deep roots in American folk and blues music, particularly in the playing of five-string banjos. The configuration allows for easy playing of major chords with just one finger barring across the fretboard. But Richards saw something more in this simple tuning– a key to unlocking a whole new world of rock and roll. His discovery of Open G was like a bolt of lightning. In his own words: "It transformed my life." But Richards didn't simply adopt the tuning as it was traditionally used. Instead, he innovated, creating a unique approach that would come to define the Rolling Stones' sound for decades to come. Richards made a crucial modification to his guitar. Finding that the lowest string (usually tuned to D in Open G) was just getting in his way, he simply removed it. This five-string approach, mirroring the five-string banjo that inspired the tuning, created more space in the sound and allowed for a tighter interplay with the bass guitar.

Rather than using Open G primarily for slide guitar playing, as was common in blues, Richards began to develop his own vocabulary of chords and riffs. He discovered that he could create complex, full-bodied three-note triad voicings using just two fingers, leaving his other fingers free to add embellishments and lead lines. This led to a unique style where rhythm and lead guitar parts were seamlessly interwoven.
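For the technically curious, here is a minimal sketch of the idea (ours, not Richards'; the note names and barre logic are standard music theory, and the five-string variant simply reflects the low-string removal described above). It shows why Open G collapses to three distinct notes and why a single finger barred across one fret produces a complete major chord:

```python
# A minimal sketch (ours, not Richards'): why Open G has only three distinct notes,
# and why one finger barred across a fret yields a full major chord.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

STANDARD_E = ["E", "A", "D", "G", "B", "E"]   # six strings, five distinct notes
OPEN_G = ["D", "G", "D", "G", "B", "D"]       # six strings, three distinct notes
FIVE_STRING_OPEN_G = OPEN_G[1:]               # Richards drops the low D: G-D-G-B-D

def distinct_notes(tuning):
    """Distinct pitch classes in a tuning, in the order they first appear."""
    return list(dict.fromkeys(tuning))

def barre(tuning, fret):
    """Notes sounded when every string is barred at the same fret (0 = open strings)."""
    return [NOTE_NAMES[(NOTE_NAMES.index(note) + fret) % 12] for note in tuning]

print(distinct_notes(STANDARD_E))       # ['E', 'A', 'D', 'G', 'B']
print(distinct_notes(OPEN_G))           # ['D', 'G', 'B']
print(barre(FIVE_STRING_OPEN_G, 0))     # ['G', 'D', 'G', 'B', 'D'] -- a G major chord
print(barre(FIVE_STRING_OPEN_G, 5))     # ['C', 'G', 'C', 'E', 'G'] -- one finger, C major
```

Slide that same barre up to the seventh fret and you get D major; that one-finger movability, plus the two fingers left free for embellishments, is the whole trick.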

The result was a sound that, in Richards’ words, was like “a goddamn orchestra.” The open strings created a constant drone that filled out the sound, while the simplified fingerings allowed for greater freedom and expressiveness in his playing. Songs like “Honky Tonk Women” and “Start Me Up” showcase this new approach, with their instantly recognizable riffs that are at once simple and complex, driving and melodic.

The adoption of Open G tuning also had a profound effect on the interplay between Richards and the Stones’ other guitarists. When Mick Taylor, and later Ronnie Wood, joined the band, they typically played in standard tuning. The combination of Open G and standard tuning created a rich, layered sound with each guitarist occupying a distinct sonic space. This “weaving” of guitar parts became a hallmark of the Stones’ sound, creating a texture that was greater than the sum of its parts.

To truly appreciate the significance of Richards’ innovation, it’s worth exploring the historical context of Open G tuning. The technique has its roots in the transition from banjo to guitar in American folk and blues music. In the late 19th century, the banjo was the instrument of choice in the southern United States, particularly among African American musicians. Its distinctive sound, combining syncopated rhythms with simple melodic lines, was a crucial element in the development of blues and early jazz.

When affordable guitars became widely available through mail-order catalogs in the early 20th century, many banjo players made the switch. However, rather than learn the more complex standard tuning, many of these players simply tuned their guitars to mimic the open tuning of a banjo. This allowed them to transfer their existing skills to the new instrument and maintain the drone-like quality that was characteristic of banjo playing.

This tuning found particular favor among blues guitarists, especially those playing with a slide. The open chord allowed for easy sliding between harmonically related positions, creating the expressive, vocal-like quality that is a hallmark of delta blues. It’s this tradition that Ry Cooder was tapping into when he demonstrated the tuning to Richards.

What makes Richards’ adoption of Open G tuning so radical is the way he applied it to rock and roll. Rather than using it primarily for playing slide guitar or to mimic banjo techniques, Richards developed a whole new approach. He created a style that combined the rhythmic drive of rock with the harmonic richness of open tuning, all while maintaining the raw, bluesy edge that was the Stones’ trademark.

In many ways, Richards’ use of Open G tuning can be seen as a disruptive innovation in music. It took an existing technique and applied it in a new context, creating something that was at once simpler yet, in other ways, more complex. The simplified fingerings made it easier to play certain types of phrases, but the richness of the open strings and the new chord voicings opened a universe of new musical possibilities.

This innovation also demonstrates a crucial principle of creativity: sometimes, constraints can be liberating. By limiting himself to five strings and a specific tuning, Richards paradoxically expanded his creative palette. The restrictions forced him to think differently about how he approached the guitar, leading to new ideas and techniques.

The impact of Richards’ adoption of Open G tuning extends far beyond the Rolling Stones. Many guitarists, inspired by Richards’ distinctive sound, have incorporated the tuning into their own playing. It’s become particularly popular in various forms of roots rock and alternative country music, where its resonant, earthy quality is prized.

In the context of the Stones’ rivalry with the Beatles, Richards’ innovation came at a crucial time. While the Beatles were winding down their career, the Stones were entering a new phase of creativity. The fresh sound provided by Open G tuning helped the Stones maintain their relevance and continue producing original material long after many of their contemporaries had faded from the scene.

In the end, Keith Richards’ exploration of Open G tuning stands as a testament to the power of innovation in music. By taking a technique rooted in traditional American music and applying it to rock and roll in a new way, Richards not only revitalized his own playing but helped shape the sound of rock music for generations to come. It’s a reminder that in music, as in all creative endeavors, true innovation often comes not from inventing something entirely new, but from seeing new possibilities in the familiar. In doing so, Richards and the Stones were able to meet the challenge of continual originality, cementing their place in rock history alongside their illustrious rivals, the Beatles.

The contrast between these two legendary bands is striking and illuminating. The brilliant songwriting of Lennon and McCartney, masters of unique-sounding, groundbreaking pop music, defined the Beatles — arguably the most successful band of all time. However, it was Open G tuning that came to define the sound of the Rolling Stones, cementing their status as perhaps the world’s most enduring and successful rock and roll band. This distinction underscores how different paths of musical innovation can lead to equally monumental legacies. The Beatles revolutionized pop music through their songwriting prowess, while the Rolling Stones found their unique voice through Richards’ innovative approach to guitar playing.

In the grand tapestry of music both bands stand as giants, each leaving an indelible mark on music. The Beatles showed the world the heights that pop songwriting could reach, while the Rolling Stones, through Keith Richards’ Open G revelation/revolution, demonstrated how a seemingly simple change in approach could create a wholly new and enduring sound. Together, they exemplify the diverse ways in which musicians can push the boundaries of their art, inspiring generations of artists to find their own unique voices and innovations.

Takeaway: If you want to change the world, tune your guitar like a banjo.


An SEC for Politics Could Rebuild Trust in Democracy: Distinguishing FACTS, FICTS, FUCTS and OUTRIGHT LIES

By Craig Hatkoff and Irwin Kula

August 17, 2024

GLOSSARY:
FACT- Factually Accurate Claim of Truth

FICT- Factually Inaccurate Claim of Truth

FUCT- Factually Unverifiable Claim of Truth

OUTRIGHT LIE- Factually Ridiculous Claim of Truth

In the United States, political lies have become a central feature of public life, and trust in democratic institutions has plunged to historic lows. The cascade of misinformation—spanning everything from fabricated election results to distorted policy claims—has left citizens questioning the integrity of our political system. For years, this flood of falsehoods has gone largely unchecked, eroding the foundations of democracy itself. It’s time to draw a line. Just as we regulate financial markets to protect consumers and maintain trust in the economy, we should implement an SEC-style body to penalize outright factual lies in politics. This approach, focused narrowly on verifiable factual claims, offers a path forward for restoring trust in our political process.

The Parallel Between Finance and Politics

The financial world offers a compelling precedent for how lies can be regulated without stifling the freedom to debate, predict, or speculate. The Securities and Exchange Commission (SEC) plays a crucial role in ensuring that companies do not deceive investors. Companies are allowed to make forward-looking statements—projections, forecasts, and opinions—but they are required to be truthful about established facts. Misleading investors about current or past realities is met with strict penalties. This model provides a blueprint for how we can preserve free speech while introducing accountability for political lies.

A Focus on Factual Accuracy, Not Opinions

Critics often argue that regulating political speech would violate the First Amendment and suppress free expression. However, this concern conflates two very different kinds of speech: opinions and factual claims. Political debate thrives on the exchange of opinions, interpretations, and predictions. Disagreements about tax policy, healthcare reform, or foreign relations are the lifeblood of a democratic society. But the deliberate spread of factual falsehoods—such as claiming millions of illegal votes were cast or that an opponent was born in another country—is not a legitimate part of this debate. Such lies are not interpretations or predictions; they are assertions about reality that can be verified and should be held to a standard of truth.

Under the proposed model, politicians and candidates would remain free to argue for or against policies, project the impact of their proposals, and critique their opponents’ views. What would be regulated is only factual claims that are objectively false and demonstrably harmful. For instance, a candidate could say, “I believe my plan will reduce unemployment,” even if that prediction is debatable. But if they claim, “Unemployment has doubled under my opponent’s leadership” when it has in fact decreased, they would be subject to penalties for spreading a falsehood about a verifiable fact.

Rebuilding Trust Through Accountability

The consequences of unchecked political lying have been catastrophic. Public trust in government has plummeted to record lows. According to Pew Research, only 20% of Americans trust the government to do what is right most of the time, a sharp decline from previous decades. Meanwhile, trust in election results, the media, and even basic facts has eroded, leading to increased polarization and violent confrontations. The January 6th Capitol riot was fueled by persistent lies about the 2020 election, demonstrating that the costs of allowing misinformation to run rampant are no longer theoretical—they are manifest in our streets.

An SEC-like body for politics could begin the long process of restoring trust. By establishing a clear distinction between protected political speech and deceptive factual claims, such a body would send a message that truth still matters in public discourse. Penalties for repeated falsehoods—ranging from fines to public corrections—would serve as a deterrent for those who currently see no downside in spreading misinformation.

How It Would Work: Lessons from the SEC

The SEC’s model for regulating forward-looking statements offers a clear framework for how a political equivalent might operate. The key lies in differentiating between speculative, predictive statements and those grounded in current or past facts. For instance, if a politician claims that their tax plan “will create millions of jobs,” this falls under the realm of prediction and would not be subject to regulation. However, if they claim that their opponent’s tax policy has “led to the highest unemployment in 50 years” when this is factually untrue, they would face consequences.

The fact-checking process would need to be rigorous, transparent, and impartial. An independent body—composed of experts in law, media, and political science—could be tasked with verifying claims. This body would not pass judgment on political opinions or policy preferences but would focus narrowly on statements that are factually incorrect and easily verifiable. For instance, they could investigate claims about vote counts, economic data, or the content of specific laws and policies.

Once a false claim is identified, the politician or campaign responsible would be required to issue a correction or face penalties. Just as companies must file corrected earnings reports if they mislead investors, politicians would need to retract false statements publicly. Repeated offenses could lead to escalating consequences, such as fines, limits on campaign funding, or even temporary restrictions on public communication channels.
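To make the glossary's taxonomy and the correct-or-penalize flow concrete, here is a minimal illustrative sketch (ours alone, not part of any existing regulatory framework; the FACT/FICT/FUCT/OUTRIGHT LIE labels come from the glossary above, while the claim fields, the absurdity threshold, and the penalty ladder are hypothetical assumptions):

```python
# Illustrative sketch only: category labels come from the glossary above;
# the Claim fields, threshold, and penalty ladder are hypothetical assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    FACT = "Factually Accurate Claim of Truth"
    FICT = "Factually Inaccurate Claim of Truth"
    FUCT = "Factually Unverifiable Claim of Truth"
    OUTRIGHT_LIE = "Factually Ridiculous Claim of Truth"

@dataclass
class Claim:
    speaker: str
    text: str
    forward_looking: bool        # predictions and opinions are out of scope
    verifiable: bool             # can the claim be checked against the record?
    matches_record: bool = True  # does it agree with the verified record?
    absurdity: float = 0.0       # 0..1, how far it departs from any plausible reading

def adjudicate(claim: Claim) -> Optional[Verdict]:
    """Classify a claim; return None for unregulated forward-looking statements."""
    if claim.forward_looking:
        return None                            # "my plan will create jobs" is not regulated
    if not claim.verifiable:
        return Verdict.FUCT
    if claim.matches_record:
        return Verdict.FACT
    return Verdict.OUTRIGHT_LIE if claim.absurdity > 0.8 else Verdict.FICT

# A toy escalating-penalty ladder, mirroring the correction-then-penalty idea above.
PENALTIES = ["public correction", "fine", "campaign-funding limits"]
offenses: dict = {}

def penalize(claim: Claim, verdict: Optional[Verdict]) -> str:
    if verdict not in (Verdict.FICT, Verdict.OUTRIGHT_LIE):
        return "no action"
    count = offenses.get(claim.speaker, 0)
    offenses[claim.speaker] = count + 1
    return PENALTIES[min(count, len(PENALTIES) - 1)]

claim = Claim("Candidate X", "Unemployment doubled under my opponent.",
              forward_looking=False, verifiable=True, matches_record=False)
print(adjudicate(claim), "->", penalize(claim, adjudicate(claim)))
```

In this toy version, the first false factual claim earns a public correction and repeat offenses escalate, while anything forward-looking or genuinely unverifiable is left to ordinary political debate.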

The Challenges and Criticisms

No system is without challenges, and this proposal is no exception. One major concern is the potential for bias. Critics might argue that any regulatory body tasked with policing political speech could be influenced by partisan agendas. To mitigate this risk, the body would need to operate with strict safeguards against bias, including diverse representation from across the political spectrum and transparent decision-making processes. Additionally, a robust appeals process would ensure that decisions can be contested, preventing one-sided enforcement.

Another concern is the chilling effect on political speech. Critics may worry that politicians will become overly cautious, avoiding certain claims for fear of penalties. However, the focus on factual accuracy—not opinions or rhetorical flourishes—means that political debate and persuasion would remain vibrant. Politicians would still have ample room to argue their positions, project future outcomes, and critique their opponents. The regulation would only target false factual claims that can be clearly disproven.

Finally, some argue that voters should be the ultimate arbiters of truth in a democracy. While it’s true that informed citizens are vital to a functioning democracy, the sheer volume of misinformation today makes it difficult for voters to discern fact from fiction. Just as consumers rely on the SEC to protect them from financial fraud, voters could benefit from a system that curbs deliberate deception in the political arena. Democracy works best when citizens are making choices based on accurate information, not lies.

A Realistic Path Forward

Implementing this model would require careful planning and broad consensus. The first step would be a pilot program that focuses on the most blatant and harmful lies. For example, false claims about election integrity, public health, or major economic indicators could be prioritized. The goal would be to demonstrate the effectiveness of the system in curbing misinformation without stifling legitimate political debate.

The next step would be establishing the legal framework. While regulating political speech has historically been met with judicial resistance, focusing narrowly on verifiable factual claims could overcome some of these hurdles. The key would be to clearly define what constitutes a factual falsehood versus an opinion or prediction, drawing on existing legal standards for defamation, fraud, and truth-in-advertising laws.

Public education would also be crucial. Citizens need to understand that this isn’t about policing opinions or stifling dissent—it’s about holding politicians accountable for lies that have real-world consequences. Just as we accept that financial markets require oversight to function fairly, we must recognize that political markets need similar protections against fraud and deception.

Conclusion: Reclaiming Truth in Politics

The unchecked rise of political lies has brought us to a tipping point. Democracy cannot survive if truth is no longer a shared value. An SEC for politics, focused narrowly on penalizing false factual claims, offers a way to rebuild trust without compromising free speech. By holding politicians accountable for the lies they tell, we can begin to restore integrity to our political process and ensure that voters are equipped to make informed decisions based on truth, not deception.

The time for handwringing is over. We need to recognize that democracy is at stake and take concrete steps to protect it. Regulating lies in politics isn’t a threat to free speech—it’s a necessary defense of truth, trust, and democracy itself.


Disrupting Hell in Louisiana: The Ten Commandments Revisited

The following is an excerpt from an Off-White Paper published on April 27, 2014. It was the final piece penned with our guiding light, the late Professor Clayton Christensen. This excerpt is part of a larger discussion of accountability that focuses on the role of religion as a set of accountability technologies. The Ten Commandments are one of the great moral technologies, probably covering 80% of the virtues needed for a just society– whether you are a believer or a non-believer in a higher power.

Ironically, the Ten Commandments are also a great example of a disruptive religious innovation. They took the 600-plus laws (the mitzvot) of the Torah and reduced them down to the Big Ten. Simpler and more accessible than the rest of the Old Testament, this innovation scaled magnificently, particularly after Cecil B. De Mille came up with arguably the greatest marketing campaign ever– just ask Charlton Heston and Yul Brynner.

…There seems to be a more vexing problem that necessarily precedes the issue of accountability: do people clearly understand if or when they are doing something wrong?  In simpler times a good place to start learning how to behave was the Ten Commandments.  It seems we’ve gotten a little rusty on the content of those ten imperatives and prohibitions. While 62% of Americans know that pickles are one of the ingredients of a Big Mac, less than 60% know that “Thou shalt not kill” is one of the Ten Commandments.  (Then again, ten or so percent of the adults surveyed also believe that Joan of Arc was Noah’s wife.)  In fact, the majority of those surveyed could identify the seven individual ingredients in a Big Mac: two all-beef patties (80%), special sauce (66%), lettuce (76%), cheese (60%), pickles (62%), onions (54%) and sesame seed bun (75%).   “Thou shalt not kill” only beats out onions.  It’s enough to make you cry. “Two all-beef patties, special sauce, lettuce, cheese, pickles, and onions on a sesame seed bun.” In one Big Mac promotion, the company gave customers a free soda if they could repeat the jingle in less than 4 seconds.

"Two all-beef patties, special sauce, lettuce, cheese, pickles, onions on a sesame seed bun!"

A catchy jingle undoubtedly made the Big Mac part of pop culture. Who could forget the 1975 launch of the McDonald's Big Mac jingle?  That jingle-laden effort takes second place for "most clever marketing campaign" in advertising history. Yet it was the unsurpassed marketing genius of film producer extraordinaire Cecil B. De Mille that surely takes the cake.  De Mille gave the Ten Commandments a special place in contemporary American culture decades before they became fodder for the "culture wars" over separation of church and state.

In perhaps the greatest marketing coup in American history, De Mille ingeniously tapped into the angst of Judge E. J. Ruegemer, a Minnesota juvenile court judge who was deeply concerned with the moral foundation of the younger generation. Starting in the 1940s, Ruegemer, along with the Fraternal Order of Eagles, had initiated a program to post the Ten Commandments in courtrooms and classrooms across the country as a non-sectarian “code of conduct” for troubled youth. But De Mille took us to the next level. In 1956 De Mille seized the opportunity to plug his costly epic film, The Ten Commandments. He proposed to Ruegemer that they collectively erect granite monuments of the actual Ten Commandments in the public squares of 150 cities across the nation. The unveiling of each monument was a press event typically featuring one of the film’s stars. “OMG!!! Isn’t that Yul Brynner?” or “that’s Charlton Heston!” Absolutely brilliant! Things remained copacetic until the 1970s, as this public expression of a code of conduct was apparently non-controversial. Then all hell broke loose, leading to a firestorm over the separation of church and state that still burns to this day.

Charlton Heston (aka Moses) with Cecil B. De Mille at the unveiling of a Ten Commandments monument. Film P&A budgets have been growing ever since.

“Ten Commandments” as an expression didn’t even exist until the middle of the 16th century, when it first appeared in the Geneva Bible, which preceded by half a century the King James Bible, first published in 1611. The Ten Commandments have the attributes of a disruptive innovation. They take the hundreds of laws in the Bible and boil them down to ten fundamental pearls that are more accessible and usable for the masses. Obviously not covering all the bases, they were clearly good enough to get the job done: to create a simple, accessible bedrock for moral and ethical behavior. Serious biblical scholars view the vastly oversimplified code as a synecdoche– a part standing in for the whole– for the much more complex set of laws and regulations.

With or without the Ten Commandments, it seems that most people have internalized “Thou shalt not kill.” Cognitive scientist Steven Pinker points out that, as recently as 10,000 years ago, a hunter-gatherer had as much as a 60% probability of being killed by another hunter-gatherer. At some point in history, the prohibition against murder became a conscious ethical innovation. By the eighteenth century BCE the Code of Hammurabi, one of our earliest surviving codes of law, prohibited murder, yet the prescribed punishment was based on the social status of the victim. Over a period of several centuries, our moral horizon expanded once again. The biblical commandment “Thou shalt not kill” evolved to eliminate any class distinction— clearly an upgrade.

Yul Brynner (aka Pharaoh) and Cecil B. De Mille unveil another slab of granite with the Ten Commandments at a press conference.

Ethical innovations come in fits and starts. Yet over the long run the ethical and moral horizons of civilization widen. Today, according to Pinker, murder rates are the lowest in history. In the U.S. there were about 13,000 murders in 2010. If our math is correct, that represents roughly four-thousandths of one percent of the population– about one person in 25,000 actually kills someone (assuming no multiple murders and excluding suicides). While the numbers are less clear on adultery due to methodological challenges and interpretations (single incident versus multiple incidents, it takes two to tango, etc., etc., etc.), lowball estimates are that 20-25% of adults are guilty. Assuming 150 million adults, some tens of millions of Americans have committed adultery, give or take a few. It would seem not all commandments are created equal– or at least obeyed equally.

Morality has become more complicated. In simple situations with a “one-to-one” correlation — one person murders another, or someone steals a neighbor’s horse, or someone lies in court — we know what they have done is wrong and we hold them accountable. But we live in a society comprised of perpetual power imbalances, multiple stakeholders, ambiguous situations, and complex interdependent systems. Is releasing classified documents in response to questionable or inappropriate government surveillance heroic or traitorous? In the July 10, 2013, Quinnipiac poll, 55% of Americans considered Edward Snowden a whistleblower/hero while 34% believed he was a traitor– a 20-point shift in favor of Snowden since the earliest polls. Interestingly, normally polarized Americans are uniting against the unified view of the nation’s political establishment. The situation has become even murkier now that Snowden’s activities have been indirectly validated through the Pulitzer Prize. Determining what is right and wrong gets pretty dicey, especially when we don’t know what we don’t know. What we do know is: a hell of a lot of Americans really don’t want Big Brother reading their emails. But what are they afraid of?

Does a banker taking advantage of asymmetric information or the sheer complexity of regulations, accounting and tax codes– let alone multi-billion-dollar opaque securities– constitute unconscionable behavior that deserves punishment? Who is responsible for lung cancer deaths– the smoker, the tobacco company, the tobacco worker or the FDA? Who is accountable for obesity-related deaths– McDonald’s, corn subsidies, the overweight consumer or all of the above? Who is accountable for unsafe working conditions in the developing world– Apple, the Gap, Nike, Walmart, the local politicians, the builder– or the consumer who benefits from cheaper prices? Or are these just acceptable “costs of doing business?”

Perhaps we are undergoing the atomization of morality. Has morality become subject to a “distributed network effect” such that every action can find a morally acceptable pathway of justification? When systems, stakeholders, and situations become complex– replete with interdependencies, conditionality and incalculable unintended consequences– moral responsibility gets spread like peanut butter across a chain of activities, persons, corporations, partnerships, not-for-profit organizations, and government agencies. Complicating things further, every agent has an obligation of loyalty to its principal. Peanut butter consists of a whole lot of peanuts, so which peanut– or employee– do we hold accountable for screwing up a batch of peanut butter? In like manner, one bad apple spoils the bunch, but it is hard to know which one once it becomes applesauce. How should we parse accountability in a complex chain of activities where this atomization of morality allows every individual to avoid taking responsibility for the whole? Oftentimes we have only an intuition that something is amiss or doesn’t feel right, even if we can’t exactly put our finger on it. Executive compensation feels like stealing; the selling of tobacco products feels like killing, as does serving unhealthy food to the masses. Are these moral intuitions, like Pope Francis’s about wasting food, actually expansions of our moral horizons or just plain silly?

A new awareness of what constitutes moral transgression must come before accountability can be effectively scaled to help ensure enduring peace and prosperity. This new moral awareness cannot arise without a great deal of civil discourse and, yes, even uncivil discourse. Slavery was okay until it wasn’t. Gay marriage was not okay until it was. Smoking in public places was permitted until it wasn’t. Corporate polluters were acceptable until they weren’t. Changing the moral baseline is always painful. Issues of accountability at scale are somewhat premature until we can collectively agree on what constitutes right and wrong in an extended or networked chain of “values.” It’s hard to build consensus when everything is so damn complicated, with multiple stakeholders, complex systems, and ambiguous situations. Until there is an inflection point– that quantum moment– where enough people are energized by the intuitions of early moral adopters to see that something is terribly wrong and to take action, there can be no widespread accountability.

All this calls for a new breed of moral innovators. It will be these early adopters– gifted with unusual charisma, empathy, and humility– who expand our moral horizon. They might even have to resort to civil disobedience. They will invariably face criticism and grave risks– financial, reputational, and even bodily harm– as they excite a new moral majority, many of whom will not believe in Hell but will embrace the possibility of a better “Heaven” on earth.


Stray Cat Strut at the Feral Reserve

Herd on the Cliff


Is the Federal Reserve trying to thread Cleopatra’s needle with an overcooked piece of spaghetti while simultaneously herding a bunch of feral cats gathered at the edge of a cliff? Best of intentions aside, the Fed’s semi-coherent policy might save half the cats, but the other half will likely fall into the abyss. The question is: which half will be saved?

Regarding semi-coherence, the Fed might learn a thing or two from the old adage most frequently attributed to retail baron John Wanamaker about the advertising industry:

“Half the money I spend on advertising is completely wasted; the trouble is, I don’t know which half.”

Google’s pay-per-click solved that problem for the advertising industry. Maybe the Fed should figure out a pay-per-click model for monetary policy.

Old business models die hard. When it comes to central banking, the foundational policy wisdom about financial crises revolves around two ideological constructs: “too big to fail” and “lender of last resort.” But book-ended by the 2008 Great Financial Crisis and the Covidalypse, the Fed has been exploring and tinkering, lest I say experimenting, with its business model. ZIRP (zero interest rate policy), NIRP (negative interest rate policy), QE1 (Quantitative Easing 1), QE2 (Quantitative Easing 2), TARP (Troubled Asset Relief Program), TALF (Term Asset-Backed Securities Loan Facility) and so forth resulted in low inflation and free money that begets sloppy (undisciplined) capital, setting the stage for asset bubbles across almost all asset classes and sectors. Ultimately all Frankenbubbles burst: the “over-lending” comes to a halt, the regulators, auditors and SEC pile on, and like magic a Minsky Moment is upon us.

According to ChatGPT 4.0

“A Minsky Moment is a term used in economics to describe a sudden, sharp collapse of asset prices after a period of rapid growth or excessive speculation. The concept is named after American economist Hyman Minsky, who extensively studied financial crises and developed the Financial Instability Hypothesis.

According to Minsky’s hypothesis, there are three stages in the financial cycle: hedge finance, speculative finance, and Ponzi finance. During the hedge finance stage, borrowers can repay both the principal and interest on their loans. In the speculative finance stage, borrowers can only afford to pay the interest, relying on the continued appreciation of asset prices to refinance their debt. In the Ponzi finance stage, borrowers cannot even pay the interest on their loans and rely solely on further appreciation of asset prices to service their debt.

The Minsky Moment occurs when asset prices stop rising and the Ponzi finance stage is no longer sustainable, leading to a sudden and severe market collapse. As investors realize that they cannot rely on rising asset prices to pay off their debt, they rush to sell their assets, causing prices to plummet further. This rapid decline can trigger a financial crisis, as was the case in the 2008 global financial crisis, which had elements of a Minsky Moment.”

-ChatGPT 4.0 (note this definition was added after the original December 2022 publication date)
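To make the three financing stages concrete, here is a minimal illustrative sketch of our own (the function name and the numbers are hypothetical, not from Minsky or from ChatGPT); the thresholds simply restate the definitions quoted above:

```python
# Illustrative sketch of Minsky's three financing stages, following the
# definitions quoted above. All names and figures are hypothetical.

def minsky_stage(cash_flow: float, interest_due: float, principal_due: float) -> str:
    """Classify a borrower by its ability to service debt out of cash flow."""
    if cash_flow >= interest_due + principal_due:
        return "hedge finance"        # can pay interest and principal
    if cash_flow >= interest_due:
        return "speculative finance"  # can pay interest only; must roll over principal
    return "Ponzi finance"            # cannot even pay interest; relies on asset appreciation

if __name__ == "__main__":
    # A borrower with $80 of cash flow, $50 of interest due, $60 of principal due:
    print(minsky_stage(80, 50, 60))   # -> "speculative finance"
```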

There is a famous anecdote about going bankrupt. When asked how he went bankrupt, Joe responded, “first slowly, then all at once.”

As we enter the Minsky Moment phase, another financial phenomenon starts to take hold: fair value accounting moves to center stage, further exacerbating an already unstable situation. Trying to establish permanently impaired values in a collapsing market with few to no willing buyers or sellers is a fool’s errand that makes matters worse. But that’s how the system works… or is supposed to work, to re-establish credibility and trust in the numbers. Contagion risk, loss of confidence, market confusion and the like start to foment an enormous self-fulfilling prophecy. The Fed starts trying to put the toothpaste back in the tube, but sadly it’s too late, and extraordinary, extemporaneous “solutions” are thrown into the mix without the wisdom, foresight, or experience to understand the inherent nature of markets and all the unintended consequences. Knee-jerk policies tend to produce temporary fixes instead of addressing the long-term structural problems, further stressing the system. Or said more succinctly, we rush to come up with seemingly plausible solutions– often a solution to the wrong problem.

Horror vacui (“Nature abhors a vacuum”)

Aristotle

Another lesson for the Fed comes from the Oracle of Omaha himself, Warren Buffett, who cautions, “never try to sell a poodle to someone looking to buy a beagle.” It seems the market only wants to buy a beagle, and Powell & Co. only have poodles for sale. Just as nature abhors a vacuum, financial markets abhor uncertainty. Given the unprecedented confluence of global challenges, the Fed’s response seems about as likely to succeed as making a 7-10 split (professional bowlers have only a 0.7% probability of making that spare). The Fed’s lack of coherence has left the markets uncertain what they are even uncertain about… other than everything. But even worse is the loss of confidence in the Fed (sorry, Chairman Powell) and the Treasury Department (sorry, Secretary Yellen). Loss of confidence undermines trust, and all the banking system has to run on is trust. That era really began on August 15, 1971, aka the “Nixon Shock,” when the dollar was decoupled from gold, and the printing presses have been rolling out dollars for more than 50 years since. Floating currencies mean that every time any rate or price changes, the entire matrix of global assets gets repriced.

The world today doesn’t make any sense so why should I paint pictures that do?

Pablo Picasso

The dilemma-saddled Fed Chairman Jay Powell finds himself in a bit of a quandary. He’s trying to sell the market a Picasso (a poodle, or perhaps a labradoodle), when what it really wants is a beautiful, easy-to-understand Rembrandt or a Rubens or even a Monet (a beagle). Representational art was displaced by abstract art with the advent of the camera– George Eastman’s Kodak camera. But markets hate abstraction even more than they hate a vacuum. Yet central banking only works well if history rhymes, let alone repeats itself. Controlled ambiguity is necessary, lest we see the death of the moral hazard needed to keep the squidlets in line (good luck with that).

It wouldn’t be too difficult to make the case: “Half of Federal Reserve monetary policy is completely wasted; unfortunately we don’t know which half.” Not only is it wasted, it’s downright dangerous, bordering on an existential threat to civilization. There’s a 50% chance (a completely non-scientific conjecture) that 10 or 20 years from now people will look back at the period 2008-2023 and equate Federal Reserve monetary policy to bloodletting as a state-of-the-art medical procedure. (The same might also be said about our country’s fiscal policy.)

The idea that one man’s weathervane decision-making ripples through the financial markets at the speed of light should call into question the wisdom of building a global financial system upon the shakiest of foundations. The three biggest risks confronting central bankers are: contagion, contagion, and contagion.

Having lived through the Fed Chairman regimes of Arthur Burns, short-termer Bill Miller, Paul “Tough Love” Volcker, Alan “the Put” Greenspan, Ben “the Put 2.0” Bernanke and Janet Yellen, it is abundantly clear that Federal Reserve policy is anything but a science. A word of advice: the words “Federal Reserve” and “experiment” should never appear in the same sentence.

Before 2008, the regularly occurring financial crises of the ’80s and ’90s were mere annoyances, dealt with deftly by an understanding of how to stabilize and clear the market over 24-36 months, plus or minus. The standard protocol is to project confidence, stand ready to flood the market with enough liquidity, and subtly remind people that Citibank is never going under and depositors will get 100 cents on the dollar. And don’t forget about moral hazard. Remember Enron? Even though a major financial crisis was averted, Arthur Andersen was dragged into the courtyard and very publicly got its brains blown out for its egregiously bad behavior. Accountability for the people whose business was accountability. People even went to jail! Enron’s Ken Lay died before his sentencing, but Jeff Skilling got to practice up on his golf while spending a dozen years in the slammer.

Up until the Great Financial Crisis, Federal Reserve policy had a predictable pattern and rhythm for the myriad of crises since the early ’90s. The crises came, they were swiftly and deftly managed (relatively speaking), confidence was restored, and then like magic, in 24-36 months, business was pretty much back to normal. The yield curve configuration gave the Fed plenty of room to maneuver. Some lessons were learned, others not so much, and some not at all. But by 2008 the steady-as-she-goes Fed realized it was going to need a bigger boat… a much bigger boat. Controlled ambiguity, avoiding moral hazard, accountability and unlimited liquidity at the quick and ready could usually stabilize a market crisis in 24-36 months– give or take. Just enough time for Blackstone, Apollo, Carlyle and Goldman “the Vampire Squid” Sachs to make billions. By 2010 Goldman Sachs no longer had clients, they only had counterparties, according to the Squid Master Lloyd Blankfein.

THOUGHT EXPERIMENT

Think of this as a thought experiment for dealing with the unprecedented GFC in 2008. WHAT IF….

  1. …WHAT IF the Fed had simply bought in the $300 billion or so of subprime mortgages, ring-fenced them in a bad bank a la Resolution Trust Corp? Instead we let $300 billion of subprime NINJA loans (no income, no job, no assets) morph into a $100 trillion global meltdown. Everything is interconnected. Contagion, contagion, contagion. As Sam Zell likes to say, “when the tide goes down we get to see who’s wearing a bathing suit.”
  2. …WHAT IF the Fed and the U.S. Treasury had jointly announced to the world that the financial markets were no longer functioning properly and that fair value accounting would therefore be adjusted? Fair value accounting by definition implies willing buyers and willing sellers; if the markets are not functioning properly, that fundamental tenet goes out the window, and fair value marks distort everything in a market experiencing a death spiral. Every asset is connected to every other asset– whether correlative or non-correlative. What brought down the house of cards in 2008 wasn’t really the subprime detritus; that was only a $300 billion crisis and easy enough to manage. The real culprits? Those who made dubious margin calls at the speed of light. The margin calls were made in the lenders’ “sole discretion” with no recourse, forcing all whose positions were financed with repo (repurchase agreements) to top up their collateral (within 2 hours) or forfeit their collateral altogether. Hardly a willing buyer and a willing seller. As all securities are interconnected, securities lenders were grabbing collateral to cover their repo loans like drunken sailors at nickel beer night. Ironically, for the vast majority of the market the securities, excluding the subprime crapola, were performing, as were the underlying assets that constituted the array of rated and unrated securities. There were many instances where lenders made egregious margin calls simply because they could, since most repo agreements at the time gave the lender sole discretion in marking to market. If you disagreed with the lender’s marks, tough bananas.
  3. …WHAT IF the Fed and Treasury had simply set up a post facto tribunal (think of South Africa’s Truth and Reconciliation panels) to arbitrate bad-faith or inappropriate margin calls and provided a clawback mechanism? Perhaps a modicum of accountability might have moderated the downward spiral as the markets seized up completely. And if it were determined that the margin calls were inappropriate, painful penalties could be assessed against the transgressor. A little accountability never hurt anyone. Keep in mind antitrust damages are subject to trebling, lending true accountability for bad behavior. Remember those rating agency text messages? Remind me again– who went to jail?
  4. …WHAT IF the Fed had focused on stabilizing spreads with global master credit spread swap agreements rather than flooding the market with unlimited liquidity (whether M1 or M2) that created massive asset bubbles and zero or even negative interest rates of unprecedented proportions? The Fed was simultaneously manipulating the dead-cat yield curve with artificially low interest rates and playing chicken with the market. Remember, at some point those chickens will come home to roost. They always do.

Note to self: in dysfunctional markets, fair value accounting creates epic anomalies in the structure of credit spreads across the entire debt spectrum from AAA to BB-rated securities. When BB spreads on mortgage-backed securities instantaneously widen from 275 basis points over Treasuries to 600 over, then 1200 over, then “no bid,” what the hell do you think is going to happen to valuations? Keep in mind that, other than the subprime “crap,” the securities and the assets underlying the securities were by and large performing.
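To see the mechanics, here is a minimal sketch of our own– a hypothetical 5-year bullet bond priced off an assumed 4% Treasury yield– showing how spread widening alone crushes mark-to-market values even when the collateral keeps paying:

```python
# Illustrative sketch only: a hypothetical 5-year bullet bond, re-marked as its
# credit spread widens. The cash flows never change; only the discount rate does.

def bond_price(coupon_rate, years, treasury_yield, spread_bps, face=100.0):
    """Price a bond with annual coupons, discounted at Treasury yield + spread."""
    y = treasury_yield + spread_bps / 10_000.0
    coupon = coupon_rate * face
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    pv_face = face / (1 + y) ** years
    return pv_coupons + pv_face

for spread in (275, 600, 1200):
    print(spread, round(bond_price(0.0675, 5, 0.04, spread), 2))
# 275 bps  -> ~100 (par)
# 600 bps  -> ~88
# 1200 bps -> ~70: the same performing collateral, marked down ~30% by the discount rate alone.
```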

When moral hazard was thrown to the wolves, Wall Street learned that too big to fail coupled with the Fed “put” turned monetary policy into a game of cat-and-mouse flipping coins: heads I win, tails you lose. Asymmetric risk is the breakfast of champions for Masters of the Universe.

Geithner and Paulson’s QE1, QE2, TARP, TALF etc. staved off a short-term crisis by kicking the can down the road, inflating the Fed balance sheet to a whopping $2 trillion (while the national debt, by the way, has since ballooned to $31 trillion due to deficits and the Covidalypse). Rahm Emanuel’s crisis had gone to waste, and some 15 years later we are still trying to find the can. Let’s be clear: the can is still there. As Powell tries to herd all the stray cats who expect him to put a bowl of milk out on the front porch, it seems they don’t believe him even when he says he’s serious. The Fed’s loss of credibility (i.e. trust), use of “uncontrolled” ambiguity and hard-to-read messaging could lead to a Roubini-esque self-fulfilling prophecy of biblical proportions. Clearly Powell has yet to master the art of Greenspeak; Greenspan at least understood and embraced controlled ambiguity.

The only way to restore functioning markets is to restore trust and credibility… everything else is an experiment. Oh yeah: FTX and SBF should be a clarion warning to politicians and regulators– don’t try to regulate something you don’t understand. Mazars resigning from auditing crypto (on the heels of resigning as Donald Trump’s auditors) should be a massive shot across the bow: if you can’t get auditors to audit proof of reserves, proof of assets or proof of liabilities, no institution in its right mind will touch this stuff. For those of you who have bothered to read, or pretend to understand, the pseudonymous Satoshi Nakamoto’s Bitcoin white paper, one thing is clear: the fundamental premise of Bitcoin is that trust can be created algorithmically between untrustworthy parties, without a trusted intermediary, to avoid what’s called the double-spend problem. Trying to create trust between inherently untrustworthy parties is a really bad idea. Or as Alan Greenspan testified in Congress regarding the GFC, “there was a flaw in my ideology.”

One former lender of last resort, J.P. Morgan himself, said it best at the Pujo hearings in 1912:

The first thing…is character … before money or anything else. Money cannot buy it.… A man I do not trust could not get money from me on all the bonds in Christendom. I think that is the fundamental basis of business.

Maybe it’s time to get back to fundamentals. So when will things get back to normal and stabilize the global financial system? When trust trades at par.

ps. How’s this for a novel idea: if you want a soft landing bring in Sully Sullenberger to run the Fed.

NOTE TO READER: This article was published in December, 2022 before the collapse of SVB, First Republic and Signature Bank.


Modern Sardine Management Revisited

In Memory of Sam Zell

Sepulcrum Saltator (translation: The Grave Dancer)

In the 1980s, I heard the lesson first-hand from the legendary Sam Zell, the Chicago-based billionaire and real estate mogul. Sam was one of many mentors and partners over the past forty years. Along with John Klopp, now head of Global Real Assets for Morgan Stanley, Sam and I co-founded Capital Trust in the 1990s; it was subsequently acquired, becoming Blackstone Mortgage Trust (BXMT).

Sam had written an article in the Real Estate Review in 1986 called “Modern Sardine Management.” I am sure there are many versions of the trading sardine story, but Sam told it this way in the article:

Mr. A had a can of sardines. He sold them to Mr. B for $1. Mr. B sold them to Mr. C for $2. Mr. C sold them to Mr. D for $3. Mr. D opened them and found they were rotten. He complained to Mr. C that he wanted his money back. Mr. C said, “No, you don’t understand. There are eating sardines and there are trading sardines. These were trading sardines.”

Legendary value investor Seth Klarman of Baupost also uses the trading sardines metaphor in his very rare, out-of-print 1991 book Margin of Safety. While the tellings differ slightly, the message of both versions about speculation is clear: there is price and then there is value. Sometimes the two are the same, but more often than many people think, they are not.

One need only look back at the 2008 financial crisis to realize that when markets are gripped by financial crisis– which occurs with a surprising degree of regularity, every 5-10 years or so– the value and price of assets become disconnected. A dozen years ago, the Great Recession was initially triggered by real economic losses from poorly underwritten or fraudulent securitized pools of subprime residential mortgages. Who can forget the NINJA loan (no income, no job or assets)? As the financial markets seized up, the prices of mortgage securities collapsed. Severe misalignment– among brokers, mortgage bankers, investment banks, the rating agencies, regulators and the borrowers– was the real culprit.

But the contagion quickly spread from $300 billion in non-performing residential subprime mortgage loans to the broader commercial real estate CMBS market, whose underlying assets were frequently 100% performing. Notwithstanding that the loans continued to perform throughout the crisis, prices for CMBS also collapsed as liquidity dried up and margin calls were made as the securities were marked to market. While the value of the underlying collateral was fine, the prices of the securities experienced enormous distress as everyone was heading for the hills and/or hiding under their beds. The spreads on BB securities that had traded at 275-325 bp over suddenly shot up to 1200 bp. Some securities became “no bid” as all trading came to a halt. Extraordinary measures by the Fed and the U.S. Treasury (TARP, TALF, QE1, QE2, etc.) injected unprecedented liquidity into the market and saved the day– or perhaps just kicked the can down the road. When markets cease to function, all hell breaks loose. History has shown that with sufficient liquidity and the lender of last resort standing by, market meltdowns can usually be stabilized in 18-24 months.

For markets to function in an orderly manner, a degree of speculative investment is needed. But the knowledge and expertise necessary to speculate successfully on a regular basis is best left to the professionals. That doesn’t mean you should never invest based on an intuition or gut feeling that something looks too cheap to pass up. But you should have the discipline to take profits and losses if the investment is in your trading portfolio. And learn to tell the difference: when to invest for the long term and when to trade for the short term.

The moral of the story is that a trader knows the price of everything and, all too often, the value of nothing; one needs to learn the difference.

Caviar emptor… or is it caveat emperor? Perhaps both?

Will miss you Sam. Thanks for the lessons, your friendship and one helluva ride!


Innovator’s Guide to the Universe: An Introduction

What is innovation? What is invention?

There is no official or universally accepted definition of innovation, but a general consensus might be “making changes to something that already exists or is established.” Innovation, often confused and/or conflated with invention, involves rearranging or combining existing products, services or ideas in novel or unusual ways.

Invention, on the other hand, entails discovering, or stumbling upon, something new altogether. Cavemen undoubtedly first saw fire from natural events– perhaps lightning or wildfires– but the harnessing of fire itself was clearly a human invention. Learning to start a fire by striking two stones, spewing sparks onto leaves, and transferring the embers to kindling and then to logs could be considered one of the most important inventions– if not the most important– in the history of civilization. So, while the mastering of fire itself can be considered an invention, using fire to cook roots and meat, heat the cave, light the path, burn someone at the stake or inspire rituals and stories at night might be thought of as innovations: using something already in existence in a new way.

In modern times, inventions tend to be culturally neutral, dissociated from people’s identities, while innovations have increasingly become shaped by human world views, values and belief systems– the building blocks of culture. In fact, there is a school of thought that suggests culture, rather than technology, is increasingly becoming the dominant variable in the successful diffusion of innovation.

The distinction between an invention and an innovation can frequently become blurred, and any effort to definitively characterize something as invention or innovation might best be left to Talmudic scholars. A more useful approach is to look at where something falls on a continuum with pure invention on one end and pure innovation on the other. The number of innovations spurred by any new invention is likely to be exponential.

Invention<————–>Innovation

This guide is intended to introduce major concepts, tools and general frameworks for innovation writ large so that the reader can begin to better understand and harness the power of innovation. While not everyone can be an inventor like Edison, Tesla or Marconi, with a minimum of consistent practice almost anyone can master some basic concepts and become an innovator.

Deconstructing Innovation

This guide is intended to provide a set of diagnostic tools and/or simple heuristics for rapidly deconstructing products, services and ideas into the core components of innovation. In today’s world of medicine, the suite of diagnostics would include body temperature, blood pressure, blood tests, urine tests, x-rays, genetic testing, MRI, CT scans, etc.– each providing a different POV, perspective or angle from which to look under the hood.

Our set of diagnostics is a bit more metaphorical and less scientific but might prove very helpful nonetheless. If one tool is not cracking the code, try another… or better yet, come up with your own tools.

Examples of our diagnostic tools are:

-Connectivity

-Network effect (Metcalfe’s Law)

-Centralized-Decentralized-Distributed Systems

-Strong Ties/Weak Ties (Mark Granovetter)

-Processing/Price-performance (Moore’s Law)

-Transistor Effect (Moore’s Law)

-Killer App

-Analog to Digital

-Share-ability

-Scalability

-Social

-Open Source (Cathedral and the Bazaar/Eric Raymond)

-Proprietary versus Open Source design

-Choice/Famous Jam Study (Sheena Iyengar)

-Threshold Resistance (Alfred Taubman)

-Inter-operability (Plug and Play)

-Modularity

-Gig-ability

-Jobs-to-be-done (Christensen)

-UX (user experience)

-UI (user interface)

-Utility-centric

-Identity-centric

-Disruptive versus sustaining innovations

Products, Services and Ideas.

Putting aside the subtle distinction, we generally view products and services as the milieu for invention and innovation. But a more expansive view, one that gives an important perspective, is the invention or innovation of ideas… often big, or really big, ideas: language, burial, religion, art, hunting and gathering, agriculture, democracy, philosophy, mathematics, war, psychology, etc. This broader take on innovation and invention invites us to explore certain anthropological factors and insights into the evolution of human civilization. A deep dive into the interplay between technology and culture can’t help but lay the foundation for a better understanding of the full potential of innovation.

Are there really any new ideas? We learn by imitation. We remember by stories. One of the greatest (and most controversial) achievements in intellectual thought was Charles Darwin’s theory of evolution, set forth in his magnum opus On the Origin of Species. Ironically, neither of the two terms most famously associated with Darwin– “natural selection” and “survival of the fittest”– was his alone. The concept of natural selection was arrived at independently by biologist Alfred Russel Wallace, with whom Darwin had collaborated. It was also Wallace who suggested to Darwin that he consider using philosopher Herbert Spencer’s more precise term “survival of the fittest” rather than “natural selection.” Darwin agreed, incorporating Spencer’s phrase, but not until the fifth edition of Origin. Open-mindedness and a willingness to adapt are crucial virtues of great innovators.

In understanding innovation, it is helpful to remember the oft-repeated saying: “good artists borrow; great artists steal.” (See T.S. Eliot, James Joyce, Keith Richards, Jimmy Page et al.)

In his book The Fatal Conceit, one of F. A. Hayek’s great insights is that humans learn more by imitation than by reason. Trends are often spurred by imitation, impulse, and intuition. In a world of social media influencers and celebrities, things that go viral do so not out of any particularly deep introspective process or rationale. Keeping up with the Joneses… or the Kardashian effect. Culture is forged in the crucible of imitation.

Malcolm Gladwell is perhaps best known for his book The Tipping Point, which is an excellent resource on understanding trends. The concept of a Tipping Point is what Gladwell refers to as the “biography of an idea” somewhat akin to the “straw that broke the camel’s back.” While Gladwell certainly popularized the concept, he seems to have borrowed the term Tipping Point itself from others as well. (Remember: good artists borrow, great artists steal.) 

He also learned not to give credit “as-you-go” in footnotes at the bottom of the relevant page, but rather to cleverly acknowledge the work of others in endnotes at the back of the book– a very clever innovation indeed!

As humans moved toward bipedalism, a host of physiological changes occurred, largely related to the size of the human brain. Fire enabled the cooking of meats and roots, radically altering the caloric intake of the human diet. It is estimated that the human brain makes up only 2% of the human body but consumes 20% of its calories. While Darwin’s survival of the fittest really applied more to adaptability than to physical strength, one interesting case is that of the stronger Neanderthals, who didn’t survive: their physical prowess never created the need for the greater socialization and communication of the weaker Homo sapiens. As it turns out, strength became the Neanderthals’ weakness– a true paradox in which their strength and resilience led to their demise.

So stay tuned, and welcome to the Innovator’s Guide to the Universe.


Quantum Capitalism: The Next Financial Paradigm?

If one were to examine the current global financial system, it is apparent that capitalism works well during the boom part of the boom-bust cycle but not so much during the bust phase, which recurs with alarming regularity. In an increasingly complex system, the financial referees generate hundreds and thousands of well-intended checks and balances– laws, rules, regulations, etc.– that try to restrict or dampen systemic risk using a hodgepodge of conflicting and inconsistent guardrails to prevent the systemically important institutions from self-immolation. But “there was a flaw in my ideology,” as a famous central banker once said to Congress. One can bank on the cadre of incredibly smart seven-figure lawyers, eight-figure investment bankers, and nine-and-ten-figure private equity firms and hedge funds always outwitting the six-figure accountants, politicians, and regulators. Always have, always will. The job of the Olympic champions of finance is to legally circumvent the best intentions of the watchmen and protectors of the public good, who will always come up short in this mismatch. Think of it as IQ arbitrage.

Archimedes famously espoused that with a long enough lever and a fulcrum he could move the universe. Great wealth has inured to those who have mastered this principle: it’s all about the leverage. Surely this metaphor must have been the inspiration for Tom Wolfe’s Masters of the Universe trope. Deployed wisely, capitalism thrives on leverage– but only in the boom part of the economic cycle. When things go south we bring out the workout artists, and I don’t mean Jane Fonda. The opening chapters of Wolfe’s A Man in Full feature the character Charlie Croker, a composite of real estate developers from Atlanta including John Portman, whom I had the privilege of representing in the 1990s.

The scheme is actually quite simple and brilliant. Invest at 12% with borrowed money at 6%, leveraged 50%, and the return on equity will be 18%; if leverage is increased to 80%, and the price of debt rises to 8%, the return increases to 28%. With a typical hedge fund or private equity firm deploying vast amounts of capital under the “2 & 20” model, it is easy to see how Stephen Schwarzman, Ken Griffin, Ray Dalio, Leon Black, and Henry Kravis got so rich. It’s all about Assets Under Management (AUM). Walk into any Park Avenue cocktail party and see how long it takes to be subtly informed by any serious player what their AUM is.
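For readers who want to check the arithmetic, here is a quick sketch of the leverage math in the paragraph above (illustrative figures only):

```python
# A quick check of the leverage arithmetic above. Illustrative figures only.

def return_on_equity(asset_yield, debt_rate, leverage):
    """ROE when a position is financed with `leverage` (the fraction funded by debt)."""
    equity = 1.0 - leverage
    return (asset_yield - debt_rate * leverage) / equity

print(return_on_equity(0.12, 0.06, 0.50))  # 0.18 -> 18% ROE at 50% leverage, 6% debt
print(return_on_equity(0.12, 0.08, 0.80))  # 0.28 -> 28% ROE at 80% leverage, 8% debt
```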

If a firm or individual has $100 billion of AUM, they are making $2 billion a year, give or take, just from their 2% annual investment management fees. Of course, the firm has to pay its employees, rent and operating expenses, but the economics of the AUM model scale brilliantly. While there are a million variations of the scheme, it is generally referred to as the “2 and 20” model: 2% annual management fees plus 20% of the profits. The management fees are taxed as ordinary income; the financial rewards from the carried interest (a percentage, typically 20%, of the profits, known as the “carry” or “promote”) are taxed as capital gains. To Stephen Schwarzman, this preferential tax treatment of the carried interest is the single most important foundational pillar of good old-fashioned American capitalism– and in turn democracy. There are a million variations of the 2 & 20 scheme: prefs (preferred returns), waterfalls, look-backs, catch-ups, ratchet-ups. It is the carried interest that makes the world go round. As the returns to investors, known as the limited partners or LPs, increase, the carry (aka the promote) to the General Partners or GPs (known as Masters of the Universe) might ratchet up to an even bigger piece of the profit pie. Not surprisingly, the math is pretty straightforward: that’s why we have so many billionaires spewing forth from the investment management industry. America. What a great country! Income and social inequality be damned. Some might even say it’s time for a new model to avoid the “heads on pikes” annoyance of the French Revolution.
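And here is a bare-bones sketch of the “2 & 20” economics with hypothetical numbers; real funds layer in the prefs, waterfalls and catch-ups mentioned above, so treat this as directional rather than definitive:

```python
# Bare-bones "2 & 20" sketch. Hypothetical numbers; real funds add prefs,
# waterfalls, look-backs and catch-ups, so this is directional only.

def two_and_twenty(aum, gross_return, mgmt_fee=0.02, carry=0.20):
    """Split one year's economics between the GP and the LPs under plain 2 & 20."""
    management_fee = aum * mgmt_fee
    profit = aum * gross_return
    gp_carry = max(profit, 0.0) * carry
    lp_net = profit - management_fee - gp_carry
    return {"gp_total": management_fee + gp_carry, "lp_net": lp_net}

# $100 billion of AUM earning a 10% gross return in one year:
print(two_and_twenty(100e9, 0.10))
# GP collects ~$2B of fees plus ~$2B of carry; LPs keep roughly $6B of the $10B profit.
```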

Even though Albert Einstein was deeply troubled by the quantum mechanics that, along with his own relativity, obliterated Newtonian physics, he famously stated that “reality is an illusion, but a very persistent one.” All finance is relative: change the price of one asset or commodity and the price of every other asset in the financial system changes as well. The more volatility, the more profit traders make simply trading the volatility. Hmmm. The Fed wants price stability and traders want volatility? Sounds like a recipe for disaster. But as Pope Francis famously said, “Who am I to judge?”


A Fearful Symmetry: Predator Drones, CDOs, and All the Bonds In Christendom

by Craig Hatkoff, Rabbi Irwin Kula with Professor Clayton M. Christensen

(This is a previously unpublished article from August 2010 being released in memory of Clay’s passing in light of the emerging COVID19 crisis.)

When Risk Goes Asymmetric Strange Things Can Happen

J. P. Morgan

About the culture at his previous gold trading firm, J. Aron, subsequently purchased by Goldman Sachs, former Goldman CEO Lloyd Blankfein notoriously quipped, “We didn’t have the word ‘client’ or ‘customer’ at the old J. Aron. We had counterparties…” That quote, in our minds, serves as a crease in time. Even as a joke, this little zinger sums up one of the great issues confronting capitalism: structural, asymmetric risk where information systems are dangerously skewed against the less informed. While not illegal, there is something borderline immoral, or at least a bit creepy, about the unchallenged feeder system between Goldman Sachs and the rest of the world’s power regimes populated with former high-ranking Goldman alums. To dispel any notions of Pollyannaism: bacteria, rarely admired for their aesthetic qualities, are admittedly an essential part of any healthy, self-correcting system in dynamic equilibrium. Try making yogurt without bacteria! Matt Taibbi’s epic article in Rolling Stone permanently etched into the world’s psyche the moniker of Goldman Sachs as a giant vampire squid, sucking the blood out of America, Americans, and any other organism living or dead. Let’s just face it: the Goldman gangs are the smartest and best-informed guys/gals in the room. Just the way it is. Until…

But a larger storm arises in the midst of any good ole financial crisis, where the rich get richer for being smarter and better informed, creating a structural asymmetry in which moral hazard, personal accountability, and outright humanity get lost. When there is no face to a crisis it is hard to foment a communal empathic response. On July 25, 1985, Rock Hudson’s press secretary publicly confirmed he was suffering from HIV/AIDS; two months later he was dead. The announcement itself shook America out of its faceless indifference to a brewing storm: “Oh no! Rock Hudson has AIDS????” fretted an awakened citizenry. The AIDS epidemic was now real.

So far, while many high-profile celebs have contracted the COVID-19 virus, none has succumbed and emerged as a fatality statistic. This too shall change at some point. The demise of Tom Hanks, Idris Elba, Prince Charles or other similarly-ranked notables would quickly put a face on this novel coronavirus. That would become a psychological tipping point. One need only look to Kobe Bryant’s demise, which led to a national depression and state of mourning.

Congressional testimony can be a hellish experience for bank titans. In the aftermath of the 2008 financial crisis, Goldman’s Blankfein gave a few bucks at the office after his Abacus went awry, but he could do the math in his head: $500 million was a spit in the bucket to get back to the business of trying to wipe the calamari omelet off the firm’s face. J.P. Morgan Chase’s Jamie Dimon emerged relatively unscathed, but feeling like a bride betrayed, left standing at the altar as a once-presumptive Secretary of the Treasury before the bar was lowered to admit minors.

Yet another lesson should not be forgotten from the Grand Overlord of finance and eponymous founder of the House of Morgan; J. P. Morgan himself barely survived his congressional testimony on the Panic of 1907. Perhaps apocryphally, some historians have suggested that the stress of his Pujo Committee testimony (which effectively ended the “money trust”) did in the already-ailing, bulbous-nosed banker just three months after his appearance. Morgan had served as the world’s lender of last resort long enough and was replaced by a new lender of last resort: the Federal Reserve. Morgan had financed much more than just wars—he financed just about everything, including unjust wars, banks, railroads, steel mills, and yachts. Yet throughout his illustrious career, one principle was inviolate: character and trust matter. Consider Morgan’s famous exchange with committee counsel Samuel Untermyer at the congressional hearings:

Untermyer: Is not commercial credit based primarily upon money or property?

Morgan: No, sir. The first thing is character.

Untermyer: Before money or property?

Morgan: Before money or anything else. Money cannot buy it … a man I do not trust could not get money from me on all the bonds in Christendom.

And while character determines trust so do consequences.  To paraphrase Sam Johnson: Nothing focuses the mind like a hanging.

So what roles do character, trust and consequences (the cauldron of morality) play in the amoral framework and underpinnings of capitalism, purportedly guided by self-interest, rational behavior and an invisible hand? Remove consequences, trust, and character from the traditional banking relationship between borrowers and lenders, and market behavior has all the predictability of Brownian motion. Examining what happens when borrowers and lenders become decoupled from the risk of loss of capital through financial prestidigitation (aka financial innovation) could provide a provocative clue to what might happen when traditional combatants are also decoupled from the risk of loss. A successful suicide bomber and death go hand-in-hand. But a predator drone neatly eliminates the honor quotient inherent in hand-to-hand combat. The risk relationship becomes inversely asymmetric and alters the nature of warfare itself.

As well he should, William Blake had a healthy fear of tigers, whose unsurpassed predatory skills had few, if any, rivals. He certainly had a way with words but perhaps was a bit of a closet military strategist. Remember the famous scene in the coming-of-nuclear-age film War Games, where WOPR (War Operation Plan Response), a slightly schizophrenic military uber-computer (and close relative of HAL from Kubrick’s 2001: A Space Odyssey), has a truly senior moment and calculates that any nuclear scenario is completely unwinnable. A frustrated WOPR blows a few fuses and whistles, ultimately determining that the only way to win is not to play; WOPR entreats a peach-fuzzed Matthew Broderick with the classic “how about a nice game of chess?”

The lethal weapons of yesteryear’s military-industrial complex have simply become too big to succeed, leaving deterrence as their only practical value. Prophetically, War Games serves as a cautionary tale; today our most sophisticated aerial weapons and delivery systems are also over-powered, over-designed and over budget. Yet the UAV Predator drone and its relatives are a stunningly disruptive military innovation at a fraction of the cost; a $4 million Predator, clearly cheaper than and inferior to the $350 million F-22, is still good enough to get the job done: taking out troublesome non-state actors and other evil-doers around the globe. Aquinas in the Summa Theologica actually coins the phrase evil-doers in his exegesis on just wars. The drone, not likely contemplated by Aquinas (who was no William of Ockham), is a construct of more modern, radical thinking. Like the zany world of quantum physics and its wave-particle duality, disruptive innovation theory creates a paradoxymoron: less is more.

Fast forward 25 years: meet the Volcker Rule 2.0 for military engagement. Hopefully, the oversight, safeguards, and regulations are more effective than our rating agencies and oil rig regulators. Now apply a fractured corollary of the Volcker Rule (affectionately recapitulated as Volcker Rule 2.0) to our arsenal of high-tech military technologies and massively damaging weapon systems: by and large, they are simply “too big to succeed.” The capabilities of these feats of engineering supremacy and modern military miracle weapon systems have far outstripped the required response and resources needed to combat the true twenty-first-century threat: non-state actors.

Slim Pickens in Dr. Strangelove

Now, from the secluded safety of CENTCOM trailers at Nellis AFB outside Las Vegas, pilots in decal-decorated flight suits sit in front of Sony PlayStation-like equipment, joysticks in hand, and take out terrorists halfway around the world. The drone’s function is similar to a suicide bomber’s, yet the drone pilot’s persona is antithetical to that of a successful suicide bomber, who faces asymmetrical assured personal destruction (or at least a re-arrangement of genitalia for those less successful). A drone pilot is the exact inverse: enjoying asymmetric personal risk, i.e. no risk at all—at least physically. Yes, there is the possibility of post-traumatic stress, but bodily harm is removed from the equation unless the pilot accidentally slips on the step or acquires a nasty case of carpal tunnel syndrome. Joysticks do create an unavoidable occupational hazard. Who can forget the searing imagery of Slim Pickens in Dr. Strangelove personally delivering, rodeo-style, the ancestor of Fat Man and Little Boy to its targeted destination on a nuclear bucking bronco; now that was honorable!

The drone is a perfect, but not the only, example of the infrequent disruptive military innovation: an intentionally inferior, cheaper, less complicated product that is good enough to get the job done.

Other disruptive military innovations have distanced the combatants’ risk but not eliminated it entirely. The inspiration for the Hundred Years War was the late 13th-century disruptive innovation known as the longbow. With the dramatically increased range of these inexpensive armor-penetrating missiles, the personal-risk goal line was moved back a couple of hundred yards; English archers could now pierce the armor of the French cavalry with their sky-blackening “rain of death,” putting an end to the monopoly on fear enjoyed by a French cavalry that could literally steamroll opposing forces who were no match.

But shock-mounted combat itself was enabled by an earlier disruptive military innovation–the stirrup–thought to have been invented as far back as 2500 B.C. This one little innovation–a strap of leather or forged metal that could secure the rider to his steed–created the equivalent of a modern-day tank. The horse and armor were not disruptive innovations themselves, but this single inexpensive component, the stirrup, would knit together a complex linkage of capabilities, enabling France to become complacent about its need for disruptive military innovation. France settled for sustaining innovations–better, stronger armor, jousting tournaments, a hinge or metallic doohickey here or there. But the longbow in English hands would transform a smaller, poorer nation into a category killer; the raw trajectory power of this simple weapon could take out a knight and a horse in one shot. A nation one-fifth its size decimated the French with this incredibly less complicated weapon during the Hundred Years War.

The reach of the longbow doubled the efficacy and power of the conventional bow and ended the monopoly of shock-mounted combat that debatably created the need for the feudal system itself: horses and armored knights were an expensive proposition, and economic evolution was needed to feed the beast. Yet the product and the process of implementing the longbow strategy were inextricably linked. All able-bodied Englishmen were required to practice archery on weekends, if we can believe Froissart’s account, and no child was left behind. Perhaps our national education system can take a page on working weekends to remain competitive in the global arena of economic warrior nation-states.

But today’s pesky, invisible non-state warriors are not likely to show up at a modern-day equivalent of the Peace of Westphalia–even if we could figure out whom to invite. Westphalia was the recognition that gunpowder and guns would result in the mutually assured destruction of Protestant and Catholic alike. After all, what’s the big deal about Protestants and Catholics getting along? It was really economics and raison d’état, not religious ideology, that was at the root of the struggle and of balance-of-power politics. Cardinal Richelieu had no problem backing Protestant forces when it served the interests of France. Nor did the flip-flopping Protestant-turned-Catholic Henry IV, who said, “Paris is well worth a mass.” And the religiously checker-boarded Valtelline and Grisons valleys may have had more to do with Pope Urban VIII throwing his friend and beneficiary Galileo to the Inquisition than any disagreement between science and religion. Most scientists today understand that Galileo’s Copernican proofs were rubbish, as did the Jesuits at St. Ignatius’ Collegio Romano at the time.

Shortly after September 11th, an Army four-star general was asked at a private lunch: how can we end terrorism?  He responded. “The military can’t end terrorism.  We can just kill terrorists.  We will track ‘em down one by one. Knock on their door.  Drag ‘em outside and kill ‘em.  Put a bullet in their head and go knock on the next door.  Ending terrorism?  That’s not our job.  That is up to you.”

In 2002 things suddenly changed with the deployment of a truly disruptive military innovation, a modern-day longbow that quietly arrived on the scene: the Predator drone. But unlike the longbow, which pushed the combatants’ danger back a few hundred yards, the drone would push it back halfway around the world. No more of that messy business of knocking on doors; we were now technically able, with low-end performance, to deliver colonoscopies to would-be evil-doers without them even knowing. Drones are not visible cruising at 15,000 feet– no rattle, no hum. Just a Hellfire missile up the butt.

Predator drone launching Hellfire missile

According to the recent U.N. Special Rapporteur’s report, in April 2002 the Russian armed forces took out “rebel warlord” Omar bin al Khattab in Chechnya. Then in November 2002, alleged al Qaeda leader Ali Qaed Senyan al-Harithi and five other men in Yemen were vaporized, reportedly by a CIA-operated Predator drone using a Hellfire missile.

The efficacy of the drone as a disruptive innovation was captured in part by Captain Terry Pierce in his 2005 book on disruptive military technologies. Pierce offers a glimpse of the benefits, as well as the difficulties, associated with implementing wrenching change in the face of resisting stakeholders, both inside and outside of the military. Civilians and maverick officers play a critical role in innovation, but only under certain very limited circumstances. Disruptive innovations are scary business if you are a career military man, particularly during peaceful periods.

But September 11th changed things dramatically. It was a “crease in time,” according to the same four-star. Today Defense Secretary Robert Gates seems to get the need for disruptive military innovation; Gates has urged his weapons buyers to rush out “75 percent solutions over a period of months” rather than waiting for “gold-plated” solutions. The Predator drone and its unmanned relatives seem to fill that order; it is estimated that nearly one-third of the drone fleet has crashed. But at $4 million a pop you can lose a lot of drones and still beat losing one F-22. The math is pretty straightforward. As Voltaire said: the perfect is the enemy of the good. Or, as we say in the parlance of disruptive innovation: the good enough!

In an influential 1998 manifesto-styled essay-metaphor on the open source movement (The Cathedral and the Bazaar), Eric Raymond offers an off-handed, second-derivative yet provocative observation: "distributed problems require distributed solutions."  Drones may be the ultimate distributed military solution for a clearly distributed problem—non-state actors.  But the risks and issues are great.  The recent U.N. report lashes out against "targeted killing" and is not amused by the joy-stick nature of the new War Games.

When combatants are completely removed from risk of injury something very strange can happen.  As in the sub-prime crisis and related contagion, where CDOs and credit default swaps decoupled bankers and mortgage brokers from owning any risk, behavior becomes very distorted.  The sanctity of the borrower-lender relationship breaks down.  Alignment and incentive, the time-tested governors of free markets, also break down.  The amoral if not immoral behavior of the rating agencies further exacerbated the systemic failure as investors, CEOs and regulators were lulled into a sense of complacency like the French cavalry.  How can something go from AAA to a total loss overnight?  How could Alan Greenspan have been so wrong about free markets and rational behavior by market participants who had every incentive to self-regulate, certainly at the margins of three standard deviations?  But when the Volcker Rule confirmed that "too big to fail" was not an option, a terrifying realization became apparent: moral hazard, where "heads I win, tails you lose" behavior rules, was acknowledged as truly risky business.

Our Volcker Rule 2.0 reveals a paradox: our military might be too big to succeed.  Cheaper, inferior weapons like drones that are good enough to get the job done might be the answer.  But we should proceed with great caution.  The decision to invade Iraq was subject to the Pottery Barn principle of "you break it, you own it."  When secluded combatants, like bankers, no longer own the risk, bad behavior is hard to manage.

As a thought experiment, just imagine a drone pilot securely situated in Las Vegas who sees Osama Bin Laden in his sights, perhaps somewhere in Iran.  And just imagine the unspoken national interest is best served by the current containment rather than by Bin Laden's capture, which would mean a lengthy trial and possible execution.  One click of the joystick and a Hellfire Martyr-Maker hits its target and earns legendary status for the gamer who took out OBL.  Remember that the chain of command for a drone is not exactly like that for dropping a nuclear weapon.  Rumor has it Hollywood is feverishly working on War Games III: Attack of the Drones, coming to a theater near you.  Rent before you buy.

Oh, by the way, just wait until our non-state adversaries assemble their own fleet of Predator drones operated by a stripped-down free Twitter app currently under development.  The good news: the app, with full command and control features, will undoubtedly be priced at $3.99.  Thank you, Steve Jobs.

http://www.csmonitor.com/CSM-Photo-Galleries/In-Pictures/Drone-jockey-New-Air-Force-poster-boys

Paradigm Rift: Negative Interest Rates (NIR) and the Highly Speculative Theory of Financial Relativity

From Black-Scholes to Black Holes: Paradigm Shift-Drift-Rift

As every MBA knows or should know, there are perhaps four bedrock analytical tools of modern finance: i) Net Present Value (NPV); ii) Internal Rate of Return (IRR); iii) the Black-Scholes Option Pricing Model (Black-Scholes); and iv) the Capital Asset Pricing Model (CAPM). As global interest rates, risk-free and risk-adjusted alike, have approached zero, the problem of the zero lower bound has created a vexing challenge. As more and more anomalies in the financial markets become evident, the conditions precedent for a classic Kuhnian paradigm shift are upon us. Like the center, the anomalies cannot hold.
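To see why the zero lower bound unsettles that toolkit, here is a minimal sketch of the NPV workhorse under positive, zero, and slightly negative rates. The cash flows and rates are illustrative assumptions only:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] arrives at the end of year t (t=0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative project: pay 100 today, receive 20 at the end of each of five years.
project = [-100] + [20] * 5

for r in (0.05, 0.0, -0.01):
    print(f"rate={r:+.2%}  NPV={npv(r, project):+8.2f}")
```

At +5% the project is rejected (NPV is roughly -13); at 0% it is exactly break-even; at -1% the identical cash flows flip to a positive NPV, since money tomorrow is now worth more than money today. The machinery still runs, but the intuitions baked into NPV, IRR hurdle rates, CAPM risk premia, and Black-Scholes drift terms begin to misbehave, which is precisely the kind of anomaly we have in mind.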

As Thomas Kuhn pronounced in his seminal work, The Structure of Scientific Revolutions, within a given scientific field or domain, as anomalies, or deviations from the expected, are observed that are not predicted or explained by the existing paradigm, a crisis begins to develop. When there is an emergent theory or model that can better explain or mitigate the anomalies, this new and improved theory begins to resolve the crisis, and a new paradigm–a new descriptive model–replaces the incumbent. However, a paradigm shift can only take place when a better theory is emergent. This begs the question: what happens when a plethora of anomalies presents itself but no better or alternative theory is in sight, let alone on the distant horizon? We would posit that when this paramount condition precedent fails to present itself, one of two outcomes will occur: i) a paradigm drift or ii) a paradigm rift.

America’s “Exorbitant Privilege”


In Memoriam: Professor Clayton M. Christensen

"A professor, a rabbi and an entrepreneur walk into a bar…."  As it turns out, the professor doesn't drink, and the "bar" is the Harvard Business School Faculty Dining Room. Otherwise, what has been a running joke for years is a true story. The year was 2007 when we met with Professor Clay Christensen for a remarkable lunch.  The "we" is Irwin Kula (the Rabbi) and Craig Hatkoff (the Entrepreneur). That lunch with Clay would change our lives.  We had been meeting regularly for years to discuss innovation, technology, and religion. A new player, larger than life, was about to join us in our sacred space. Enter the Professor.

The Entrepreneur and the Professor had established a relationship back in 2000 when Clay invested in Hatkoff's start-up, Story Orbit, an online children's publishing venture. Clay had come across the start-up while reading the term paper of one of his students and invited Craig up to Harvard to discuss what he saw as vast disruptive potential and one of the better examples of disruptive innovation he had encountered. At the time, Craig was unfamiliar with Clay, his emerging theory, and his growing influence in corporate boardrooms. Clay had appeared the year before on the cover of Forbes magazine with Intel CEO Andy Grove, who is credited with turbo-charging Clay's reputation by crediting him as the inspiration for the Celeron chip.

But in 2000, a far cry from the stature of Andy Grove and the Celeron chip, this children's publishing start-up had piqued Clay's intellectual curiosity enough for him to invite an unknown entrepreneur to meet with him to discuss Story Orbit; it was quintessential Clay. It reflected not only his curiosity but his passion and generosity as well. Clay made a modest investment and offered to serve as senior adviser. Unfortunately, Story Orbit turned out to be too far ahead of its time: it was a broadband application when only 2 million Americans had high-speed internet.  Though the venture never launched, considerable pre-development funds had been spent. Hatkoff returned Christensen's investment plus a rate of return based on Christensen's historical returns—a rather unconventional gesture but one that created an enduring relationship. "It just seemed like the right thing to do," reflected the Entrepreneur.

Fast forward to June 2007. For some reason, inexplicable to either of them, the Entrepreneur gave the Rabbi a copy of the Professor's magnum opus, The Innovator's Dilemma, with the following inscription: "I am not sure why I am giving you this book… but you might find it interesting."  The next morning Kula called Hatkoff and said, "OMG!  I was up all night reading the book. I finally know who I am and what I do. I am a disruptive spiritual innovator!" This led the self-described, occasionally devout atheo-agnostic Entrepreneur and the Disruptive Rabbi to develop a social enterprise initiative that applied Clay's theory to the realm of religion and spirituality. Realizing we were taking some broad liberties with Clay's "baby," we thought it would be both respectful and wise to at least run the idea by him. We sent the proposal to Clay.  His office confirmed receipt of the email but cautioned it might take several months to get a reply.

Twenty minutes later, Hatkoff got a surprise call from Clay. "Do you think you and your rabbi could come to meet me for lunch? Craig, I don't know if I ever told you this, but I am like an Archbishop of the Mormon Church. You and your Rabbi are onto something I've thought about for many years.  I'd like to be involved."

So, there we were at lunch. It was like being at Hogwarts with Professor Dumbledore.  But Clay, with his gentleness, grace, and warmth, quickly put us at ease.  Towards the end of an extraordinary conversation about disruptive innovation and religion, the likes of which had undoubtedly never happened before in the HBS dining room, or perhaps anywhere else on the planet, Irwin nonchalantly said to Clay, "I assume, Professor, you always viewed Mormonism itself as a disruptive innovation?"  For what seemed like an eternity, Clay was silent.  "By golly, isn't that interesting! I never thought of it that way!"  Clay then reiterated his desire to be involved with us and help us in any way he could.

By 2009, the three of us had formed the Disruptor Foundation, dedicated to advancing disruptive innovation theory in society-critical domains. We were honored when Clay would casually refer to us as the theory's "advance research and development arm." We were his para-theorists, exploring and thinking about applications of his theory to new frontiers.

One of Clay’s remarkable qualities was his openness and commitment to learning about any and all anomalies that his theory did not explain. As a profound theorist, he embodied the rare combination of brilliance and humility.  Clay was never defensive! Always smiling, he was intellectually curious, empathic, present, joyful, and engaged.

By 2010, the three of us launched the Tribeca Disruptive Innovation Awards, now known simply as the Disruptor Awards.  The unspoken objective of the awards was not only to curate and celebrate innovations that fit the theory but also to showcase those anomalies that the theory, in its original form, did not explain.  As a genuine scholar, Clay enjoyed these exceptions as much as those things that confirmed his theory.

Over a ten-year period, we identified and honored over 250 innovators—ranging from the usual suspects (Twitter, Square, Wikipedia, Snapchat, Uber, etc.) to Nobel Prize winners to Ed Snowden, from the Mayors of Hiroshima and Nagasaki to a bevy of teen and pre-teen innovators.  Largely because of Clay's reputation and endorsement, we were able to create an ecosystem of some 400-plus fellows, all of whom were in some way disrupting the status quo. Every year, the night before the awards, we would hold our annual Spaghetti and Meet-ball Dinner for all the Honorees and Fellows.

While he rarely knew who we were honoring in advance, one of the hallmarks of the awards was Clay’s closing reflections and commentary on that which he had just witnessed. Always delivered with great joy and gratitude, he would implore all of us to continue to search for innovations that would make him rethink his theory and challenge the community to keep disrupting the status quo for the public good.

By 2013, the fourth year of the Awards, we began to see an unexpected pattern that was a departure from the original theory’s focus on the core utility of technology and business models. It was something that intrigued Clay. As there was no existing platform to encourage such kinds of discussion, we decided to launch our own publishing platform, known as the Off White Papers.  Rather than serving as an academic or scholarly peer-reviewed platform, it was a space to provoke new thinking about Disruptive Innovation Theory liberated from the rigors, and yes, the stress and biases, attendant to more serious academic undertakings.

Our inaugural Off White Paper, which included Clay as an author, introduced the concept of identity-centric innovation, where identity–people's worldviews, values, and belief systems, i.e. culture writ large–was a "formidable variable" in the successful diffusion of innovation. The essence of identity-centric innovation suggests that "…in those domains where stakeholders' identities are being challenged the identity function can often overwhelm the more straightforward utility of the innovation."  Clay's closing quote represented a quiet sea change in his thinking about his theory: "If we are to develop profound theory to solve the intractable problems in our society-critical domains…we must learn to crawl into the life of what makes people tick."

What this meant to Clay is that in society-critical areas such as education, healthcare, government, capitalism, and moral and ethical development, disruptive innovation would behave quite differently than in domains of pure utility-centric innovation. A bit of a stretch, but Clay was comfortable using the playful analogy of Newtonian physics versus quantum mechanics. Indeed, the title of our first article was Disruptive Innovation Theory Revisited: Towards Quantum Innovation.  This notion of identity-centric innovation required refocusing on the role of culture and identity, with a deep dive into the cultural ramifications, resistances, and obstacles, and rethinking the jobs-to-be-done framework altogether.  When there are multiple stakeholders with multiple jobs to get done, innovation behaves quite differently than when simply assessing whether new consumers would embrace the inferior MP3 file that clearly turned out to be good enough to get the job done.

In Clay's 2013 wrap-up of the Awards, he shocked the audience by stating that in his view three areas critical to our future desperately needed disruption: terrorism, parenting, and religion itself.  His concern was that strong moral underpinnings, largely absent in Adam Smith's Wealth of Nations, were fundamental to the success of capitalism, and as institutional religion eroded in America so did moral accountability.  This led to a more ambitious and provocative second installment of the Off White Papers penned with Clay–Disrupting Hell: Accountability for the Non-believer.

In 2016, at the last Awards he was able to attend, a tearful Clay, particularly moved by the slate of honorees, threw away his normal ten-minute wrap-up and unexpectedly delivered a deeply emotional sixty-second reflection that our words can't capture but that anyone who wants to understand the man needs to watch.

Members of the Church of Jesus Christ of Latter-day Saints believe that every individual has the opportunity to dwell with God after this life in a state of eternal joy. Clay’s untimely demise leaves us profoundly saddened.  The Entrepreneur and the Disruptive Rabbi are eternally indebted to the Professor who transformed our lives with his extraordinary mind, heart, and spirit.  We have no doubt that Clay Christensen is indeed in a state of eternal joy. Our job-to-be-done is to continue to support his legacy for the public good.


From Archilochus to Isaiah Berlin: In Search of a Few Good Hedgefoxes

“A fox knows many things, but a hedgehog one important thing” – Archilochus

The Greek poet Archilochus really started something. In today's world, it is worth asking whether someone with many small tricks will have a better chance of success than someone with one big trick. Tricks of the trade can be thought of as tools for getting a job done. But there is a dynamic tension in having too many tricks or tools; as Erasmus reminds us, "too many chefs spoil the brew."

The fox can escape the hounds through a portfolio of cunning subterfuges and stratagems, while the hedgehog has only one trick: to curl up in a ball with its prickly quills pointing outward to discourage adversaries. One stratagem–but a very effective one at that. We must navigate a world of gray filled with ambiguity, fuzzy contradictions, and paradoxes that can often lead to what Isaiah Berlin referred to as "incommensurable" truths or values. Each is true, but to borrow Einstein's frame-of-reference lens, it all depends on your point of view (POV), particularly when it comes to metaphysical matters. A stationary observer watching a moving train (at a constant speed) from an embankment will have a different frame of reference than an observer standing up in the train. This is physics, and it is at the heart of the theory of relativity. When it comes to metaphysical matters, POV is a function of one's worldviews, values and belief systems. The right to life is not more or less true than the right to choose. They are incompatible matters not subject to simplified, measurable analytics. These are Berlin's incommensurable truths and values. But they sure cause intractable responses from both sides of the tracks.

This is why our cultural ideologies are so hard to reconcile and why they create political dysfunction. The one-trick approach of the hedgehog is to curl up in a ball and assume a defensive posture. The fox, on the other hand, has a more robust bag of tricks; empathy, compassion, deceit and cunning are all contextual tools that can provide a more adaptive, flexible response. Knowing when to act more like a hedgehog or a fox will be a valuable skill as we move into a future where the military term VUCA comes into play. VUCA? What's VUCA? As the War College tells us: Volatility, Uncertainty, Complexity and Ambiguity. Some might recognize this as the Fog of War. Welcome to the VUCA-verse!

So it would seem that learning to blend the cunning of the fox with the less clever one-trick pony might be the best strategy in this brave new world of innovation. Morphing from all fox to all hedgehog, or to a combination of the two, leads us to believe that when it comes to successful innovation perhaps we are in need of a new breed of animal: meet the hedgefox. In the world of incommensurables–trying to measure two things that are not measurable–the single POV of a hedgehog will likely be ineffective. Trying to bludgeon climate change skeptics with data and science will only tend to harden their positions. Their view is most likely due not to any scientific argument or position but to their system of values.

With other litmus-test issues such as gun control, abortion, gay rights, and gender equality, for example, Isaiah Berlin's notion of incommensurability suggests that the two (or more) sides are not even having a conversation about the same topic. In a Wittgensteinian, Kuhnian et al. duck-rabbit framework, each side is capable of seeing either the duck or the rabbit, and there is sociological, psychological, or physiological blockage, or perhaps baggage, that causes even a well-meaning attempt at empathy to fail.

The value of choice versus cherishing human life cannot be reconciled; they are two completely different value sets. These issues are deeply embedded in individual and group identity, where worldviews, values, and belief systems trump all forms of rational argument. So be sure to sign up for our soon-to-be-launched HEDGEFOX ACADEMY. We will be looking for a few good men and women, and other humans with less traditional gender identities.


Thomas Kuhn Revisited: Paradigm Shift, Rift or Drift?

The expression paradigm shift first appeared in 1962 in Thomas Kuhn's seminal work, The Structure of Scientific Revolutions, one of the most important and cited works in all of academia. Kuhn examined the process of how, why and when the scientific establishment embraces new models, or paradigms, of how the world works. Kuhn claims that in normal science, when there is an increasing number of "violations of expectation"–i.e. more and more anomalies or aberrations appear that can no longer be explained by the theory–a crisis starts to emerge. When there is another competing model that can better explain these anomalies, the crisis will turn into a revolution in which the establishment embraces a new theory. It is not a gentle or gradual process, according to Kuhn. It has a tectonic effect and creates instability.

Premise: If no emergent theory or model is within grasp to replace the incumbent theory and resolve the crisis, then the prevailing theory will either limp along or be destroyed, as a bona fide paradigm shift will not be possible. The crisis will progress and transform in one of two directions: either

i) a paradigm drift, which can be thought of as "death by a thousand paper-cuts" and entails endless modifications of and/or band-aids on the incumbent theory; or

ii) a paradigm rift, which, in the words of Mark Zuckerberg, results from "moving fast and breaking things." The prevailing theory will be abandoned with nothing as a replacement, wreaking further anxiety and instability.

The timeframe for resolution of a drift will be considerable, leaving the mounting anomalies unexplained and gradually undermining the prevailing model over time.

A rift, on the other hand, will tend to be more sudden and violent in the absence of any replacement theory or model. An expectation of an ongoing inability to control the crisis will ultimately lead to a complete loss of confidence, with unintended consequences to follow. The financial crisis of 2008 would be a quite realistic example of a paradigm drift.

Since the scientific revolution, scientists have attempted to describe and explain observable phenomena in the physical world. The process of repeatable experimentation and observation is known as the scientific method. If an "experiment" is repeated, normal science accepted by the establishment should always predict the same result. But a theory is nothing more than a model, or paradigm, that is the best prevailing explanation or description of how the natural world works.

When an unexpected, new or different result is observed that diverges from the expectation, rather than shouting "Eureka!" scientists might be better served by asking, in the words of Isaac Asimov, "gee, isn't that funny?" When enough of these anomalies are observed, however, it is no longer a laughing matter and a real crisis emerges. If there happens to be an alternative or competing model or paradigm that had not yet been accepted by the scientific establishment, but can better explain those annoying anomalies, the crisis blossoms into a full-blown revolution that results in the establishment embracing the new theory. This process of anomaly-crisis-revolution results in what Kuhn calls a paradigm shift.

In its simplest form, a paradigm shift is a wholesale change by the scientific community to a new model or description of reality, a paradigm, that better explains observable phenomena or better predicts behaviors. Initially applied to the hard sciences, over time the theory has been expanded to cover just about everything: sociology, politics, religion, psychology, art etc.

In Gladwellian terms, there will be a tipping point at which the crisis blossoms into a full-blown revolution. If there is an alternate theory that better explains or solves some of the unsolved puzzle-solution sets, the mainstream scientists who had embraced the old theory, the normal science, abandon ship and begin to shift to the new theory, which becomes the new normal.

What is often overlooked


In Search of Dynamic Equilibrium: How Disruptive Innovation Can Succeed in Traditional Cultures

Embracing Mizan: From the Grand Mosque to the Burke-Paine Great Debates to the Moody Blues and Koyaanisqatsi: balance is always the right problem-solution.

The famous African proverb tells us “if you want to go fast, go alone. If you want to go far, go together.” For traditional cultures wrestling with the impact of the break-neck rate of technological change, this piece of ancient wisdom frames the dilemma: how to go far and fast at the same time?

The prevailing western narrative of technology–two guys in a garage–is fast and furious. Don't ask for permission, ask for forgiveness. "Move fast and break things. Unless you are breaking things you are not moving fast enough," as Mark Zuckerberg famously said. From fake news to cyber terrorism to unsupportable valuations in the field of unicorns, the risks and challenges of unfettered innovation are beginning to feel existential. Life out of balance.

The limiting factors and constraints of both modern and traditional cultures, broadly defined (laws, norms, customs, mores, etc.), are frequently ignored. Build the plane as you fly. The current rate of technological change lies somewhere between geometric and exponential. But the ability of even modern culture to process the change is at best linear, with fits and starts and bumps in the road. In the case of traditional cultures, which tend to be more conservative, the ability to absorb these jarring changes is even more problematic. If the technology curve is exponential and the culture curve is linear there will be an ever-widening gap: meet the techno-cultural divide. Bridging this divide will not be easy even under the best of circumstances.
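A toy numerical sketch of that claim (the growth figures are our assumptions, chosen only to show the shape of the two curves):

```python
# Illustrative only: technology compounds, culture absorbs change roughly
# linearly, and the techno-cultural gap widens every year.
tech_growth = 1.25      # assumed 25% compounding improvement per year
culture_step = 0.25     # assumed constant linear gain per year

tech, culture = 1.0, 1.0
for year in range(21):
    if year % 5 == 0:
        print(f"year {year:2d}: tech={tech:7.2f}  culture={culture:5.2f}  gap={tech - culture:7.2f}")
    tech *= tech_growth
    culture += culture_step
```

Whatever the exact parameters, as long as one curve compounds and the other does not, the divide only grows.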

Only recently has serious attention been paid to the role of culture in the successful diffusion of disruptive technologies and related innovation. To focus solely on the technology without a rigorous analysis and understanding of the cultural context can negatively impact outcomes by missing pain points and resistances that, in hindsight, should have been relatively easy to identify. This paper examines the art of the possible. It seems neither technology that ignores culture nor culture that ignores technology ends well. We are currently living in the former scenario while history amply confirms the latter. It all catches up in the end.

By techno-cultural divide, we mean the destabilizing and often messy tension between technology and culture; one might reasonably conclude that a focus on one comes at the expense of the other. If technology were the only consideration, one might go fast. On the other hand, if a reactionary culture prevents or impedes technological progress, a tribe or society might be left behind. But with the proper balancing of technology and culture, as we move forward, we just might get farther faster. The proper mix is merely a question of balance. Optimization over maximization. We refer to this balancing act as bridging the techno-cultural divide.

From the outsider's perspective, the principle of Mizan is perhaps one of the most sublime and beautiful elements of the Islamic religion. Loosely translated as "balance," this principle offers a fascinating insight inviting us to reimagine the process and role of innovation, not only in Islamic culture but in other tradition-rooted cultures as well. An intriguing question arises as to whether Mizan can be employed as a useful tool to better understand and manage the impact of "progress" and the process of change in identity-laden societies.

One might look at Godfrey Reggio's 1982 experimental film, Koyaanisqatsi, punctuated by Philip Glass' meditative but unnerving score; the dazzling visuals of industrialization layered with Glass' music overwhelm the viewer's senses in a disturbingly profound cinematic experience. Inspired by the Native American Hopi tribe's ancient word, koyaanisqatsi, the movie's title translates as life out of balance. It begs us to consider the question: yes to progress, but at what cost? In a holistic worldview unencumbered by the political, social and economic realities of the western narrative of progress, where intellectual rigidity and arrogance rule the roost, one must begin to contemplate and understand that progress in one dimension will always come at a cost in another. Optimization over maximization. The proper mix is merely a question of balance.

The trajectory of western civilization has experienced and absorbed epic disruptions from human progress as man learned to control or harness the environment and the material world. Over time, the dominant worldview and belief system of mankind evolved from the ancient God-centric sovereignty to the human-centric sovereignty of the Enlightenment to modernity, with materialism at its core. This materialism relegated the traditional anchors of the Divine and spiritual elements to secondary fiddle-dom: still part of the orchestra but not garnering the standing ovations. Materialism also began to overwhelm the critical inter-relationships among the divine, natural and human realms.

Starting with the introduction of the printing press, followed by the manifold inventions of the industrial revolution, the misery index of daily life was measurably reduced by civilization's increasing base of knowledge and man's creative ingenuity. Inventions such as the cotton gin, the automobile, railways, the radio, television, and most recently the internet have had undeniable effects generally perceived in the western world as improving the quality of life, with quantifiable benefits to humanity. The western narrative of progress, however, often makes short shrift of, or ignores altogether, the set of thorny issues and challenges arising from the unintended consequences of these disruptions, not the least of which are painful dislocations and destabilizing effects in society at all levels. This poses challenges as to whether and how technology and culture can best be balanced to minimize the attendant loss of tradition, virtues, and values. Technology moves at an ever-increasing rate of change–perhaps somewhere between a geometric and an exponential rate. Culture increases in fits and starts, at a linear or at best less-than-geometric rate. The gap between the rate of technological change and the rate of cultural change (adoption/absorption) is ever-widening under the current narrative of progress. Things seem to have run amok and life is out of balance. Without a fundamental change in this model, the gap will continue to expand until the two become completely disconnected. The necessary mediation is what we refer to as bridging the techno-cultural divide… before it is too late.

Bridging the techno-cultural divide.

Let us momentarily make a somewhat arbitrary but important distinction between invention and innovation, for reasons that will become apparent. Mankind has become increasingly adept at developing life-altering inventions, but it remains an often tedious, labor-intensive and lengthy process perhaps best characterized by the maxim: "invention is 1% inspiration and 99% perspiration." Even inventions or discoveries made by accident (Silly Putty or stickies are classic examples) often were part of an alternate, laborious pursuit. While the word invention is frequently used interchangeably with the word innovation, for our purposes, a helpful way to think of the dichotomy is that invention, broadly defined, is a new technology altogether; innovation, on the other hand, entails rearranging existing inventions or components thereof (i.e. products, services, ideas) in novel or unusual ways. While there is a fuzzy invention-innovation continuum, for purposes of clarity this wave-particle-like duality is being held constant.

New inventions often can and do unleash a torrent of innovation on an exponential scale. It is precisely this that concerns us: the rate of unfettered innovation and the ever-diminishing timeframes for culture, particularly traditional cultures, to process these innovations in a balanced and thoughtful manner. Customs, laws, norms, traditions, regulations, etc. evolve over longer timeframes, while the fast-and-furious technology-innovation regime increasingly precludes sufficient time for the affected society to adjust responsibly, given its inability to understand the unknowable unintended consequences and impacts on all the constituents and stakeholders. The western narrative of "ask for forgiveness not for permission," or Mark Zuckerberg's "move fast and break things," works until one day it doesn't. A reactionary resistance to change has equally perilous consequences, as change will be the only constant; being left behind can also pose existential risks and predicaments. Does the concept of Mizan offer an acceptable, if not happy, medium to navigate the choppy waters ahead?

In the life sciences, theoretical concepts such as stasis, entropy, equilibrium, disequilibrium, etc. are models or mental constructs representing scientists' best efforts to describe and predict the immanent laws of nature. History has repeatedly shown that from time to time these models become unreliable, outmoded or untenable as anomalies or aberrations appear in the observable phenomena that the prevailing theory cannot explain. At some boiling point this mounting "violation of expectations" leads to a crisis within a field of scientific endeavor. This crisis leads to a painful and destabilizing shift uprooting the established wisdom; it is the first phase of a paradigm shift. The violation of expectations is typically revolutionary and jarring, not the more gentle Burkean reformation of the existing order. Thus true scientific progress is radical in nature.

The concept of a paradigm shift was first identified by Thomas Kuhn in his 1962 monumental opus, The Structure of Scientific Revolutions, one of the most influential and cited academic books in history. Classic examples of bona fide paradigm shifts include heliocentrism, Newton's laws of motion and gravitation, Einstein's theory of relativity, and quantum mechanics. Kuhn's work has also been applied to many domains outside of the hard sciences–sociology, psychology, art, and religion, to name a few–think Freud's psychoanalysis, Picasso's Cubism and Martin Luther's Reformation.

The classic pattern of a paradigm shift takes place when aberrations or anomalies that disturb the social, political and economic order of things as they are known cause us to search for a better model or description, rather than to assume the underlying laws of nature have changed. In The Structure of Scientific Revolutions, Thomas Kuhn introduced the ground-breaking concept of a paradigm shift that occurs when there is a "violation of expectations": a crisis erupts within a field or domain (usually scientific), and when, and only when, an emergent theory mitigates or resolves the anomalies, then, and only then, will the stakeholders abandon the old crisis-riddled theory and embrace the new model. This is the essence of a legitimate paradigm shift.

The path to successful innovation in traditional cultures might lie in the application of four lenses or "tools of the trade" for innovation: (1) a simple-to-apply version of the Hegelian dialectical method (thesis-antithesis-synthesis); (2) Mark Granovetter's "The Strength of Weak Ties" (1973); (3) Thomas Kuhn's Structure of Scientific Revolutions; and (4) the virtuous sensibilities of Mizan, or balance.

While western sensibilities encourage and thrive on competitive instincts and zero-sum outcomes, embedded in Mizan's balancing principle are the virtues of civility and humility. This approach might enable us to mediate between two contradictory ideas, extract the non-conflicting elements and integrate the best of both positions: the synthesis. This process creates the conditions for establishing dynamic equilibrium, avoiding the contentious pitfalls of warring ideas.

History is replete with many well-known debates, but to illustrate our case we have selected the "Great Debates" of Thomas Paine and Edmund Burke, which lie at the core of the liberal and conservative political philosophies. These two men engaged in a battle of ideas through an ongoing series of articles, essays, and pamphlets over the last quarter of the eighteenth century. The topic: what was the best form of governmental structure to accommodate the inevitable progress unleashed by the industrial revolution?

In the left corner, Paine, father of the liberal tradition, favored radical change or outright revolution. In the right corner, Burke, father of the conservative movement, argued for much more measured reforms. In a nutshell: when it came to government, Paine, the eternal optimist who supported and encouraged the American and French Revolutions, wanted to throw the baby out with the bathwater: to start anew. On the other hand, the more cautious Burke, a monarchist, argued strenuously to embrace the core principles and foundations of history, tradition, and precedent: keep the best and throw out the rest.

Contrary to the popular view that Burke was a reactionary, he in fact embraced progress but believed in building upon time-tested institutional foundations and traditions. How to reconcile the optimism of Paine with the perceived skepticism of Burke is the true challenge at hand. Burkean ideas are unpalatable at the visceral level to the left, but perhaps a little repackaging can go a long way toward bridging the divide.

While not directly acknowledging Burke, in the marketplace of ideas Friedrich Hayek's famous quote is instructive: "the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design." This is Burke's point precisely. Since the complexity of the modern world leads to a vast trove of unintended consequences, even the most competent policymakers need to recognize their own limitations, and the resulting political rigidity, under conditions of extreme uncertainty. The more flexible free-market approach of experiment-evaluate-evolve to solve problems has great merit but is encumbered by grand-scale political processes. It all comes down to centralization versus decentralization of information and decision making.


In April 2013 we published our inaugural Off White Paper introducing the concepts of utility-centric and identity-centric innovation. The essence of the former was best captured in the seminal works on disruptive innovation theory by Christensen et al. (HBR '95, Innovator's Dilemma '97), which focused on products and services that served as utilities that "were good enough to get the job done." Brand loyalty at the low end seems to have played an insignificant role in disrupting industry incumbents; it was the ability of simpler, cheaper, more accessible products and services that were "good enough to get the job done" that served as the catalyst in successful disruptive innovation.

“Thanks to our sullen resistance to innovation, thanks to the cold sluggishness of our national character, we still bear the stamp of our forefathers.” –Edmund Burke

As subsequent books by Christensen et al. were published regarding the healthcare industry (The Innovator's Prescription, 2009) and education (Disrupting Class, 2008), the empirical results suggested that, in these domains, significant anomalies could be observed that were inconsistent with the theory's predictions and/or post-facto analytics. As more anomalies continue to emerge, a serious question arises: what is it about these particular domains that would yield results unexplained by the core theory?

Our curiosity and subsequent anthropological investigations have led us to suggest a conjecture about something we refer to as the utility-identity function: that in domains where people's worldviews, values, and belief systems–i.e. culture writ large–come into play, one should expect that the success and diffusion of innovations that impact people's identity (identity-centric) will behave quite differently than innovations that perform more straightforward functions and tasks (utility-centric).

It was reasonably clear that a 100% purity test would be oversimplistic and that all innovations contain elements of both identity and utility. An inexpensive Hyundai that will get you from point A to point B largely exhibits utility characteristics; a hybrid Prius might make an environmental statement about the owner, while a Ferrari makes a different statement about economic status or stylishness. All three examples get you from A to B but exhibit significantly different degrees of both utility and identity. A Hyundai might exhibit 90% utility, 10% identity, but a Ferrari 10% utility, 90% identity.

While much work needs to be done, we have concluded that the utility-identity function could serve as a helpful lens in analyzing and predicting the likely success of innovations, particularly in identity-laden domains such as health care, education, politics, ethics, religion, spirituality, music, and fashion.

"The circumstances of the world are continually changing, and the opinions of men change also; and as government is for the living, and not for the dead, it is the living only that has any right in it. That which may be thought right and found convenient in one age may be thought wrong and found inconvenient in another." –Thomas Paine


Disruptive Innovation Theory Revisited: Toward Quantum Innovation


 Disruptive Innovation Theory Revisited: Toward Quantum Innovation

By Professor Clayton M. Christensen, Craig Hatkoff, and Rabbi Irwin Kula 

This is the first of an ongoing series of original articles, essays, and papers published by the Disruptor Foundation, hereinafter known as The Off-White Papers. It is being published in conjunction with the fourth annual Tribeca Disruptive Innovation Awards, being held on April 26th, 2013.

Dr. Joseph Lister was inspired by the work of Louis Pasteur. Carbolic acid was the main ingredient in his new anti-septic techniques and the statistical results were nothing short of astounding.

Perhaps it was on the limb-strewn battlefields of the Franco-Prussian War in the 1870s that one disruptive innovation gained great favor with a whole generation of medical adherents.  Young doctors on the frontlines readily embraced Dr. Joseph Lister's new, and rather simple, technique for combat triage using anti-septic surgery for life-saving amputations and skin-piercing compound bone fractures. Previously, almost any large incision resulted in death from infection caused by unsanitary conditions.

Lister's carbolic acid concoction was easy to use and quite effective at getting the job done–even in the field of battle.  It prevented infection from what turned out to be airborne microbes. Unfortunately, the U.S. medical establishment did not embrace Lister's radical idea of germ theory even when presented with incontrovertible evidence.  They defended the long-standing medical wisdom that bad air, or miasma, was the source of infection–not invisible microbes.  Germ theory was rejected outright.  While ample documentation and statistics were provided by Lister to the AMA and the establishment, it would take a public outcry after the assassination attempt on U.S. President James Garfield, and his unfortunate, and probably avoidable, death, to create a serious enough crisis to challenge the entrenched thinking of the old guard. A paradigm shift was at hand.

The medical establishment's resistance to Lister's technique is an instructive narrative in trying to better understand innovations that, on the face of things, should catch on and spread rapidly.  Yet in certain domains, where entrenched worldviews, attitudes, and values are deeply woven into the societal architecture, innovation can come to a grinding halt.  This is particularly noticeable in those domains with multiple stakeholders whose identities and livelihoods are being challenged by the threat of innovation. In those situations where the job is simply getting well-defined tasks done, a product or service's utility is the main driver.  But in those domains where stakeholders' identities are being challenged, the identity function can often overwhelm the more straightforward utility of the innovation. In turn, the predictive power of disruptive innovation theory is diminished.

…in those domains where stakeholders’ identities are being challenged the identity function can often overwhelm the more straightforward utility of the innovation.

In spite of all the extraordinary technological progress that has taken place since disruptive innovation theory was first posited in 1997 (The Innovator's Dilemma, Christensen), certain domains have proved to be quite resistant or slow to adopt change. We have observed that these slow-to-change domains–education, healthcare, religion, conflict resolution, the environment, politics, and the military, to name a few–represent some of the most critical areas waiting to be disrupted. It is in these areas that innovation will be most necessary to meet the societal challenges of the 21st century.

If we are to develop profound theory to solve the intractable problems in our societally-critical domains….we must learn to crawl into the life of what makes people tick.

Disruptive innovations are simpler, cheaper, more accessible products or services, often created by "two guys in a garage."  Yet these inferior products and services seemingly decimate an industry leader who keeps on making very profitable, perfectly good products better and better, outstripping the needs and pocketbooks of already well-served consumers.  But new consumers are very willing to "hire" a cheaper product or service to carry out a specific task if and only if that product is good enough to get the job done.  Disruptive goods and services are marketed to the non-consumer or a non-existent market altogether with little, if any, profit—at first.  This is known as the innovator's dilemma.  It seems the best-managed companies' efforts to innovate are almost always ineffectual and most vulnerable to extinction: one need only be reminded that there are no integrated steel mills, steamships, or buggy whip manufacturers left in America.

Our observations lead us to suggest that in areas where a product or service’s utility is the dominant consideration the original theory still holds up quite nicely.  However, in domains where the consumer’s identity or the identity of other stakeholders is challenged, the theory encounters quite a few anomalies that should be re-examined to see if a more profound theory can be developed.  By identity, we are referring to the bundle of values, opinions, customs, and webs of relationships that define who we are both individually and collectively that transcend pure utility.

Inherent in every product or service is both a utility function and an identity function.  Understanding each of these functions and the interaction between the two might shed light on some of the anomalies observed in the original theory.  It appears that in utility-centric products and services such as mini-mills, semiconductors, disk drives, MP3 files, Wikipedia, Amazon, and the like, the original theory does keep its predictive potency.  Consumers and non-consumers with no vested interest in anything other than “getting the job done” will change behaviors quickly and readily with little anxiety.  They are simply focused on the product or service’s utility—and the incumbent will be disrupted.  All you need to think about is how quickly we “consumers” (or the new “non-consumers” as the case may be) migrated from vinyl to cartridge to cassette to CDs to MP3s on our iPods; from Encyclopedia Britannica to Wikipedia; from Borders to Amazon.

In utility-centric innovations water runs downhill; there seems to be very little consumer resistance to successful adoption and diffusion. Resistance to change, however, does often come from within: from industry incumbents whose jobs depend on maintaining the existing business model and power dynamics.  As Upton Sinclair said, "never expect someone to understand change when their livelihood depends on not understanding it."  As the theory predicts, the resistances are structural even at the best-managed companies, and hence the term innovator's dilemma: what managers in their right minds wouldn't pursue better margins from high-end consumers in predictable, well-established markets?  The answer is the disruptive innovator, an outsider, who creates a product or service for the non-existing consumer in a non-existing market for almost no profit. Hmmm.

“Never expect someone to understand change when their livelihood depends on not understanding it.”

In order to understand the difference between utility- and identity-laden products, just compare the $6,000 Kia with the $120,000 Ferrari.  The Kia clearly gets you from A to B much less expensively than a Ferrari.   But few Ferrari drivers are likely to be caught dead in a Kia and plenty of Kia owners would love to be driving Ferraris.  The real question is what jobs do the Kia and the Ferrari get done?  The Kia is more about the pure utility of getting from here to there, whereas a Ferrari, it could surely be argued, is more about my economic and social status. It is about "who I am."  While it will vary from person to person, a Kia might be 90% utility and 10% identity but a Ferrari might be 20% utility and 80% identity. To each his own.

We would argue that there is a continuum for all products and services for each person and group.  We refer to this continuum as the utility-identity (or util-identity) curve, analogous to an indifference curve in preference theory. One might think one hammer is the same as the next; it is almost 100% utility.  Any hammer will do for most jobs around the house unless it is that well-worn hammer you inherited from your grandfather, who happened to be a master craftsman.  That is where the identity function kicks in and rears its head.   So just imagine if someone developed an inexpensive electric hammer that required little skill to drive in nails perfectly, and what the predicted reaction to this innovation might be from carpenter union members, woodworking hobbyists, or any inheritor of a sacred family relic such as grandpa's hammer. One could expect far more resistance than from consumers with no such identity attachments.
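For readers who like to see the curve operationalized, here is a purely illustrative toy model (our own sketch, not part of the original theory); the utility/identity splits echo the hammer and Kia/Ferrari examples above, and the friction formula and its weights are made-up assumptions:

```python
# Toy sketch of the util-identity curve. All numbers are illustrative assumptions.
def adoption_friction(utility_share, identity_share, utility_gain, identity_threat):
    """Positive values suggest identity-driven resistance; negative values suggest adoption."""
    return identity_share * identity_threat - utility_share * utility_gain

products = {
    "generic hammer":   (0.95, 0.05),   # (utility share, identity share)
    "grandpa's hammer": (0.30, 0.70),
    "Kia":              (0.90, 0.10),
    "Ferrari":          (0.20, 0.80),
}

# Hypothetical disruptor: cheaper, good enough, but identity-threatening.
for name, (u, i) in products.items():
    f = adoption_friction(u, i, utility_gain=0.8, identity_threat=0.6)
    print(f"{name:18s} friction={f:+.2f} -> likely {'resists' if f > 0 else 'adopts'}")
```

The point is not the particular numbers but the shape of the prediction: the closer a product sits to the identity end of the curve, the more the same "good enough" disruptor meets resistance.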

Innovations that challenge identity will have a much different set of dynamics than those innovations of pure utility.  So we shouldn’t be surprised that innovations that make healthcare cheaper, simpler, and more accessible, produce resistances previously not addressed. Healthcare, as opposed to automobiles, even Ferraris, has many “identity stakeholders.”  These include the AMA, AARP, patients, doctors, nurses, FDA, and the politicians who confront their own self-definitions in responding to the threshold issue of whether to accept or resist the innovation in question.

Innovations that challenge identity will have a much different set of dynamics than those innovations of pure utility.  So we shouldn’t be surprised that innovations  that make  healthcare cheaper, simpler and more accessible, produce resistances previously not addressed.

Acceptance of a utility-centric innovation is much simpler than working through the more complex psychological and emotional issues deeply embedded in the web of relationships in an identity-centric innovation such as healthcare.  Further complicating the innovation drama is that every identity-centric innovation also includes utility stakeholders.  In healthcare, this group would include insurance companies, pharmaceutical companies, and medical equipment manufacturers who are largely indifferent to these critical identity issues and focus almost entirely and “callously” on the math: regardless of intentions or morality any decisions made on the basis of the bottom line will be seen as a sign of cold indifference to those whose identities are at stake.

Joseph Lister’s antiseptic surgery was documented and accepted in Europe for nearly 30 years before it was accepted in the U.S.  It took the assassination of President Garfield to force the AMA’s acceptance of germs as the source of infection and the usefulness of anti-septic surgery.  The stethoscope, a seemingly obvious innovation, took over a decade before its use was widely accepted by the medical community.

Ignorance is Bliss and often Gross: Famous portrait by Thomas Eakins shows renowned surgeon Dr. Samuel Gross and his team performing an amputation in unsanitary conditions. 'Little, if any faith, is placed by an enlightened or experienced surgeon on this side of the Atlantic in the so-called carbolic acid treatment of Professor Lister,' said Gross

The disruptive healthcare innovation of the nurse practitioner, which the theory predicts should be scaling much more rapidly, still faces significant resistance from doctors and patients (although this might be a generational consideration as we have seen with our young army medics) in spite of its obvious economic benefits and its ability to get the job done. But there is nothing more connected to a patient’s identity than his own health and mortality.  Similarly, palliative care and hospice encounter greater resistance than the existing theory predicts because end-of-life issues speak to our deepest values and beliefs, ethical considerations, and worldviews.  The emotions (or lack thereof) that surround cutting the cord on your landline and moving to your smartphone are very different indeed than “pulling the plug” on your loved ones.

It is not surprising, with such heavy issues of identity at stake, that all political conversations become polarized, deteriorating quickly into the death-panel-versus-quality-of-life debate.  Unlike the straightforward utility of a Kia for daily transportation, the basket of issues surrounding identity increases resistance and complicates the acceptance of hospice as a disruptive innovation. The "jobs to get done" by the disparate stakeholders in end-of-life management are often in conflict: minimizing pain, maximizing quality of life, observing religious beliefs, honoring professional ethics, obligations, and responsibilities, facing sheer economic reality, and/or mitigating potential guilt, let alone expressing our love. One can see serious limitations in applying a predictive theory that works nicely for utility-laden products and services to such a highly charged, identity-centric domain with so many non-utilitarian considerations. In certain domains perhaps people do not simply behave as homo economicus or purely rational beings.

In The Innovator’s Solution (Christensen 2003) the role of modularity versus interdependence was explained and incorporated into the original theory. Think of modularity as “plug and play.”  In high-utility domains, more profitable, interdependent architecture and components will ultimately lead to severe cases of creeping feature-itis where the consumer is over-served and the product too expensive.  In this case, the market is ripe for disruption that often comes in the form of less expensive, modular products and services that consumers readily accept.  iTunes successfully introduced modularity to the consumer who could now buy singles rather than an entire album to the dismay of most record industry executives and to the occasional artist protestation.  The interdependence created by having to buy 16 songs when you only really wanted four might have been highly profitable for the record companies but over-served the consumer at a cost substantially higher than purchasing the four singles.  No wonder, as the original theory neatly predicted, disruption in the music industry was fast and ugly.
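The over-serving arithmetic is easy to make concrete (the prices below are illustrative assumptions, not historical figures):

```python
# Toy arithmetic for the album-versus-singles example; prices are assumptions only.
album_price = 12.99      # assumed price of a 16-track album
single_price = 0.99      # assumed price of one iTunes single
songs_wanted = 4

a_la_carte = songs_wanted * single_price
print(f"Bundled album: ${album_price:.2f}   a la carte: ${a_la_carte:.2f}")
print(f"The bundled model over-serves the consumer by ${album_price - a_la_carte:.2f}.")
```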

In high-identity domains, however, products and services are almost always highly interdependent, and successful modular architecture is elusive. Even when the consumer is over-served, the price too expensive, and modular solutions "good enough," resistance is still encountered.  Is it really necessary for your high-priced physician to give you your physical in its entirety, i.e. to take your temperature, blood pressure, heart rate, and blood test?  These functions or modules would be much cheaper and good enough if handled by a nurse practitioner.  Yet your yearly medical examination remains more like an album. Unlike the a la carte modularity offered by iTunes, the "modularizing" of the annual physical, which would be a partial solution to the high cost and limited accessibility of health care, has not scaled as the original theory predicts.

Therefore, we are suggesting identity should be viewed as a formidable variable in predicting the success and scalability of disruptive innovations.  To date, the jobs-to-be-done theory has very successfully been applied to utility products and services that accomplish a single task or relatively simple set of tasks. We hire utility-centric products to satisfy an obvious need.   But we hire identity products and services to carry out multiple jobs, some of which, we would hypothesize, we are not even conscious of.  There are many more causal mechanisms involved in the decision to buy high-identity products or services than those with high utility. There are more stakeholders outside of the traditional value chain that exert an invisible gravitational force on our decision-making.  Identity-centric products tend to have interdependencies at the societal level, with deep roots and structures.

At the policy level, we must begin to study how the momentum of technological innovation and solutions can be impeded by societal inertia caused by the identity factor.  Technological architecture can no longer be viewed in isolation when designing or predicting the success of disruptive innovations; we will have to begin taking into account the deep structure and roots of societal architecture as well.  The utility-identity function might offer a new set of lenses in disruptive innovation theory.  If we are to develop a profound theory to solve the intractable problems in our societally-critical domains, most of which would appear to be heavily identity-centric, we must learn to crawl into the life of what makes people tick.

April 12, 2013
