• Tech Book of the Month
  • Archive
  • Recommend a Book
  • Choose The Next Book
  • Sign Up
  • About
  • Search
Tech Book of the Month

August 2023 - Capital Returns by Edward Chancellor

We dive into an investing book that covers the capital cycle. In summary, the best time to invest in a sector is when capital is leaving or has already left.

Tech Themes

  1. Amazon. Marathon understands that the world moves in cycles. During the internet bubble of the late 1990s, the firm refused to invest in speculative internet companies. “At the time, we were unable to justify the valuations of any of these companies, nor identify any which could safely say would still be going strong in years to come.” In August 2007, however, several years after the internet bubble burst, Marathon noticed Amazon again. Amazon’s stock had rebounded well from its 2001 lows and was roughly flat versus its May 1999 valuation. Sales had grown 10x since 1999, and while Marathon recognized the company’s reputation had been tarnished by the internet bubble, it was actually a very good business with a negative working capital cycle. On top of this, the reason the stock hadn’t performed well in the preceding few years was that Amazon was investing in two new long-term growth levers: Amazon Web Services and Fulfillment by Amazon. Marathon likely underestimated the potential of these businesses, but looking back now we know how exceptional these margin-lowering investments proved to be.

  2. Semis. Nothing paints a clearer picture of cyclicality than semiconductors. We can debate whether AI and Nvidia have moved us permanently out of a cycle, but up until 2023, semiconductors were considered cyclical. As Marathon notes: “Driven by Moore’s law, the semiconductor sector has achieved sustained and dramatic performance increases over the last 30 years, greatly benefiting productivity and the overall economy. Unfortunately, investors have not done so well. Since inception in 1994, the Philadelphia Semiconductor Index has underperformed the Nasdaq by around 200 percentage points, and exhibited greater volatility…In good times, prices pick up, companies increase capacity, and new entrants appear, generally from different parts of Asia (Japan in the 1970s, Korea in the 1980s, Taiwan in the mid-1990s, and China more recently). Excess capital entering at cyclical peaks has led to relatively poor aggregate industry returns.” As Fabricated Knowledge points out, the 1980s had two brutal semiconductor cycles. First, in 1981, the industry experienced severe overcapacity, leading to declining prices while inflation ravaged many businesses. Then in 1985, the US semiconductor business declined significantly. “1985 was a traumatic moment for Intel and the semiconductor industry. Intel had one of the largest layoffs in its history. National Semi had a 17% decrease in revenue but moved from an operating profit of $59 million to an operating loss of -$117 million. Even Texas Instruments had a brutal period of layoffs, as revenue shrank 14% and profits went negative”. The culprit was Japanese imports. Low-end chips had declined significantly in price as Japan flexed its labor cost advantage. All of the domestic US chip manufacturers complained (National Semiconductor, Texas Instruments, Micron, and Intel), leading to the 1986 US-Japan Semiconductor Agreement, which effectively capped Japanese market share at 20%.
Now, this was a time when semiconductor manufacturing wasn’t easy, but it was easier than today because it focused mainly on more commoditized memory chips. 1985 is an interesting example of the capital cycle compounding when geographic expansion overlaps with product overcapacity (as the US experienced). Marathon actually preferred Analog Devices when it published its thesis in February 2013, highlighting the complex production process of analog (physical) vs. digital chips, the complex engineering required to build analog chips, and the low-cost nature of the product. “These factors - a differentiated product and company specific “sticky” intellectual capital - reduce market contestability….Pricing power is further aided by the fact that an analog semiconductor chip typically plays a very important role in a product (for example, the air-bag crash sensor) but represents a very small proportion of the cost of materials. The average selling price for Linear Technology’s products is under $2.” Analog Devices would acquire Linear in 2017 for $14.8B, a nice coda to Marathon’s Analog/Linear dual pitch.

  3. Why do we have cycles? If everyone is playing the same business game and aware that markets come and go, why do we have cycles at all? Wouldn’t efficient markets pull us away from getting too hyped when the market is up and too sour when the market is down? Wrong. Chancellor gives a number of reasons why we have a capital cycle: Overconfidence, Competition Neglect, Inside View, Extrapolation, Skewed Incentives, Prisoner’s Dilemma, and Limits to Arbitrage. Overconfidence is somewhat straightforward - managers and investors look at companies and believe they are infallible. When times are booming, managers want to participate in the boom, increasing investment to match “demand.” In these decisions, they often don’t consider what their competitors are doing, but rather focus on themselves. Competition neglect takes hold as managers enjoy watching their stock tick up and their faces splashed across “Best CEO in America” lists. Inside View is a bit more nuanced, but Michael Mauboussin and Daniel Kahneman have written extensively on it. As Kahneman laid out in Thinking, Fast and Slow: “A remarkable aspect of your mental life is that you are rarely stumped … The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it.” When you take the inside view, you rely exclusively on your own experience rather than on other similar situations. Instead, you should take the outside view and assume your problem/opportunity/case is not unique. Extrapolation is an extremely common driver of cycles and could be seen all across the investing world after the recent COVID peak. Peloton, for example, massively over-ordered inventory, extrapolating pandemic-related demand trends.
Skewed incentives can include near-term EPS targets (encouraging buybacks and M&A), market share preservation (encouraging overinvestment), a low cost of capital (buy something with cheap debt), analyst expectations, and champion bias (you’ve decided to do something and it’s no longer attractive, but you do it anyway because you got people excited about it). The Prisoner’s Dilemma is also a form of market share preservation/expansion: when your competitor may be acting much more aggressively, you have to decide whether it’s worth the fight. Limits to Arbitrage is almost an extension of career risk, in that when everyone owns an overvalued market, you may actually hurt your firm by sitting out, even if doing so makes investment sense. That’s why many firms need to maintain low tracking error against indexes, which can naturally result in concentrations in the same stocks.
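The prisoner's dilemma dynamic behind overinvestment can be made concrete with a toy payoff matrix. All numbers below are invented for illustration, not from the book:

```python
# Toy 2x2 capacity game (all payoffs invented for illustration).
# Each firm chooses to "hold" or "expand" capacity; entries are
# (Firm A profit, Firm B profit) for (A's choice, B's choice).
payoffs = {
    ("hold",   "hold"):   (10, 10),  # discipline: both earn healthy returns
    ("hold",   "expand"): (3, 12),   # the expander steals share from the holder
    ("expand", "hold"):   (12, 3),
    ("expand", "expand"): (5, 5),    # overcapacity: everyone's returns fall
}

def best_response(their_choice):
    """Firm A's profit-maximizing choice given Firm B's choice."""
    return max(["hold", "expand"], key=lambda mine: payoffs[(mine, their_choice)][0])

# Whatever the rival does, expanding is individually rational...
print(best_response("hold"))    # expand
print(best_response("expand"))  # expand
# ...so both firms expand and end up at (5, 5) instead of (10, 10).
```

The point of the sketch: even when managers see the overcapacity coming, expanding dominates holding for each firm individually, so the industry lands in the low-return outcome anyway.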

Business Themes

[Figure: The Capital Cycle]
  1. Capital Cycle. The capital cycle has four stages: (1) new entrants are attracted by the prospect of high returns and investors are optimistic; (2) rising competition causes returns to fall below the cost of capital and share prices underperform; (3) business investment declines, the industry consolidates, firms exit, and investors turn pessimistic; (4) the improving supply side causes returns to rise above the cost of capital and share prices outperform. The capital cycle reveals how competitive forces and investment behavior create predictable patterns in industries over time. Picture it as a self-reinforcing loop where success breeds excess, and pain eventually leads to gain. Stage 1: The Siren Song - High returns in an industry attract capital like moths to a flame. Investors, seeing strong profits and growth, eagerly fund expansions and new entrants. Optimism reigns and valuations soar as everyone wants a piece of the apparent opportunity. Stage 2: Reality Bites - As new capacity comes online, competition intensifies. Prices fall as supply outpaces demand. Returns dip below the cost of capital, but capacity keeps coming – many projects started in good times are hard to stop. Share prices begin to reflect the deteriorating reality. Stage 3: The Great Cleansing - Pain finally drives action. Capital expenditure is slashed. Weaker players exit or get acquired. The industry consolidates as survivors battle for market share. Investors, now scarred, want nothing to do with the sector. Capacity starts to rationalize. Stage 4: Phoenix Rising - The supply-side healing during the downturn slowly improves industry economics. With fewer competitors and more disciplined capacity, returns rise above the cost of capital. Share prices recover as improved profitability becomes evident. But this very success plants the seeds for the next cycle. The genius of understanding this pattern is that it's perpetual - human nature and institutional incentives ensure it repeats. The key is recognizing which stage an industry is in, and having the courage to be contrarian when others are either too optimistic or too pessimistic.

  2. 7 signs of a bubble. Nothing gets people going more than Swedish banking in the 2008-09 financial crisis. Marathon called out its Seven Deadly Sins of banking in November 2009, using Handelsbanken as a positive reference and highlighting how it avoided the many pitfalls that laid waste to its peers. 1. Imprudent asset-liability mismatches on the balance sheet. If this sounds familiar, it’s because it’s the exact sin that took down Silicon Valley Bank earlier this year. As Greg Brown lays out here: “Like many banks, SVB’s liabilities were largely in the form of demand deposits; as such, these liabilities tend to be short term and far less sensitive to interest rate movement. By contrast, SVB’s assets took the form of more long-term bonds, such as U.S. Treasury securities and mortgage-backed securities. These assets tend to have a much longer maturity – the majority of SVB’s assets matured in 10 years or more – and as a result their prices are much more sensitive to interest rate changes. The mismatch, then, should be obvious: SVB was taking in cash via short-term demand deposits and investing these funds in longer-term financial instruments.” 2. Supporting asset-liability mismatches by clients. Here, Chancellor calls out foreign currency lending, whereby certain European banks would offer mortgages in Swiss francs to Hungarians buying houses in Hungary. Not only were these banks taking on currency risk, they were exposing their customers to it, and many didn’t hedge the risk appropriately. 3. Lending to “Can’t Pay, Won’t Pay” types. The financial crisis was filled with banks lending to subprime borrowers. 4. Reaching for growth in unfamiliar areas. As Marathon calls out, “A number of European banks have lost billions investing in US subprime CDOs, having foolishly relied on “experts” who told them these were riskless AAA rated credits.” 5. Engaging in off-balance sheet lending.
Many European banks maintained “Structured Investment Vehicles,” off-balance sheet funds holding CDOs and MBSs. At one point, it got so bad that Citigroup tried the friendship approach: “The news comes as a group of banks in the U.S. led by Citigroup Inc. are working to set up a $100 billion fund aimed at preventing SIVs from dumping assets in a fire sale that could trigger a wider fallout.” These SIVs held substantial risk but were relatively unknown to many investors. 6. Getting sucked into virtuous/vicious cycle dynamics. As many European banks looked for expansion, they turned to lending into the Baltic states. As more lenders got comfortable lending, GDP began to grow meaningfully, which attracted more aggressive lending. More banks got suckered into lending in the area so as not to miss out on the growth, not realizing that the growth was almost entirely debt-fueled. 7. Relying on the rearview mirror. Marathon points out how risk models tend to fail when the recent past has been glamorous. “In its 2007 annual report, Merrill Lynch reported a total risk exposure - based on ‘a 95 percent confidence interval and a one day holding period’ - of $157m. A year later, the Thundering Herd stumbled into a $30B loss!”
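Sin #1, the asset-liability mismatch, is ultimately just bond math. A minimal sketch with invented numbers (not SVB's actual balance sheet) shows why long-dated assets are so much more rate-sensitive than short-dated ones:

```python
# Price a fixed-coupon bond by discounting its cash flows, then compare
# how a rate rise hits a 2-year bond vs. a 10-year bond. Illustrative only.

def bond_price(face, coupon_rate, years, market_rate):
    """Present value of annual coupons plus the face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# Both bonds are worth par (100) when market rates equal the 2% coupon.
short_before = bond_price(100, 0.02, 2, 0.02)   # 100.0
long_before = bond_price(100, 0.02, 10, 0.02)   # 100.0

# Rates jump to 5%: the long bond loses far more value than the short one.
short_after = bond_price(100, 0.02, 2, 0.05)
long_after = bond_price(100, 0.02, 10, 0.05)
print(round(short_after, 1))  # 94.4 -> ~6% loss
print(round(long_after, 1))   # 76.8 -> ~23% loss
```

A bank funding ten-year assets like `long_after` with deposits that can leave tomorrow is exposed to exactly this gap when rates move.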

  3. Investing Countercyclically. Björn Wahlroos exemplified exceptional capital allocation skills as CEO of Sampo, a Finnish financial services group. His most notable moves included perfectly timing the sale of Nokia shares before their collapse, transforming Sampo's property & casualty insurance business into the highly profitable "If" venture, selling the company's Finnish retail banking business to Danske Bank at peak valuations just before the 2008 financial crisis, and then using that capital to build a significant stake in Nordea at deeply discounted prices. He also showed remarkable foresight by reducing equity exposure before the 2008 crisis and deploying capital into distressed commercial credit, generating €1.5 billion in gains. Several other CEOs have demonstrated similar capital allocation prowess. Henry Singleton at Teledyne was legendary for his counter-cyclical approach to capital allocation. He issued shares when valuations were high in the 1960s to fund acquisitions, then spent the 1970s and early 1980s buying back over 90% of Teledyne's shares at much lower prices, generating exceptional returns for shareholders. As we saw in Cable Cowboy, John Malone at TCI (later Liberty Media) was masterful at using financial engineering and tax-efficient structures to build value. He pioneered the use of spin-offs, tracking stocks, and complex deal structures to maximize shareholder returns while minimizing tax impacts. Tom Murphy at Capital Cities demonstrated exceptional discipline in acquiring media assets only when prices were attractive. His most famous move was purchasing ABC in 1985, then selling the combined company to Disney a decade later for a massive profit. 
Warren Buffett at Berkshire Hathaway has shown remarkable skill in capital allocation across multiple decades, particularly in knowing when to hold cash and when to deploy it aggressively during times of market stress, such as during the 2008 financial crisis when he made highly profitable investments in companies like Goldman Sachs and Bank of America. Jamie Dimon at JPMorgan Chase has also proven to be an astute capital allocator, particularly during crises. He guided JPMorgan through the 2008 financial crisis while acquiring Bear Stearns and Washington Mutual at fire-sale prices, significantly strengthening the bank's competitive position. D. Scott Patterson has shown excellent capital allocation skills at FirstService. He began leading FirstService following the spin-off of Colliers in 2015, and has compounded EBITDA in the high teens via strategic property management acquisitions coupled with large platforms like First OnSite and recently Roofing Corp of America. Another great capital allocator is Brad Jacobs. He has a storied career building rollups like United Waste Systems (acquired by USA Waste Services for $2.5B), United Rentals (now a $56B public company), XPO Logistics, which he separated into three public companies (XPO, GXO, RXO), and now QXO, his latest endeavor into the building products space. These leaders share common traits with Wahlroos: patience during bull markets, aggression during downturns, and the discipline to ignore market sentiment in favor of fundamental value. They demonstrate that superior capital allocation, while rare, can create enormous shareholder value over time.

    Dig Deeper

  • Handelsbanken: A Budgetless Banking Pioneer

  • ECB has created 'toxic environment' for banking, says Sampo & UPM chairman Bjorn Wahlroos

  • Edward Chancellor part 1: ‘intelligent contrarians’ should follow the capital cycle

  • Charlie Munger: Investing in Semiconductor Industry 2023

  • Amazon founder and CEO Jeff Bezos delivers graduation speech at Princeton University

tags: Amazon, Jeff Bezos, National Semiconductor, Intel, Moore's Law, Texas Instruments, Micron, Analog Devices, Michael Mauboussin, Daniel Kahneman, Peloton, Handelsbanken, Bjorn Wahlroos, Sampo, Henry Singleton, Teledyne, John Malone, D. Scott Patterson, Jamie Dimon, Tom Murphy, Warren Buffett, Brad Jacobs
categories: Non-Fiction
 

April 2021 - Innovator's Solution by Clayton Christensen and Michael Raynor

This month we take another look at disruptive innovation in the companion piece to Clayton Christensen’s Innovator’s Dilemma, our July 2020 book. The book crystallizes the types of disruptive innovation and provides frameworks for how incumbents can introduce or combat these innovations. The book was a pleasure to read and will serve as a great reference for the future.

Tech Themes

  1. Integration and Outsourcing. Today, technology companies rely on a variety of software tools and open source components to build their products. When you stitch all of these components together, you get the full product architecture. A great example is seen here with Gitlab, an SMB DevOps provider. They have Postgres for a relational database, Redis for caching, NGINX for request routing, Sentry for monitoring and error tracking, and so on. Each of these subsystems interacts with the others to form the powerful Gitlab project. These interaction points are called interfaces. The key product development question for companies is: “Which things do I build internally and which do I outsource?” A simple answer offered by many MBA students is “Outsource everything that is not part of your core competence.” As Clayton Christensen points out, “The problem with core-competence/not-your-core-competence categorization is that what might seem to be a non-core activity today might become an absolutely critical competence to have mastered in a proprietary way in the future, and vice versa.” A great example that we’ve discussed before is IBM’s decision to go with Microsoft DOS for its operating system and Intel for its microprocessor. At the time, IBM thought it was making a strategic decision to outsource things that were not within its core competence, but it inadvertently gave almost all of the industry profits from personal computing to Intel and Microsoft. Other competitors copied IBM’s modular approach, and the whole industry slugged it out on price. Whether to outsource really depends on what might be important in the future. But that is difficult to predict, so the question of integration vs. outsourcing comes down to the state of the product and market itself: is this product “not good enough” yet? If the answer is yes, then a proprietary, integrated architecture is likely needed just to make the actual product work for customers.
Over time, as competitors enter the market and the fully integrated platform becomes more commoditized, the individual subsystems become increasingly important competitive drivers. So the decision to outsource or build internally must be made based on the state of the product and the market it’s attacking.

  2. Commoditization within Stacks. The above point leads to the counterintuitive way companies fall into the commoditization trap. This happens through overshooting, where companies create products that are too good (who knew that doing your job really well could cause customers to leave!). Christensen describes this through the lens of a salesperson: “‘Why can’t they see that our product is better than the competition? They’re treating it like a commodity!’ This is evidence of overshooting…there is a performance surplus. Customers are happy to accept improved products, but unwilling to pay a premium price to get them.” At this point, the things demanded by customers flip - they become willing to pay premium prices for innovations along a new trajectory of performance, most likely speed, convenience, and customization. “The pressure of competing along this new trajectory of improvement forces a gradual evolution in product architectures, away from the interdependent, proprietary architectures that had the advantage in the not-good-enough era toward modular designs in the era of performance surplus. In a modular world, you can prosper by outsourcing or by supplying just one element.” This process of integration, to modularization, and back is super fascinating. As an example of modularization, let’s take the streaming company Confluent, the makers of the open-source software project Apache Kafka. Confluent offers a real-time communications service that allows companies to stream data (as events) rather than batching large data transfers. Their product is often a sub-system underpinning real-time applications, like providing data to traders at Citigroup. Clearly, the basis of competition in trading has pivoted over the years as more and more banking companies offer the service.
Companies are prioritizing a new axis, speed, to differentiate among competing services, and when speed is the basis of competition, you use Confluent and Kafka to beat out the competition. Now let’s fast forward five years and assume all banks use Kafka and Confluent for their traders; the modular sub-system is thus commoditized. What happens? I’d posit that the axis would shift again, maybe towards convenience or customization, where traders want specific info displayed on a mobile phone or tablet. The fundamental idea is that “Disruption and commoditization can be seen as two sides of the same coin. That’s because the process of commoditization initiates a reciprocal process of de-commoditization [somewhere else in the stack].”
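To make the batch-vs.-streaming distinction concrete, here is a minimal conceptual sketch in plain Python. This is not Confluent's or Kafka's actual API, and the trade data is made up; generators simply stand in for the two delivery models:

```python
# Two ways to deliver the same trades (illustrative data, not a real feed).
trades = [
    {"ticker": "AAPL", "price": 189.25},
    {"ticker": "MSFT", "price": 412.10},
    {"ticker": "NVDA", "price": 875.50},
]

def batch_delivery(records):
    """Batch model: the consumer waits until the whole payload is assembled."""
    return list(records)

def event_stream(records):
    """Streaming model: each trade is yielded (and consumable) as it occurs."""
    for record in records:
        yield record  # a Kafka-style system would publish this to a topic

# A trading dashboard can react to each event immediately,
# instead of waiting for the full batch to arrive:
for event in event_stream(trades):
    print(event["ticker"], event["price"])
```

The design difference is latency, not content: both paths deliver identical records, but the streaming consumer sees each one as soon as it exists, which is why speed-sensitive applications adopt the event model.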

  3. The Disruptive Becomes the Disrupted. Disruption is a relative term. As we’ve discussed previously, disruption is often mischaracterized as startups entering markets and challenging incumbents. Disruption is really a focused and contextual concept whereby products that are “not good enough” by market standards enter a market with a simpler, more convenient, or less expensive product. These products and markets are often dismissed by incumbents or even ceded by market leaders as those leaders continue to move up-market to chase even bigger customers. It’s fascinating to watch the disruptive become the disrupted. A great example would be department stores - initially, Macy’s offered a massive selection that couldn’t be found in any single store, and customers loved it. They did this by turning inventory three times per year at 40% gross margins, for a 120% return on capital invested in inventory. In the 1960s, Walmart and Kmart attacked the full-service department stores by offering a similar selection at much cheaper prices. They did this by setting up a value system whereby they made 23% gross margins but turned inventories 5 times per year, enabling them to earn the industry’s golden 120% return on capital invested in inventory. Full-service department stores decided not to compete against these lower gross margin products and shifted more space to beauty and cosmetics, which offered even higher gross margins (55%) than the 40% they were used to. This meant they could increase their return on capital invested in inventory and their profits while avoiding a competitive threat. This process continued, with discount stores eventually pushing Macy’s out of most categories until Macy’s had nowhere to go. All of a sudden the initially disruptive department stores had become disrupted. We see this in technology markets as well. I’m not 100% sure this qualifies, but think about Salesforce and Oracle.
Marc Benioff had spent a number of years at Oracle and left to start Salesforce, which pioneered selling subscription cloud software on a per-seat revenue model. This made it a much cheaper option compared to traditional Oracle/Siebel CRM software. Salesforce was initially adopted by smaller customers that didn’t need the feature-rich platform offered by Oracle. Oracle dismissed Salesforce as competition even as Oracle CEO Larry Ellison seeded Salesforce and sat on its board. Today, Salesforce is a $200B company and briefly passed Oracle in market cap a few months ago. But now Salesforce has raised its prices and mostly targets large enterprise buyers to hit its ambitious growth initiatives. Down-market competitors like HubSpot have come into the market with cheaper solutions and more fully integrated marketing tools to help smaller businesses that aren’t ready for a fully-featured Salesforce platform. Disruption is always contextual, and it never stops.
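The inventory economics in the department-store example reduce to a simple identity: annual return on capital invested in inventory is roughly gross margin times inventory turns. A quick sketch using the chapter's figures (note the discounters' 23% at 5 turns lands near, not exactly at, 120%):

```python
# Return on capital invested in inventory ~= gross margin x annual turns.
def return_on_inventory(gross_margin, turns_per_year):
    return gross_margin * turns_per_year

macys = return_on_inventory(0.40, 3)       # full-service: fat margin, slow turns
discounter = return_on_inventory(0.23, 5)  # discount model: thin margin, fast turns
print(f"{macys:.0%}")       # 120%
print(f"{discounter:.0%}")  # 115%
```

This is why the two business models could coexist for a while: radically different margin structures reached roughly the same return on inventory capital via different routes.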

Business Themes

  1. Low-End-Market vs. New-Market Disruption. There are two established types of disruption: low-end-market (down-market) and new-market. Low-end-market disruption seeks to establish performance that is “not good enough” along traditional lines, and targets overserved customers in the low end of the mainstream market. It typically utilizes a new operating or financial approach with structurally different margins than up-market competitors. Amazon.com is a quintessential low-end-market disruptor compared to traditional bookstores, offering prices so low they angered book publishers while offering customers the unmatched convenience of purchasing books online. In contrast, Robinhood is a great example of new-market disruption. Traditional discount brokerages like Charles Schwab and Fidelity had been around for a while (themselves disruptors of full-service models like Morgan Stanley Wealth Management). But Robinhood targeted a group of people that weren’t consuming in the market, namely teens and millennials, and did it in an easy-to-use app with a much better user interface than Schwab’s and Fidelity’s. Robinhood also pioneered new pricing with zero-fee trading and made revenue via a new financial approach, payment for order flow (PFOF). Under PFOF, market makers - large trading firms like Citadel Securities - pay Robinhood to route its customers’ orders to them, profiting off the bid-ask spread when they execute those trades. When approaching big markets it’s important to ask: Is this targeted at a non-consumer today, or am I competing at a structurally lower margin with a new financial model and a “not quite good enough” product? This determines whether you are providing a low-end-market disruption or a new-market disruption.

  2. Jobs To Be Done. The jobs to be done framework was one of the most important frameworks that Clayton Christensen ever introduced. Marketers typically use advertising platforms like Facebook and Google to target specific demographics with their ads. These segments are narrowly defined: “Males over 55, living in New York City, with household income above $100,000.” The issue with this categorization method is that while these attributes may be correlated with a product purchase, customers do not behave exactly as marketers expect and do not simply purchase the products predicted by their attributes. There may be a correlation, but simply targeting certain demographics does not yield a great result. Marketers need to understand why the customer is adopting the product. This is where the Jobs to Be Done framework comes in. As Christensen describes it, “Customers - people and companies - have ‘jobs’ that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can ‘hire’ to get the job done. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently, and inexpensively as possible.” Christensen zeroes in on the contextual adoption of products; it is the circumstance and not the demographics that matter most. Christensen describes ways for people to view competition and feature development through the Jobs to Be Done lens using Blackberry as an example (later disrupted by the iPhone). While the immature smartphone market was seeing feature competition from Microsoft, Motorola, and Nokia, Blackberry and its maker RIM came out with a simple-to-use device that allowed for short productivity bursts whenever time was available.
This meant they leaned into features that competed not with other smartphone providers (like better cellular reception), but rather enabled these easy “productive” sessions: email, Wall Street Journal updates, and simple games. The Blackberry was later disrupted by the iPhone, which offered more interesting applications in an easier-to-use package. Interestingly, the first iPhone shipped without an app store (as a proprietary, interdependent product) and was viewed as not good enough for work purposes, allowing the Blackberry to co-exist. RIM’s management even dismissed the iPhone as a competitor initially. It wasn’t long until the iPhone caught up and eventually surpassed the Blackberry as the world’s leading mobile phone.

  3. Brand Strategies. Companies may choose to address customers in a number of different circumstances and address a number of Jobs to Be Done. It’s important that a company establishes specific ways of communicating the circumstance to the customer. Branding is powerful, something that Warren Buffett, Terry Smith, and Clayton Christensen have all recognized as a durable growth driver. As Christensen puts it: “Brands are, at the beginning, hollow words into which marketers stuff meaning. If a brand’s meaning is positioned on a job to be done, then when the job arises in a customer’s life, he or she will remember the brand and hire the product. Customers pay significant premiums for brands that do a job well.” So what can a large corporate company do when faced with a disruptive challenger to its branding turf? It’s simple - add a word to its leading brand, targeted at the circumstance in which a customer might find themselves. Think about Marriott, one of the leading hotel chains. They offer a number of hotel brands: Courtyard by Marriott for business travel, Residence Inn by Marriott for a home away from home, The Ritz-Carlton for high-end luxurious stays, and Marriott Vacation Club for resort destinations. Each brand is targeted at a different Job to Be Done, and customers intuitively understand what the brands stand for based on experience or advertising. A great technology example is Amazon Web Services (AWS), the cloud computing division of Amazon.com. Amazon pioneered the cloud, and rather than launch with the Amazon.com brand, which might have confused its normal e-commerce customers, it created a completely new brand targeted at a different set of buyers and problems that maintained the quality and recognition Amazon had become known for. Another great retail example is the SNKRS app released by Nike.
Nike understands that some customers are sneakerheads and want to know the latest about all Nike shoe drops, so Nike created a distinct, branded app called SNKRS that gives news and updates on the latest, trendiest sneakers. These buyers might not be interested in logging into the Nike app, and may get frustrated sifting through all the different types of apparel Nike offers just to find new shoes. The SNKRS app gives this set of consumers an easy way to find what they are looking for (convenience), which benefits Nike’s core business. Branding is powerful, and understanding the Job to Be Done helps focus the right brand on the right job.

Dig Deeper

  • Clayton Christensen’s Overview on Disruptive Innovation

  • Jobs to Be Done: 4 Real-World Examples

  • A Peek Inside Marriott’s Marketing Strategy & Why It Works So Well

  • The Rise and Fall of Blackberry

  • Payment for Order Flow Overview

  • How Commoditization Happens

tags: Clayton Christensen, AWS, Nike, Amazon, Marriott, Warren Buffett, Terry Smith, Blackberry, RIM, Microsoft, Motorola, iPhone, Facebook, Google, Robinhood, Citadel, Schwab, Fidelity, Morgan Stanley, Oracle, Salesforce, Walmart, Macy's, Kmart, Confluent, Kafka, Citigroup, Intel, Gitlab, Redis
categories: Non-Fiction
 

January 2021 - Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages by Carlota Perez

This month we read Carlota Perez's understudied book covering the history of technology breakthroughs and revolutions. The book marries the roles of finance and technological breakthrough seamlessly, in an easy-to-digest narrative style.

Tech Themes

  1. The 5 Technology Revolutions. Perez identifies five major technological revolutions: The Industrial Revolution (1771-1829), The Age of Steam and Railways (1829-1873), The Age of Steel, Electricity and Heavy Engineering (1875-1918), The Age of Oil, the Automobile and Mass Production (1908-1974), and The Age of Information and Telecommunications (1971-Today). Looking back at these revolutions, one can recognize how powerful it is to view the world and technology in these incredibly long waves. Many of these periods lasted over fifty years while their geographic dispersion and economic effects fully came to fruition. These new technologies fundamentally alter society - and when it becomes clear that a revolution is happening, many people jump on the bandwagon. As Perez puts it, “The great clusters of talent come forth after the revolution is visible and because it is visible.” Each revolution produces a myriad of changes in society: the industrial revolution popularized factory production, railways created national markets, steel and electricity made modern construction possible, oil and cars created mass markets and assembly lines, and the microprocessor and internet created amazing companies like Amazon and Airbnb.

  2. The Phases of Technology Revolution. After a decently long gestation period, during which the old revolution has permeated across the world, the new revolution normally starts with a big bang - some discovery or breakthrough (like the transistor or the steam engine) that fundamentally pushes society into a new wave of innovation. Coupled with these big bangs is redefined infrastructure from prior eras - as an example, telegraph and phone wires were strung along the initial railways, whose rights-of-way offered long stretches of uninterrupted land to build on. Another example is electricity - initially, homes were wired to serve lightbulbs; only many years later did the great home appliances come into use. This initial period of application discovery is called the Irruption phase. Increasing interest in forming businesses then causes a Frenzy period, like the Railway Mania or the Dot-com Boom, where everyone thinks they can get rich quick by starting a business around the new revolution. As the first 20-30 years of a revolution play out, a strong divide grows between those who were part of the revolution and those who were not; there is an economic, social, and regulatory mismatch between the old guard and the new revolution. After an uprising (like the populism we have seen recently) and a bubble collapse (check your crystal ball), regulatory changes typically foster a harmonious future for the technology. Following these changes, we enter the Synergy phase, where technology can fully flourish under accommodating and clear regulation. The Synergy phase propagates outward across all countries until even the lagging adopters have started the adoption process. At this point the cycle enters Maturity, waiting for the next big advance to start the whole process over again.

  3. Where are we in the cycle today? We tweeted at Carlota Perez to answer this question AND SHE RESPONDED! My question to Perez was: with the recent wave of massive, transformational innovation like the public cloud providers and the iPhone, are we still in the Age of Information? These technological waves often last 50-60 years, and yet we've arguably been in the same age for quite a while. This wave started in 1971, exactly 50 years ago, with Intel and the creation of the microprocessor. Are we in the Frenzy phase, with record amounts of investment capital, enormous demand for early-stage companies, and new financial innovations like Affirm's debt securitizations? Or have we not gotten to the Frenzy phase yet? Is the public cloud or the iPhone the start of a new big bang, giving us overlapping revolutions for the first time ever? Obviously, identifying the truly breakthrough moments in technology history is much easier after the fact, so maybe it is too soon to know what really is a seminal moment. Perez's answer, though only a few words, puts the question in scope. Perez suggests we are still in the installation phase (Irruption and Frenzy) of the new technology, and that makes a lot of sense. Sure, internet usage is incredibly high in the US (96%), but not in other large countries. China (the world's largest country by population) has only 63% of its population using the internet, and India (the world's second-largest country) only 55%. Ethiopia, with a population of over 100M people, has only 18% using the internet. There is still a lot of runway left for the internet to bloom! In addition, only recently have people been equipped with a powerful computing device that fits in their pocket - and low-priced phones are now making their way to all parts of the world, led by firms like the Chinese giant Transsion.
Added to the fact that this revolution is not fully installed is the rise of populism, a political movement that seeks to mobilize ordinary people who feel disregarded by elites. Populism has reared its ugly head across many nations - the US (Donald Trump), the UK (Brexit), Brazil (Bolsonaro), and others. The rise of populism is fueled by the growing dichotomy between the elites who have benefitted socially and monetarily from the revolution and those who have not. In the 1890s, anti-railroad sentiment drove the creation of the Populist Party. More recently, people have become angry at tech giants (Facebook, Google, Amazon, Apple, Twitter) for unfair labor practices, psychological manipulation, and monopolistic tendencies. The recent movie The Social Dilemma, which suggests a more humane and regulation-focused approach to social media, speaks to the need for regulation of these massive companies. It is also incredibly ironic to watch a movie about how social media manipulates its users while streaming a film recommended to me by Netflix, a company that has popularized incessant binge-watching through UX manipulation not dissimilar to Facebook's and Google's tactics. I expect these companies to get regulated soon - and I hope that once that happens, we enter the Synergy phase, with growth and value accruing to all people.

Yes, I do. I will find the time to reply to you properly. But just quickly, I think installation was prolonged by QE & casino finance; we are at the turning point (the successful rise of populism is a sign) and maybe post-Covid we'll go into synergy.

— Carlota Perez (@CarlotaPrzPerez) January 17, 2021

Business Themes

  1. The Role of Financial Capital in Revolutions. As new technology revolutions play themselves out, financial capital appears right alongside technology developments, ready to mold the revolution into the phases Perez suggests. In the Irruption phase, as the new technology is taking hold, financial capital that had been sitting on the sidelines waiting out the Maturity phase of the previous revolution plows into new company formation and ideas. The financial sector tries to adopt the new technology as soon as possible (we are already seeing this with quantum computing), so it can then espouse the benefits to everyone it talks to, setting the stage for increasing financing opportunities. Eventually, demand for financing company creation goes crazy, and you enter a Frenzy phase. During this phase, there is a discrepancy between the value of financial capital and production capital - money used by companies to create actual products and services. Financial capital believes in unrealistic returns on investment, funding projects that don't make any sense. Perez notes: “In relation to the canal Mania of the 1790s, disorder and lack of coordination prevailed in investment decisions. Canals were built ‘with different widths and depths and much inefficient routing.’ According to Dan Roberts at the Financial Times, in 2001 it was estimated that only 1 to 2 percent of the fiber optic cable buried under Europe and the United States had so far been turned on.” These Frenzy phases create bubbles and further ingrain regulatory mismatch and political divide. Could we be in one now, with deals getting priced at 125x revenue for tiny companies? After the institutional reckoning, the technology revolution enters the Synergy phase, where production capital earns really strong returns on investment - the path of the technology is somewhat known, and real gains are to be made by continued investment (especially at more reasonable asset prices).
Production capital continues to go to good use until the technology revolution fully plays itself out, entering into the Maturity phase.

  2. Casino Finance and Prolonging Bubbles. One point Perez makes in her tweet is that the current bubble has been prolonged by QE and casino finance. Quantitative easing is a monetary policy in which the Federal Reserve (the US central bank) buys government bonds and other assets to inject money into the financial system, offering banks more liquidity. This process pushes interest rates down, which in turn pushes individuals and corporations to invest their money, because the rate of interest on savings accounts becomes extremely low. Following the financial crisis, and more recently COVID-19, the Federal Reserve lowered interest rates and started quantitative easing to help the hurting economy. In Perez's view, these actions have prolonged the Irruption and Frenzy phases because they force more money into investment opportunities. On top of quantitative easing, governments have allowed so-called casino capitalism - letting free-market ideals shape government policy (like Reagan's economic plan). Uninterrupted free markets are in theory economically efficient, but they can give rise to bad actors - like Enron's manipulation of California's energy markets after deregulation. Continual quantitative easing and deregulation allow speculative markets - like the collateralized debt obligations that built up before the financial crisis - to grow. This creates a risk-taking environment that can only end in a frenzy and a bubble.

  3. Synergy Phase and Productive Capital Allocation. Capital allocation has been called the most important part of being a great investor and business leader. Think about being the CEO of Coca-Cola for a second - you have thousands of competing projects vying for budget - how do you determine which ones get the most money? In the investing world, capital allocation is measured by conviction. As George Soros's famous quote goes: “It's not whether you're right or wrong, but how much money you make when you're right and how much you lose when you're wrong.” Clayton Christensen took the ideas of capital allocation and applied them to investments in one's own life, concluding: “Investments in relationships with friends and family need to be made long, long before you'll see any sign that they are paying off. If you defer investing your time and energy until you see that you need to, chances are it will already be too late.” Capital and time allocation are underappreciated concepts because they often seem abstract against the everyday humdrum of life. It is interesting to think about capital allocation within Perez's long-term framework. The obvious approach would be to identify the stage (Irruption, Frenzy, Synergy, Maturity) and make the appropriate time/money decisions - deploy capital into the Irruption phase, pull money out at the height of the Frenzy, buy as many companies as possible at the crash/turning point, hold through most of the Synergy, and sell at Maturity to identify the next Irruption phase. Although that would be fruitful, identifying market bottoms and tops is a fool's errand. However, according to Perez, the best returns on capital investment typically happen during the Synergy phase, when production capital (money employed by firms through investment in R&D) reigns supreme. During this time, the revolutionary applications of recently frenzied technology finally start to bear fruit.
They are typically poised to succeed by an accommodating regulatory and social environment. Unsurprisingly, after the diabolical grifting financiers of the frenzy phase are exposed (see Worldcom, the Great Financial Crisis, and Theranos), social pressure on regulators typically forces an agreement to fix the loopholes that let these manipulators take advantage of the system. After Enron, the Sarbanes-Oxley Act increased disclosure requirements and oversight of auditors. After the GFC, the Dodd-Frank Act mandated bank stress tests and introduced financial stability oversight. With the problems of the frenzy phase "fixed" for the time being, the social attitude toward innovation turns positive once again, and the returns to production capital start to outweigh those to financial capital, which is now reined in under the new rules. Suffice it to say, we are probably in the Frenzy phase in the technology world, with abundant capital chasing a limited set of venture opportunities and driving massive valuation increases for early-stage companies. This will change eventually, and as Warren Buffett says: “It's only when the tide goes out that you learn who's been swimming naked.” When the bubble does burst, regulation of big technology companies will usher in the best period of returns for investors and companies alike.

Dig Deeper

  • The Financial Instability Hypothesis: Capitalist Processes and the Behavior of the Economy

  • Bubbles, Golden Ages, and Tech Revolutions - a Podcast with Carlota Perez

  • Jeff Bezos: The electricity metaphor (2007)

  • Where Does Growth Come From? Clayton Christensen | Talks at Google

  • A Spectral Analysis of World GDP Dynamics: Kondratieff Waves, Kuznets Swings, Juglar and Kitchin Cycles in Global Economic Development, and the 2008–2009 Economic Crisis

tags: Telegraph, Steam Engine, Steel, Transistor, Intel, Railway Mania, Dot-com Boom, Carlota Perez, Affirm, Irruption, Frenzy, Synergy, Maturity, iPhone, Apple, China, Ethiopia, Theranos, Populism, Twitter, Netflix, Warren Buffett, George Soros, Quantum Computing, QE, Reagan, Enron, Clayton Christensen, Worldcom
categories: Non-Fiction
 

October 2020 - Working in Public: The Making and Maintenance of Open Source Software by Nadia Eghbal

This month we covered Nadia Eghbal's instant classic about open-source software. Open-source software has been around since the late seventies, but only recently has it gained significant public and business attention.

Tech Themes

The four types of open source communities described in Working in Public


  1. Misunderstood Communities. Open source is frequently viewed as an overwhelmingly positive force for good - taking software and making it free for everyone to use. Many think of open source as community-driven, with everyone participating to make the software better. The theory is that many eyeballs on and contributors to the software improve security and reliability and increase distribution. In reality, open-source communities follow the "90-9-1" rule and act more like social media than you might think. According to Wikipedia, the "90-9-1" rule states that, for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. To show how this applies to open source communities, Eghbal cites a study by North Carolina State researchers: “One study found that in more than 85% of open source projects the research examined on Github, less than 5% of developers were responsible for 95% of code and social interactions.” These creators, contributors, and maintainers are developer influencers: “Each of these developers commands a large audience of people who follow them personally; they have the attention of thousands of developers.” Unlike Instagram and Twitch influencers, who often actively try to build their audiences, open-source developer influencers sometimes find the attention off-putting - they simply published something to help others and suddenly found themselves with actual influence. The challenging truth of open source is that core contributors and maintainers give significant amounts of their time and attention to their communities - often spending hours at a time responding to pull requests (requests for changes / new features) on GitHub. Evan Czaplicki's insightful talk “The Hard Parts of Open Source” speaks to this challenging dynamic.
Evan created the open-source project Elm, a functional programming language that compiles to JavaScript, because he wanted to make functional programming more accessible to developers. As one of its core maintainers, he has repeatedly been hit with “Why don't you just…” requests from non-contributing developers angrily asking why a feature wasn't included in the latest release. As fastlane creator Felix Krause put it, “The bigger your project becomes, the harder it is to keep the innovation you had in the beginning of your project. Suddenly you have to consider hundreds of different use cases…Once you pass a few thousand active users, you'll notice that helping your users takes more time than actually working on your project. People submit all kinds of issues, most of them aren't actually issues, but feature requests or questions.” When you use open-source software, remember who is contributing to and maintaining it - and the days and years poured into the project for the sole goal of increasing its utility for the masses.
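The "90-9-1" split is easy to make concrete. Here is a toy calculation - the community size and the exact integer split are illustrative, not data from the book:

```python
# Toy illustration of the "90-9-1" rule for an open source
# community (sizes and exact percentages are illustrative).
def split_90_9_1(community_size: int) -> dict:
    """Rough 90/9/1 breakdown of a participation pyramid."""
    return {
        "creators": community_size * 1 // 100,   # ~1% write new code
        "editors": community_size * 9 // 100,    # ~9% modify or review it
        "lurkers": community_size * 90 // 100,   # ~90% only consume
    }

print(split_90_9_1(10_000))
# {'creators': 100, 'editors': 900, 'lurkers': 9000}
```

Eghbal's cited study suggests the real skew in code contribution is even sharper than 1% - closer to 5% of developers producing 95% of code.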

  2. Git it? Git was created by Linus Torvalds in 2005. We talked about Torvalds last month; he also created the most famous open-source operating system, Linux. Git was born in response to a skirmish with Larry McVoy, head of the company behind the proprietary tool BitKeeper, over the potential misuse of his product. Torvalds went on vacation for a week and hammered out what is now the dominant version control system - git. Version control systems allow developers to work simultaneously on projects, committing changes to a centralized branch of code. They also allow changes to be rolled back to earlier versions, which can be enormously helpful if a bug is found in the main branch. Git ushered in a new wave of version control, but the open-source tool was somewhat difficult for the untrained developer to use. Enter GitHub and GitLab - two companies built around making the git version control system easier for developers to use. GitHub came first, launching in 2008 with a platform to host and share projects. The GitHub platform was free, but not open source - developers couldn't build onto its hosting platform, only use it. GitLab started in 2014 to offer an alternative, fully open-source platform that lets organizations self-host a GitHub-like program, providing improved security and control. Because of GitHub's first-mover advantage, however, it has become the dominant platform upon which developers build: “Github is still by far the dominant market player: while it's hard to find public numbers on GitLab's adoption, its website claims more than 100,000 organizations use its product, whereas GitHub claims more than 2.9 million organizations.” Developers find GitHub incredibly easy to use, creating an enormous wave of open source projects and code-sharing. The company added 10 million new users in 2019 alone - bringing the total to over 40 million worldwide. This growth prompted Microsoft to buy GitHub in 2018 for $7.5B.
We are in the early stages of this development explosion, and it will be interesting to see how increased code accessibility changes the world over the next ten years.
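The two core operations described above - commit a snapshot, then roll back to any earlier one - can be sketched as a toy data structure. This is a deliberate simplification: real git stores content-addressed objects and supports distributed branching and merging, none of which is modeled here:

```python
# Minimal toy version-control history: a linear list of commits
# with checkout/rollback. Only illustrates commit + revert; real
# git is content-addressed and supports branching and merging.
class ToyRepo:
    def __init__(self):
        self.commits = []  # each commit: (message, snapshot of files)

    def commit(self, message: str, files: dict) -> int:
        """Record a snapshot; the 'commit id' is just its index."""
        self.commits.append((message, dict(files)))
        return len(self.commits) - 1

    def checkout(self, commit_id: int) -> dict:
        """Recover the file snapshot at any earlier commit."""
        return dict(self.commits[commit_id][1])

repo = ToyRepo()
first = repo.commit("initial", {"main.py": "print('v1')"})
repo.commit("add feature", {"main.py": "print('v2')"})
# A bug shipped in v2? Roll back to the first snapshot:
print(repo.checkout(first))  # {'main.py': "print('v1')"}
```

The ability to recover any earlier snapshot is exactly what makes a buggy main branch a recoverable problem rather than a catastrophe.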

  3. Developing and Maintaining an Ecosystem Forever. Open source communities are unique and complex, with different user and contributor dynamics. Eghbal segments open source communities into four buckets - federations, clubs, stadiums, and toys - characterized below in a two-by-two matrix based on contributor growth and user growth. Federations are the pinnacle of open source software development - many contributors and many users, creating a vibrant ecosystem of innovative development. Clubs represent more niche and focused communities, including vertical-specific tools like the astronomy package Astropy. Stadiums are highly centralized but large communities - typically only a few contributors but a significant user base. It is up to these core contributors to lead the ecosystem, as opposed to decentralized federations, which have so many contributors they can go in all directions. Lastly, there are toys, which have low user growth and low contributor growth but may still be very useful projects. Interestingly, projects can shift in and out of these community types as they become more or less relevant. For example, developers at Yahoo open-sourced their Hadoop project, based on Google's File System and MapReduce papers. The project slowly became huge, moving from a stadium to a federation, with related projects like Apache Spark forming around it. What's interesting is that projects mature and change, and code can remain in production for years after a project's day in the spotlight has passed. According to Eghbal, “Some of the oldest code ever written is still running in production today. Fortran, which was first developed in 1957 at IBM, is still widely used in aerospace, weather forecasting, and other computational industries.” These ecosystems can exist forever, but their costs (creation, distribution, and maintenance) are often hidden - especially the maintenance.
The cost of creation and distribution has dropped significantly in the past ten years - with many of the world's developers all working in the same ecosystem on GitHub - but that has also increased the total cost of maintenance, and that cost can be significant. Bootstrap co-creator Jacob Thornton likens maintenance to caring for an old dog: “I've created endlessly more and more projects that have now turned [from puppies] into dogs. Almost every project I release will get 2,000, 3,000 watchers, which is enough to have this guilt, which is essentially like ‘I need to maintain this, I need to take care of this dog.’” Communities change from toys to clubs to stadiums to federations, but they may also change back as new tools are developed. Old projects still need to be maintained, and that maintenance comes down to committed developers.
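Eghbal's four buckets fall directly out of the two growth axes. A minimal sketch - treating each axis as a simple high/low flag is my assumption; the book's framing is qualitative, not threshold-based:

```python
def classify_community(high_contributor_growth: bool, high_user_growth: bool) -> str:
    """Map Eghbal's two axes onto her four community types."""
    if high_contributor_growth and high_user_growth:
        return "federation"  # many contributors, many users (e.g., Hadoop at its peak)
    if high_contributor_growth:
        return "club"        # niche: contributors roughly are the users (e.g., Astropy)
    if high_user_growth:
        return "stadium"     # few core contributors, large audience
    return "toy"             # small on both axes, but possibly still useful

print(classify_community(False, True))  # stadium
```

The Hadoop example in the text is then just a transition in this lookup: user growth came first (stadium), and contributor growth later pushed it into federation territory.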

Business Themes

  1. Revenue Model Matching. One of the earliest code-hosting platforms was SourceForge, founded in 1999. The company pioneered the idea of code hosting - letting developers publish their code for easy download. It became famous for letting open-source developers use the platform free of charge. SourceForge was created by VA Software, an internet-bubble darling that saw its stock price decimated when the bubble finally burst. The challenge with scaling SourceForge was a revenue model mismatch - VA Software made money through paid advertising, which allowed it to offer its tools to developers for free but left its revenue highly variable. When the company went public, it was still a small and unproven business, posting $17M in revenue against $31M in costs. The revenue model mismatch is rearing its head again, with the traditional software-as-a-service (SaaS) recurring subscription model catching some heat. Many cloud service and API companies now price by usage rather than a fixed, high-margin subscription fee. This is the classic electric utility model - you only pay for what you use. Snowflake CEO Frank Slootman (who formerly ran SaaS pioneer ServiceNow) commented: “I also did not like SaaS that much as a business model, felt it not equitable for customers.” Snowflake instead charges for usage through credits. The issue with usage-based billing has traditionally been price transparency, which can be obscured by customer credit systems and hard-to-calculate pricing, as with Amazon Web Services. The revenue model mismatch was just one problem for SourceForge: as git became the dominant version control system, SourceForge was reluctant to support it, opting for its traditional tools instead. Pricing norms change and new technology comes out every day; it's imperative that businesses have a strong grasp of the value they provide to their customers and align their revenue model with that value, so a fair trade-off is created.
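The trade-off Slootman describes can be sketched numerically. The flat fee and per-credit rate below are made-up numbers for illustration, not Snowflake's actual pricing:

```python
# Hypothetical comparison of flat-subscription vs usage-based billing.
FLAT_MONTHLY_FEE = 2_000.00   # fixed SaaS subscription (assumed figure)
PRICE_PER_CREDIT = 2.50       # usage-based rate per credit (assumed figure)

def monthly_cost(credits_used: float, model: str) -> float:
    if model == "subscription":
        return FLAT_MONTHLY_FEE                # same bill regardless of usage
    if model == "usage":
        return credits_used * PRICE_PER_CREDIT  # pay only for what you use
    raise ValueError(f"unknown model: {model}")

# A light user pays far less under usage pricing...
print(monthly_cost(100, "usage"), monthly_cost(100, "subscription"))  # 250.0 2000.0
# ...while a heavy user may pay more - the "equitable" trade-off.
print(monthly_cost(1_500, "usage"))  # 3750.0
```

This is also why price transparency matters so much under the usage model: the customer's bill is a function of consumption they must be able to predict.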

  2. Open Core Model. There has been enormous growth in open source businesses in the past few years, most of which operate on an open core model. In the open core model, a company offers a free, normally feature-limited, version of its software alongside a proprietary enterprise version with additional features. Developers might adopt the free version but hit usage limits or feature constraints, prompting them to purchase the paid version. The open-source "core" is often just that - freely available for anyone to download and modify; the core's source code is normally published on GitHub, and developers can fork the project or do whatever they wish with it. The commercial product is normally closed source and not available for modification, giving the business a product to sell. Joseph Jacks, who runs Open Source Software (OSS) Capital, an investment firm focused on open source, lays out four types of open core business models (pictured above), which differ in how much of the software is open source. GitHub, interestingly, employs the "thick" model of being mostly proprietary, with only 10% of its software truly open-sourced. It's funny that the site that hosts and facilitates the most open source development is itself proprietary. Jacks nails the most important question in the open core model: "How much stays open vs. how much stays closed?" The consequences can be dire for a business - open source too much and suddenly other companies can quickly recreate your tool. Many DevOps tool makers have experienced the perils of open source, with some companies losing control of the projects they were supposed to facilitate. On the flip side, keeping more of the software closed source goes against the open-source ethos and can be viewed as selling out.
The continuous delivery pipeline project Jenkins has struggled to satisfy its growing user base, leading its leadership at CloudBees, the company behind Jenkins, to publish a blog post entitled “Shifting Gears”: “But at the same time, the incremental, autonomous nature of our community made us demonstrably unable to solve certain kinds of problems. And after 10+ years, these unsolved problems are getting more pronounced, and they are taking a toll — segments of users correctly feel that the community doesn't get them, because we have shown an inability to address some of their greatest difficulties in using Jenkins. And I know some of those problems, such as service instability, matter to all of us.” Striking this balance is incredibly tough, especially in a world of competing projects and finite development time and money in a commercial setting. Furthermore, large companies like AWS are taking open core tools like Elastic and MongoDB and recreating them as proprietary services (Elasticsearch Service and DocumentDB), prompting those companies' CEOs to appropriately lash out. Commercializing open source software is a never-ending battle against proprietary players and yourself.

  3. Compensation for Open Source. Eghbal characterizes two types of open-source funders: institutions (companies, governments, universities) and individuals (usually developers who are direct users). Companies like to fund improved code quality, influence, and access to core projects. The largest contributors to open source projects are mainly corporations like Microsoft, Google, Red Hat, IBM, and Intel. These corporations are big enough and profitable enough to hire individual developers and let them strike a comfortable balance between time spent on commercial software and time spent on open source. This also functions as marketing for the big corporations; big companies like having influencer developers on payroll to get the company's name out into the ecosystem. Evan You, who authored Vue.js, a JavaScript framework, described company-backed open-source projects: “The thing about company-backed open-source projects is that in a lot of cases… they want to make it sort of an open standard for a certain industry, or sometimes they simply open-source it to serve as some sort of publicity improvement to help with recruiting… If this project no longer serves that purpose, then most companies will probably just cut it, or (in other terms) just give it to the community and let the community drive it.” In contrast to company-funded projects, developer-funded projects are often donation-based. With the rise of online payment tools like Stripe and Patreon, more and more funding is being directed to individual open source developers. Unfortunately, it is still hard for many developers to sustain open source work on individual contributions, especially if they work on multiple projects at the same time.
Open source developer Sindre Sorhus explains: “It’s a lot harder to attract company sponsors when you maintain a lot of projects of varying sizes instead of just one large popular project like Babel, even if many of those projects are the backbone of the Node.js ecosystem.” Whether working in a company or as an individual developer, building and maintaining open source software takes significant time and effort and rarely leads to significant monetary compensation.

Dig Deeper

  • List of Commercial Open Source Software Businesses by OSS Capital

  • How to Build an Open Source Business by Peter Levine (General Partner at Andreessen Horowitz)

  • The Mind Behind Linux (a talk by Linus Torvalds)

  • What is open source - a blog post by Red Hat

  • Why Open Source is Hard by PHP Developer Jose Diaz Gonzalez

  • The Complicated Economy of Open Source

tags: Github, Gitlab, Google, Twitch, Instagram, Elm, Javascript, Open Source, Git, Linus Torvalds, Linux, Microsoft, MapReduce, IBM, Fortran, Node, Vue, SourceForge, VA Software, Snowflake, Frank Slootman, ServiceNow, SaaS, AWS, DevOps, CloudBees, Jenkins, Intel, Red Hat
categories: Non-Fiction
 

January 2020 - The Innovators by Walter Isaacson

Isaacson presents a comprehensive history of modern day technology, from Ada Lovelace to Larry Page. He weaves in intricate detail around the development of the computer, which provides the landscape on which all the major players of technological history wander.

Tech Themes

  1. Computing Before the Computer. In the summer of 1843, Ada Lovelace, daughter of the poet Lord Byron, wrote the first computer program, detailing a way of repeatedly computing Bernoulli numbers. Lovelace had been working with Charles Babbage, an English mathematician who had conceived of an Analytical Engine that could serve as a general-purpose arithmetic logic unit. Originally, Babbage thought his machine would only be used for computing complex mathematical problems, but Ada had a bigger vision. Well educated and artistic like her father, she saw that the general-purpose nature of the Analytical Engine could make it an incredible new technology, even hypothesizing, “Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and musical composition were susceptible to such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity.” 176 years later, in 2019, OpenAI released a deep neural network that produces four-minute musical compositions with ten different instruments.

  2. The Government, Education and Technology. Babbage had suggested using punch cards for computers, but Herman Hollerith, an employee of the U.S. Census Bureau, was the first to successfully implement them. Hollerith was frustrated that tabulating the decennial census took eight years. With his new punch cards, designed to analyze combinations of traits, it took only one. In 1924, after a series of mergers, the company Hollerith founded became IBM. This was the first involvement of the US government with computers. Next came educational institutions, namely MIT, where by 1931 Vannevar Bush had built a Differential Analyzer, the world’s first analog electric computing machine. This machine would be copied by the U.S. Army, the University of Pennsylvania, Manchester University, and Cambridge University, and iterated on until the creation of the Electronic Numerical Integrator and Computer (ENIAC), which firmly established a digital future for computing machines. With World War II as a motivator, the invention of the computer was driven forward by academic institutions and the government.
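
Lovelace’s program in theme 1 tabulated Bernoulli numbers step by step. A minimal modern sketch of the same computation — using the standard recurrence sum of C(m+1, j)·B_j = 0 (with B_0 = 1), my choice of method rather than a transcription of her actual note:

```python
from fractions import Fraction
from math import comb


def bernoulli_numbers(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions, using the
    convention B_1 = -1/2."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        # Solve sum_{j=0}^{m} C(m+1, j) * B_j = 0 for B_m.
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B


print(bernoulli_numbers(8))
# B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30
```

The odd-indexed values beyond B_1 come out to zero, which falls out of the recurrence rather than being special-cased.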

Business Themes

  1. Massive Technological Change is Slow. Large technological change almost always feels sudden, but it rarely is. Often, new technological developments are relegated to small communities, like the Homebrew Computer Club, where Steve Wozniak handed out mock-ups for the Apple computer, the first to map a keyboard to a screen for input. The development of the transistor (1947) preceded the creation of the microchip (1958) by eleven years. The general-purpose chip, a.k.a. the microprocessor, popped up thirteen years after that (1971), when Intel introduced the 4004 to the business world. This phenomenon was also true of the internet. Packet switching was first conceived in the early 1960s by Paul Baran while he was at the RAND Corporation. The Transmission Control Protocol and Internet Protocol were created about a decade later (1974) by Vint Cerf and Bob Kahn. The HyperText Transfer Protocol (HTTP) and the HyperText Markup Language (HTML) were created sixteen years after that, in 1990, by Tim Berners-Lee. The internet wasn’t in widespread use until after 2000. Introductions of new technologies often seem sudden, but they frequently build on technologies of the past and involve a corresponding change that addresses the limiting factor of a previous technology. What does that mean for cloud computing, containers, and blockchain? We are probably earlier in the innovation cycle than we can imagine today. Business does not always lag the innovation cycle, but it is normally the ending point in a series of innovations.

  2. Teams are Everything. Revolution and change happen through the iteration of ideas in collaborative processes. History provides a lot of interesting lessons when it comes to technology transformation: teams with diverse backgrounds, complementary styles, and a mix of visionary and operating capabilities executed best. As Isaacson notes: “Bell Labs was a classic example. In its long corridors in suburban New Jersey, there were theoretical physicists, experimentalists, material scientists, engineers, a few businessmen, and even some telephone pole climbers with grease under their fingernails.” Bell Labs created the first transistor, the semiconductor device that would be the foundation of Intel’s chips, where Bob Noyce and Gordon Moore (yes – Moore’s Law) would provide the vision and Andy Grove would provide the focus.
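
The layering described in theme 1 — HTTP (1990) riding on TCP (1974), which rides on packet-switched IP — is visible in a few lines of code. A toy sketch, loopback only; the one-shot server and its canned response are made up for illustration:

```python
import socket
import threading


def one_shot_server(srv):
    # Accept a single TCP connection and answer any request
    # with a fixed HTTP/1.0 response.
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    body = "<html>hello</html>"
    resp = f"HTTP/1.0 200 OK\r\nContent-Length: {len(body)}\r\n\r\n{body}"
    conn.sendall(resp.encode())
    conn.close()


# TCP layer: a listening socket on an ephemeral loopback port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=one_shot_server, args=(srv,))
t.start()

# HTTP layer: plain text framed by \r\n, carried over the TCP stream.
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
data = b""
while True:
    chunk = cli.recv(4096)
    if not chunk:
        break
    data += chunk
cli.close()
t.join()
srv.close()

print(data.decode().split("\r\n")[0])  # HTTP/1.0 200 OK
```

Each layer knows nothing about the one above it: the sockets move opaque bytes, and HTTP is just a text convention on top — which is why the web could appear sixteen years after TCP without changing it.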

Dig Deeper

  • Alan Turing and the Turing Machine

  • The Deal that Ruined IBM and Catapulted Microsoft

  • Grace Hopper and the First Compiler

  • ARPANET and the Birth of the Internet

tags: IBM, Microsoft, Moore's Law, Apple, Alan Turing, OpenAI, Cloud Computing, Bell Labs, Intel, MIT, Ada Lovelace, batch2
categories: Non-Fiction
 

April 2019 - Only the Paranoid Survive by Andrew S. Grove

This book details how to manage a company through complex industry change. It is incredibly prescient and a great management book.

Tech Themes

  1. The decoupling of hardware and software. In the early days of personal computers (the 1980s), the hardware and software were both provided by the same company. This was complete vertical alignment, similar to what we’ve discussed before with Apple. The major providers of the day were IBM, Digital Equipment Corporation (DEC, acquired by Compaq, which was acquired by HP), Sperry Univac, and Wang. When you bought a PC, the sales and distribution, application software, operating system, and chips were all handled by the same company. This created extreme vendor lock-in because each PC had different and complicated ways of operating. Customers typically stayed with the same vendor for years to avoid the headache of learning a new system. Over time, driven by increases in memory efficiency and the rise of Intel (where Andy Grove was employee #3), the PC industry began to shift to a horizontal model. In this model, retail stores (Micro Center, Best Buy, etc.) provided sales and distribution, dedicated software companies provided applications (Apple at the time, Microsoft, Mosaic, etc.), Intel provided the chips, and Microsoft provided the operating system (MS-DOS, then Windows). This decoupling produced a more customized computer at significantly lower cost and became the dominant purchasing model going forward. Dell was the first to really capitalize on this trend.

  2. Microprocessors and memory chips. Intel started in 1968 and was the first to market with a microchip that could be used to store computer memory. Demand was strong because it was the first of its kind, and Intel significantly ramped up production to satisfy that demand. By the early eighties, it was a computing powerhouse, and the name Intel was synonymous with computer memory. In the mid-eighties, Japanese memory producers began to appear on the scene, able to produce higher-quality chips at a lower cost. At first, Intel saw these producers as a healthy backup plan for when demand exceeded Intel’s supply, but over time it became clear it was losing market share. Intel saw this commoditization and decided to pivot out of the memory business and into the newer, less competitive microprocessor business. The microprocessor (or CPU) handles the execution of tasks within the computer, while memory simply stores the byproduct of that execution. As memory became easier to produce, its cost dropped dramatically and the business grew more competitive, with producers consistently undercutting each other to win deals. On the other hand, microprocessors became increasingly important as the internet grew, applications became more complex, and computer speed became a top selling point.

  3. Mainframes to PCs. IBM had become the biggest technology company in the world on the back of mainframes: massive, powerful, inflexible, and expensive mega-computers. As the computing industry began to shift to PCs and move from a vertical alignment to a horizontal one, IBM was caught flat-footed. In 1981, IBM chose Intel to provide the microprocessor for its PC, which led to Intel becoming the most widely accepted supplier of microprocessors. The industry followed volume: manufacturers focused on producing on top of Intel architecture, developers focused on developing for the best operating system (Microsoft Windows), and over time Intel and Microsoft encroached on IBM’s turf. Grove’s reasoning for this is simple: “IBM was composed of a group of people who had won time and time again, decade after decade, in the battle among vertical computer players. So when the industry changed, they attempted to use the same type of thinking regarding product development and competitiveness that had worked so well in the past.” Just because a company has been successful before doesn’t mean it will be successful again when change occurs.

The six forces acting on a business at any time. When one becomes outsized, it can represent a strategic inflection point to the business.


Business Themes

  1. Strategic Inflection Points and 10x forces. A strategic inflection point is a fundamental shift in a business due to industry dynamics. Examples of well-known shifts include mainframes to PCs, vertical computer production to horizontal production, on-premise hardware to the cloud, shrink-wrapped software to SaaS, and physical retail to e-commerce. These strategic inflection points are caused by 10x forces, which represent the underlying shift in technology or demand behind the inflection point. Deriving from Porter’s five forces model, these forces can affect your current competitors, complementors, customers, suppliers, potential competitors, and substitutes. For Intel, the 10x force came from its Japanese competitors, which could produce better-quality memories at a substantially lower cost. Recognizing these inflection points can be difficult and takes place over time in stages. Grove describes it best: “First, there is a troubling sense that something is different. Things don’t work the way they used to. Customers’ attitudes toward you are different. The trade shows seem weird. Then there is a growing dissonance between what your company thinks it is doing and what is actually happening inside the bowels of the organization. Such misalignment between corporate statements and operational actions hints at more than the normal chaos that you have learned to live with. Eventually, a new framework, a new set of understandings, a new set of actions emerges…working your way through a strategic inflection point is like venturing into what I call the valley of death.”

  2. The bottom-up, top-down way to “let chaos reign.” The way to respond to a strategic inflection point is through experimentation. As Grove says, “Loosen up the level of control that your organization normally is accustomed to. Let people try different techniques, review different products. Only stepping out of the old ruts will bring new insights.” This idea was also recently discussed by Jeff Bezos in his annual shareholder letter, where he likened it to wandering: “Sometimes (often actually) in business, you do know where you’re going, and when you do, you can be efficient. Put in place a plan and execute. In contrast, wandering in business is not efficient … but it’s also not random. It’s guided – by hunch, gut, intuition, curiosity, and powered by a deep conviction that the prize for customers is big enough that it’s worth being a little messy and tangential to find our way there. Wandering is an essential counter-balance to efficiency. You need to employ both. The outsized discoveries – the “non-linear” ones – are highly likely to require wandering.” When faced with mounting evidence that things are changing, begin the process of strategic wandering. This needs to be coupled with bottom-up action from the middle managers who are exposed to the underlying industry and technology change on a day-to-day basis. Strategic wandering, reinforced with the buy-in and action of middle management, can produce major advances, as was the case with Amazon Web Services.

  3. Traversing the valley of death. The first task in traversing a strategic inflection point is to create a clear, explainable mental image of what the business looks like on the other side. This becomes your new focus and the company’s mantra. For Intel, in 1986, it was “Intel, the microcomputer company.” This phrase did two things: it broke the previous synonymy of Intel with memory and signaled internally a new focus on microprocessors. Next, the company should redeploy its best resources against its biggest problems, the CEO included. Grove described this process as “going back to school”: he met with managers and engineers and grilled them with questions to fully understand the state and potential of the inflection point. Once the new direction is decided, the company should focus all of its efforts in one direction without hedging. While it may feel comfortable to hedge, it signals an unclear direction and can be incredibly expensive.

Dig Deeper

  • Mapping strategic inflection points to product lifecycles

  • Review of grocery strategic inflection points by Coca-cola

  • Strategic inflection point for Kimberly Clark in the paper industry: “Sell the Mills”

  • Andy Grove survived the Nazi and Communist regimes of Hungary

  • Is Facebook at a strategic inflection point?

tags: Andy Grove, Intel, Chips, hardware, Amazon, Jeff Bezos, Strategic inflection point, 10x force, software, batch2
categories: Non-Fiction
 
