Tech Book of the Month

August 2023 - Capital Returns by Edward Chancellor

We dive into an investing book that covers the capital cycle. In short, the best time to invest in a sector is when capital is leaving or has already left.

Tech Themes

  1. Amazon. Marathon understands that the world moves in cycles. During the internet bubble of the late 1990s, the firm refused to invest in speculative internet companies: “At the time, we were unable to justify the valuations of any of these companies, nor identify any which could safely say would still be going strong in years to come.” In August 2007, however, several years after the bubble burst, they took another look at Amazon. Amazon’s stock had rebounded well from its 2001 lows and was roughly flat against its May 1999 valuation. Sales had grown 10x since 1999, and while the company carried a tarnished reputation from the internet bubble, it was actually a very good business with a negative working capital cycle. On top of this, the reason the stock had underperformed in the prior few years was that Amazon was investing in two new long-term growth levers: Amazon Web Services and Fulfillment by Amazon. Marathon surely underestimated the potential of these businesses, but looking back now, we know how exceptional these margin-lowering investments were at the time.

  2. Semis. Nothing paints a clearer picture of cyclicality than semiconductors. We can debate whether AI and Nvidia have moved us permanently out of a cycle, but up until 2023, semiconductors were considered cyclical. As Marathon notes: “Driven by Moore’s law, the semiconductor sector has achieved sustained and dramatic performance increases over the last 30 years, greatly benefiting productivity and the overall economy. Unfortunately, investors have not done so well. Since inception in 1994, the Philadelphia Semiconductor Index has underperformed the Nasdaq by around 200 percentage points, and exhibited greater volatility…In good times, prices pick up, companies increase capacity, and new entrants appear, generally from different parts of Asia (Japan in the 1970s, Korea in the 1980s, Taiwan in the mid-1990s, and China more recently). Excess capital entering at cyclical peaks has led to relatively poor aggregate industry returns.” As Fabricated Knowledge points out, the 1980s had two brutal semiconductor cycles. First, in 1981, the industry experienced severe overcapacity, leading to declining prices while inflation ravaged many businesses. Then in 1985, the US semiconductor business declined significantly. “1985 was a traumatic moment for Intel and the semiconductor industry. Intel had one of the largest layoffs in its history. National Semi had a 17% decrease in revenue but moved from an operating profit of $59 million to an operating loss of -$117 million. Even Texas Instruments had a brutal period of layoffs, as revenue shrank 14% and profits went negative.” The culprit was Japanese imports. Low-end chips had declined significantly in price as Japan flexed its labor cost advantage. The domestic US chip manufacturers (National Semiconductor, Texas Instruments, Micron, and Intel) complained, leading to the 1986 US-Japan Semiconductor Agreement, which effectively capped Japanese market share at 20%.
Now, this was a time when semiconductor manufacturing wasn’t easy, but it was easier than today, because it focused mainly on more commoditized memory chips. 1985 is an interesting example of the capital cycle compounding when geographic expansion overlaps with product overcapacity (as the US had). Marathon actually preferred Analog Devices when it published its thesis in February 2013, highlighting the complex production process of analog (physical) chips vs. digital ones, the complex engineering required to build analog chips, and the low-cost nature of the product. “These factors - a differentiated product and company specific “sticky” intellectual capital - reduce market contestability….Pricing power is further aided by the fact that an analog semiconductor chip typically plays a very important role in a product (for example, the air-bag crash sensor) but represents a very small proportion of the cost of materials. The average selling price for Linear Technology’s products is under $2.” Analog Devices would acquire Linear in 2017 for $14.8B, a nice coda to Marathon’s Analog/Linear dual pitch.

  3. Why do we have cycles? If everyone is playing the same business game and aware that markets come and go, why do we have cycles at all? Wouldn’t efficient markets pull us away from getting too hyped when the market is up and too sour when it is down? No. Chancellor gives a number of reasons why we have a capital cycle: overconfidence, competition neglect, the inside view, extrapolation, skewed incentives, the prisoner’s dilemma, and limits to arbitrage. Overconfidence is straightforward - managers and investors look at companies and believe they are infallible. When times are booming, managers want to participate in the boom, increasing investment to match “demand.” In these decisions, they often don’t consider what their competitors are doing, but rather focus on themselves. Competition neglect takes hold as managers enjoy watching their stock tick up and their faces splashed across “Best CEO in America” lists. The inside view is a bit more nuanced, but Michael Mauboussin and Daniel Kahneman have written extensively on it. As Kahneman laid out in Thinking, Fast and Slow: “A remarkable aspect of your mental life is that you are rarely stumped … The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it.” When you take the inside view, you rely exclusively on your own experience rather than on other, similar situations. Instead, you should take the outside view and assume your problem/opportunity/case is not unique. Extrapolation is an extremely common driver of cycles and could be seen all across the investing world after the recent COVID peak. Peloton, for example, massively over-ordered inventory, extrapolating pandemic-related demand trends.
Skewed incentives can include near-term EPS targets (encouraging buybacks and M&A), market share preservation (encouraging overinvestment), a low cost of capital (buy something with cheap debt), analyst expectations, and champion bias (you’ve committed to something that is no longer attractive, but you do it anyway because you got people excited about it). The prisoner’s dilemma is also a form of market share preservation/expansion: when your competitor is acting much more aggressively, you have to decide whether it’s worth the fight. Limits to arbitrage is almost an extension of career risk: when everyone owns an overvalued market, you may actually hurt your firm by sitting out, even if doing so makes investment sense. That’s why many firms need to maintain a low tracking error against indexes, which naturally results in concentrations in the same stocks.

Business Themes

  1. Capital Cycle. The capital cycle has four stages: (1) new entrants are attracted by the prospect of high returns, and investors are optimistic; (2) rising competition causes returns to fall below the cost of capital, and the share price underperforms; (3) business investment declines, the industry consolidates, firms exit, and investors turn pessimistic; (4) the improving supply side causes returns to rise above the cost of capital, and the share price outperforms. The capital cycle reveals how competitive forces and investment behavior create predictable patterns in industries over time. Picture it as a self-reinforcing loop where success breeds excess, and pain eventually leads to gain. Stage 1: The Siren Song - high returns in an industry attract capital like moths to a flame. Investors, seeing strong profits and growth, eagerly fund expansions and new entrants. Optimism reigns and valuations soar as everyone wants a piece of the apparent opportunity. Stage 2: Reality Bites - as new capacity comes online, competition intensifies. Prices fall as supply outpaces demand. Returns dip below the cost of capital, but capacity keeps coming – many projects started in good times are hard to stop. Share prices begin to reflect the deteriorating reality. Stage 3: The Great Cleansing - pain finally drives action. Capital expenditure is slashed. Weaker players exit or get acquired. The industry consolidates as survivors battle for market share. Investors, now scarred, want nothing to do with the sector. Capacity starts to rationalize. Stage 4: Phoenix Rising - the supply-side healing during the downturn slowly improves industry economics. With fewer competitors and more disciplined capacity, returns rise above the cost of capital. Share prices recover as improved profitability becomes evident. But this very success plants the seeds of the next cycle. The genius of understanding this pattern is that it's perpetual - human nature and institutional incentives ensure it repeats.
The key is recognizing which stage an industry is in, and having the courage to be contrarian when others are either too optimistic or too pessimistic.

  2. 7 signs of a bubble. Nothing gets people going more than Swedish banking in the 2008-09 financial crisis. Marathon called out its Seven Deadly Sins of banking in November 2009, using Handelsbanken as a positive reference and highlighting how it avoided the many pitfalls that laid waste to its peers. 1. Imprudent asset-liability mismatches on the balance sheet. If this sounds familiar, it’s because it’s the exact sin that took down Silicon Valley Bank earlier this year. As Greg Brown lays out: “Like many banks, SVB’s liabilities were largely in the form of demand deposits; as such, these liabilities tend to be short term and far less sensitive to interest rate movement. By contrast, SVB’s assets took the form of more long-term bonds, such as U.S. Treasury securities and mortgage-backed securities. These assets tend to have a much longer maturity – the majority of SVB’s assets matured in 10 years or more – and as a result their prices are much more sensitive to interest rate changes. The mismatch, then, should be obvious: SVB was taking in cash via short-term demand deposits and investing these funds in longer-term financial instruments.” 2. Supporting asset-liability mismatches by clients. Here, Chancellor calls out foreign currency lending, whereby certain European banks would offer Hungarians mortgages denominated in Swiss francs to buy houses in Hungary. Not only were these banks taking on currency risk, they were exposing their customers to it, and many didn’t hedge the risk appropriately. 3. Lending to “Can’t Pay, Won’t Pay” types. The financial crisis was filled with banks lending to subprime borrowers. 4. Reaching for growth in unfamiliar areas. As Marathon calls out, “A number of European banks have lost billions investing in US subprime CDOs, having foolishly relied on “experts” who told them these were riskless AAA rated credits.” 5. Engaging in off-balance sheet lending.
Many European banks maintained “Structured Investment Vehicles,” off-balance sheet funds holding CDOs and MBSs. At one point, it got so bad that Citigroup tried the friendship approach: “The news comes as a group of banks in the U.S. led by Citigroup Inc. are working to set up a $100 billion fund aimed at preventing SIVs from dumping assets in a fire sale that could trigger a wider fallout.” These SIVs held substantial risk but were relatively unknown to many investors. 6. Getting sucked into virtuous/vicious cycle dynamics. As many European banks looked for expansion, they turned to lending into the Baltic states. As more lenders got comfortable, GDP began to grow meaningfully, which attracted more aggressive lending. More banks got suckered into lending in the region to avoid missing out on the growth, not realizing that the growth was almost entirely debt-fueled. 7. Relying on the rearview mirror. Marathon points out how risk models tend to fail when the recent past has been glamorous. “In its 2007 annual report, Merrill Lynch reported a total risk exposure - based on ‘a 95 percent confidence interval and a one day holding period’ - of $157m. A year later, the Thundering Herd stumbled into a $30B loss!”
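That $157m figure is a value-at-risk (VaR) estimate, and the rearview-mirror problem is easy to see once you know how it is computed. Here is a minimal sketch of a one-day 95% historical-simulation VaR (the random P&L history and all numbers are purely illustrative, not Merrill's actual model): the estimate only knows about the historical window it is fed, so a calm window produces a small VaR regardless of what tail risk is lurking.

```python
import random

random.seed(0)

# Hypothetical history of 1,000 daily portfolio P&L observations in $mm,
# drawn from a calm, well-behaved distribution (illustrative only)
daily_pnl = sorted(random.gauss(0, 100) for _ in range(1000))

# One-day 95% VaR via historical simulation: the loss exceeded on only
# 5% of historical days, i.e. the 5th percentile of the P&L distribution
var_95 = -daily_pnl[int(0.05 * len(daily_pnl))]
print(f"1-day 95% VaR: ${var_95:.0f}mm")
```

Because the lookback window in 2007 covered glamorous years, the reported number was tiny relative to the true exposure; the model has no mechanism to anticipate losses it has never seen.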

  3. Investing Countercyclically. Björn Wahlroos exemplified exceptional capital allocation skills as CEO of Sampo, a Finnish financial services group. His most notable moves included perfectly timing the sale of Nokia shares before their collapse, transforming Sampo's property & casualty insurance business into the highly profitable "If" venture, selling the company's Finnish retail banking business to Danske Bank at peak valuations just before the 2008 financial crisis, and then using that capital to build a significant stake in Nordea at deeply discounted prices. He also showed remarkable foresight by reducing equity exposure before the 2008 crisis and deploying capital into distressed commercial credit, generating €1.5 billion in gains. Several other CEOs have demonstrated similar capital allocation prowess. Henry Singleton at Teledyne was legendary for his counter-cyclical approach to capital allocation. He issued shares when valuations were high in the 1960s to fund acquisitions, then spent the 1970s and early 1980s buying back over 90% of Teledyne's shares at much lower prices, generating exceptional returns for shareholders. As we saw in Cable Cowboy, John Malone at TCI (later Liberty Media) was masterful at using financial engineering and tax-efficient structures to build value. He pioneered the use of spin-offs, tracking stocks, and complex deal structures to maximize shareholder returns while minimizing tax impacts. Tom Murphy at Capital Cities demonstrated exceptional discipline in acquiring media assets only when prices were attractive. His most famous move was purchasing ABC in 1985, then selling the combined company to Disney a decade later for a massive profit. 
Warren Buffett at Berkshire Hathaway has shown remarkable skill in capital allocation across multiple decades, particularly in knowing when to hold cash and when to deploy it aggressively during times of market stress, such as his highly profitable crisis-era investments in Goldman Sachs (2008) and Bank of America (2011). Jamie Dimon at JPMorgan Chase has also proven an astute capital allocator, particularly during crises. He guided JPMorgan through the 2008 financial crisis while acquiring Bear Stearns and Washington Mutual at fire-sale prices, significantly strengthening the bank's competitive position. D. Scott Patterson has shown excellent capital allocation skills at FirstService. He began leading FirstService following the spin-off of Colliers in 2015 and has compounded EBITDA in the high teens via strategic property management acquisitions coupled with large platforms like First OnSite and, recently, Roofing Corp of America. Another great capital allocator is Brad Jacobs. He has had a storied career building rollups: United Waste Systems (acquired by USA Waste Services for $2.5B), United Rentals (now a $56B public company), XPO Logistics, which he separated into three public companies (XPO, GXO, RXO), and now QXO, his latest endeavor in the building products space. These leaders share common traits with Wahlroos: patience during bull markets, aggression during downturns, and the discipline to ignore market sentiment in favor of fundamental value. They demonstrate that superior capital allocation, while rare, can create enormous shareholder value over time.

    Dig Deeper

  • Handelsbanken: A Budgetless Banking Pioneer

  • ECB has created 'toxic environment' for banking, says Sampo & UPM chairman Bjorn Wahlroos

  • Edward Chancellor part 1: ‘intelligent contrarians’ should follow the capital cycle

  • Charlie Munger: Investing in Semiconductor Industry 2023

  • Amazon founder and CEO Jeff Bezos delivers graduation speech at Princeton University

tags: Amazon, Jeff Bezos, National Semiconductor, Intel, Moore's Law, Texas Instruments, Micron, Analog Devices, Michael Mauboussin, Daniel Kahneman, Peloton, Handelsbanken, Bjorn Wahlroos, Sampo, Henry Singleton, Teledyne, John Malone, D. Scott Patterson, Jamie Dimon, Tom Murphy, Warren Buffett, Brad Jacobs
categories: Non-Fiction
 

July 2023 - The Myth of Capitalism by Jonathan Tepper with Denise Hearn

We learn about the fun history of many monopolies and antitrust! I can’t recommend this book: it’s long, poorly written, repetitive, and so aggressively one-sided that it loses credibility, even though it does reasonably critique aspects of antitrust enforcement and monopoly formation. The fact that the author used to advise and now runs a hedge fund that owns monopoly businesses tells you all you need to know.

Tech Themes

  1. Consumer Welfare. Tepper’s fundamental argument is that since the 1980s, driven by Reagan’s deregulation push, the government has allowed corporate mergers and abuses of market power, leading to more market concentration, higher prices, greater inequality, worse worker conditions, and stymied innovation. Influenced by the Chicago School’s free market ideas and Robert Bork’s influential 1978 book The Antitrust Paradox, the standard for antitrust enforcement morphed from breaking up market-abusing companies to protecting “consumer welfare.” With this shift, the antitrust question became: “Does this harm the consumer?” A lot of things do not harm consumers. Broadcast Music, Inc. v. CBS, Inc. (1979) is widely regarded as one of the first antitrust cases that shifted the rule of reason towards consumer welfare. CBS had sued Broadcast Music, alleging that blanket licenses constituted price fixing. Broadcast Music represented copyright holders and would grant licenses to media companies to use artists’ music on air. These deals were negotiated on behalf of many artists and did not allow CBS to negotiate for selected works. The court sided with BMI because the blanket license process was simpler, lowered transaction costs by reducing the number of negotiations, and allowed broadcasters greater access to works. The court even admitted that the blanket license might be a form of price setting, but concluded that it didn’t necessarily harm consumers and was more efficient, so it was allowed. The consumer welfare ideology has recently come under fire around the big tech companies - Apple, Microsoft, Google, Meta, and Amazon. Lina Khan, Chair of the Federal Trade Commission (FTC), wrote a powerful and aptly titled article, Amazon’s Antitrust Paradox, highlighting why, in her view, consumer welfare is not a strong enough standard for antitrust.
“This Note argues that the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy.” The note argues that Amazon’s willingness to offer unsustainably low prices, and its dual role as both a marketplace platform and a seller on that marketplace, allow it to crush competition. Google is currently being sued by the Department of Justice over illegal monopolization of adtech and its dominance in the search engine market. The government is attempting to shift antitrust back to a more aggressive approach to monopolistic behavior. From a consumer welfare perspective, there is no doubt that all of these companies have created situations that benefit consumers (“free” services, low prices) and hurt competition. The question is: “Is it illegal?”

  2. The Acts - Sherman and Clayton. The Sherman Antitrust Act, passed in 1890, was the first major federal law aimed at curbing monopolies and promoting competition. The late 19th century, often referred to as the Gilded Age, saw the rise of powerful industrialists like J.P. Morgan, John D. Rockefeller, and Cornelius Vanderbilt, whose massive corporations threatened to dominate key sectors of the economy. Public outcry over the potential for these monopolies to stifle competition and exploit consumers led to the passage of the Sherman Act. Its author, Senator John Sherman, intended the law to protect the public from the negative consequences of concentrated economic power. The Sherman Act broadly prohibited anticompetitive agreements and monopolization, empowering the government to break up monopolies and prevent practices that restrained trade. However, the act's broad language left it open to interpretation, and its early enforcement was inconsistent. President Theodore Roosevelt, a proponent of trust-busting, used the Sherman Act to challenge powerful monopolies, such as the Northern Securities Company, a railroad conglomerate controlled by J.P. Morgan. The Supreme Court's decision in the Standard Oil case in 1911 further shaped the interpretation of the Sherman Act, establishing the "rule of reason" as the standard for evaluating antitrust violations. This meant that not all restraints of trade were illegal, only those deemed "unreasonable" in their impact on competition. The Clayton Antitrust Act, passed in 1914, was designed to strengthen and clarify the Sherman Act. It specifically targeted practices not explicitly covered by the Sherman Act, such as mergers and acquisitions that could lessen competition, price discrimination, and interlocking directorates. The Clayton Act also sought to protect labor unions, which had been subject to antitrust prosecution under the Sherman Act. The passage of these acts led to a wave of significant antitrust cases.
Prominent examples include: United States v. American Tobacco Co. (1911): This case resulted in the breakup of the American Tobacco Company, a dominant force in the tobacco industry, demonstrating the government's commitment to using antitrust laws to dismantle powerful monopolies. United States v. Paramount Pictures, Inc. (1948): This case challenged the vertical integration of the film industry, where major studios controlled production, distribution, and exhibition. The court's decision led to significant changes in the industry's structure. United States v. AT&T Co. (1982): This landmark case resulted in the breakup of AT&T, a telecommunications giant, into smaller, regional companies. This case marked a major victory for antitrust enforcement and had a lasting impact on the telecommunications industry.

  3. Microsoft. The Microsoft antitrust case, filed in May 1998, saw the U.S. government accuse Microsoft of abusing its monopoly power in the personal computer operating systems market. The government, represented by David Boies (yes, Theranos David Boies), argued that Microsoft, led by Bill Gates, had engaged in anti-competitive practices to stifle competition, particularly in the web browser market. Gates was famously deposed and shockingly (not really) came away from the deposition looking like an asshole. The government alleged that Microsoft violated the Sherman Act by bundling its Internet Explorer (IE) web browser with its Windows operating system, thereby hindering competing browsers like Netscape Navigator; manipulating application programming interfaces to favor IE; and enforcing restrictive licensing agreements with original equipment manufacturers, compelling them to include IE with Windows. Judge Thomas Jackson presided over the case at the United States District Court for the District of Columbia. In November 1999, he issued findings of fact that Microsoft held a monopoly and had acted to maintain it, and in 2000 he ruled for the government and ordered Microsoft split into two units, one for operating systems and the other for applications. Microsoft appealed. The Appeals Court overturned the breakup order, partly due to Judge Jackson's inappropriate discussions with the media. While upholding the finding of Microsoft's monopolistic practices, the court deemed traditional antitrust analysis unsuitable for software issues. The case was remanded to Judge Colleen Kollar-Kotelly, and ultimately a settlement was reached in 2001. The settlement mandated that Microsoft share its application programming interfaces with third-party companies and grant a panel access to its systems for compliance monitoring. However, it did not require Microsoft to change its code or bar future software bundling with Windows.
This led to criticism that the settlement was inadequate in curbing Microsoft's anti-competitive behavior. History doesn't repeat itself, but it does rhyme: Microsoft is running the same bundling strategy again with its Teams app.

Business Themes

  1. Monopoly Markets. Tepper lays out all of the markets that he believes are monopoly, duopoly, or oligopoly markets. Cable/high-speed internet (Comcast, Verizon, AT&T, Charter (Spectrum)) - pretty much the same today. Computer operating systems (Microsoft) - pretty much the same, though iOS and Linux are probably bigger now. Social networks (Facebook with 75% share) - since then, TikTok, Twitter, Pinterest, and Snap have all put a small dent in Facebook’s share. Search (Google), milk (Dean Foods), railroads (BNSF, NSC, CSX, Union Pacific, Kansas City Southern), seeds (Bayer/Monsanto, Syngenta/ChemChina, Dow/DuPont), microprocessors (Intel 80%, AMD 20%), and funeral homes (Service Corporation International) all join the monopoly club. The duopoly club consists of payment systems (Visa, Mastercard), beer (AB InBev, Heineken), phone operating systems (iOS, Android), online advertising (Google, Facebook), kidney dialysis (DaVita, Fresenius), and glasses (Luxottica). The oligopoly club is credit reporting bureaus (TransUnion, Experian, Equifax), tax preparation (H&R Block, Intuit), airlines (American, Delta, United, Southwest, Alaska), phone companies (Verizon, Sprint, T-Mobile, AT&T), banks (JPMorgan Chase, Bank of America, Citigroup, Wells Fargo), health insurance (UnitedHealthcare, Centene, Humana, Aetna), medical care (HCA, Encompass, Ascension, Universal Health), group purchasing organizations (Vizient, Premier, HealthTrust, Intalere), pharmacy benefit managers (Express Scripts, CVS Caremark, Optum/UnitedHealthcare), drug wholesalers (Cencora, McKesson, Cardinal Health), agriculture (ADM, Bunge, Cargill, Louis Dreyfus), media (Walt Disney, Time Warner, CBS, Viacom, NBC Universal, News Corp), and title insurance (Fidelity National, First American, Stewart, and Old Republic).
Since the book was published in 2018, there has been even more consolidation - Canadian Pacific bought Kansas City Southern for $31B, Essilor merged with Luxottica in 2018 in a $49B deal, Sprint merged with T-Mobile in a $26B deal, and CBS and Viacom merged in a $30B deal. Tepper’s anger towards lackadaisical enforcement of antitrust is palpable. He encourages greater antitrust speed and transparency, the unwinding of now clear market consolidating mergers, and the breakup of local monopolies.

  2. Conglomeration and De-Conglomeration. The conglomerate boom, primarily occurring in the 1960s, saw a surge in the formation of large corporations encompassing diverse, often unrelated businesses. The era's low interest rates and buoyant stock market created favorable conditions for acquisition sprees. A key driver of this trend was the Celler-Kefauver Act of 1950, which, by restricting companies from acquiring their competitors or suppliers, pushed them towards diversification through acquiring businesses in unrelated fields. The prevailing motive was to achieve rapid growth, even if it meant prioritizing revenue growth over profit growth. Conglomerates were seen as a means to mitigate risk through diversification and achieve operational economies of scale. Many conglomerates formed that operated across completely different industries: Gulf and Western (Paramount Pictures, Simon & Schuster, Sega, Madison Square Garden), ITT (telephone companies, Avis, Wonder Bread, Hartford Insurance, and Sheraton), and Henry Singleton's Teledyne. However, the conglomerate era ultimately waned. The government took a more proactive approach to acquisitions in the late 1960s, curbing the most aggressive deal-making: the FTC sued Procter & Gamble over its acquisition of Clorox, and the merger guidelines were revised in 1968, setting out more rules against market share concentration. Rising interest rates in the 1970s strained these sprawling enterprises, forcing them to divest many of their acquisitions. The belief in the inherent efficiency of conglomerates was challenged as businesses increasingly favored specialization over sprawling, unwieldy structures. The concept of synergy, once touted as a key advantage of conglomerates, came under scrutiny.
Ultimately, the conglomerate era was marked by performance dilution, value erosion, and the realization that strong performance in one business did not guarantee success in unrelated sectors.

  3. Industry Concentration. A central pillar of Tepper's argument that the capitalism game isn't being played fairly is that rising industry concentration is worrisome and indicative of a broken market system. He uses the Herfindahl-Hirschman Index (HHI) to discuss levels of industry concentration. According to the Antitrust Division at the DOJ: “The HHI is calculated by squaring the market share of each firm competing in the market and then summing the resulting numbers. For example, for a market consisting of four firms with shares of 30, 30, 20, and 20 percent, the HHI is 2,600 (30² + 30² + 20² + 20² = 2,600). The agencies generally consider markets in which the HHI is between 1,000 and 1,800 points to be moderately concentrated, and consider markets in which the HHI is in excess of 1,800 points to be highly concentrated.” The HHI is straightforward to calculate and can be a quick test of whether a potential merger creates a significantly more concentrated market. However, it still falls prey to some issues. For example, market definitions are extremely important in antitrust cases, and a poorly or narrowly defined market can make the HHI look overly concentrated. In the ongoing Kroger-Albertsons merger case, the FTC is proposing a somewhat narrow definition of supermarkets, one that excludes large players like Walmart, Costco, Aldi, and Whole Foods. If Whole Foods isn't a supermarket, I'm not sure what is. And sure, maybe the FTC narrowly defines the market because Kroger and Albertsons serve a particular niche where substitutes are not easily available: Whole Foods may be more expensive, Aldi may have limited assortment, and Costco portion sizes may be too big. However, if you have a market with Kroger, Walmart, Costco, Aldi, and Whole Foods serving a reasonably sized population, I can almost guarantee prices will remain competitive.
In some cases, high industry concentration does not mean monopolistic behavior. However, it can lead to monopolistic or monopsonistic behavior, including higher prices, lower wages for workers, lower growth, and greater inequality.
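The DOJ's worked example above maps directly onto a few lines of code. A minimal sketch (the function name and percentage-share input format are my own choices):

```python
def hhi(market_shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(share ** 2 for share in market_shares_pct)

# The DOJ's example: four firms with 30/30/20/20 percent shares
print(hhi([30, 30, 20, 20]))  # → 2600, i.e. "highly concentrated" (over 1,800)
```

Note how sensitive the result is to market definition: add a fifth large competitor (say, five firms at 20% each) and the same formula gives 2,000, while ten firms at 10% each gives 1,000, dropping the market to merely "moderately concentrated."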

    Dig Deeper

  • Microsoft Volume II: The Complete History and Strategy of the Ballmer Years

  • Lecture Antitrust 1 Rise of Standard Oil | Walter Isaacson

  • Anti-Monopoly Timeline

  • How Xerox Lost Half its Market Share

  • (Anti)Trust Issues: Harvard Law Bulletin

tags: Ronald Reagan, Robert Bork, Broadcast Music, CBS, Apple, Microsoft, Google, Meta, Amazon, Lina Khan, Sherman Act, Clayton Act, JP Morgan Chase, John D. Rockefeller, Vanderbilt, Theodore Roosevelt, Standard Oil, American Tobacco, Paramount, AT&T, Bill Gates, David Boies, Netscape, Gulf & Western, ITT, Henry Singleton, Teledyne, Procter & Gamble, Clorox, Herfindahl-Hirschman Index, Kroger, Albertsons, Costco, Whole Foods, Aldi
categories: Non-Fiction
 

December 2022 - We Are Legion (We Are Bob) by Dennis E. Taylor

This month we take a view into the future to see what a society full of AI, 5G, and easy space travel might look like.

Tech Themes

  1. Artificial General Intelligence. One of the most significant technological themes in the book is the development of AGI. Exhibiting artificial general intelligence would mean a computer could perform any task that a human can. While this is the ultimate vision of the AI hype train, a big gap remains between even current iterations of GPT-4 and AGI. While Bob is able to seamlessly create VR experiences, recognize missiles in flight, and upgrade himself, the world of computing today lacks the technology to fit all of these things into a sentient program. A 2019 article hypothesized that we’d have full AGI by 2060; other predictions suggest it’s 200 years away. It is still early days in the world of AGI, and a lot more innovation is needed before we get there.

  2. Programs Programming Programs. In the book, Taylor explores the concept of self-programmability when Bob discovers he can rewrite portions of his own code. Bob begins to set up virtual reality simulations for himself, complete with a cat, virtual baseball, and a butler. These VR “home” simulations offer a sense of normalcy that Bob dearly misses after reawakening as an AI. Later, Bob realizes that he is able to replicate his code. Code replication is similar to a family of AI techniques known as genetic algorithms. In genetic computing, a program models the reproduction of a population based on a fitness measure and a mutation rate. When Bob replicates himself, he notices that each new Bob has a slightly different personality, all stemming from his original personality. These personality changes make some replicants better suited for exploration vs. war vs. maintenance, which could be seen as their individual fitness functions. Genetic algorithms can be used to solve a whole host of machine learning and optimization problems.
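The select-mutate-repeat loop described above fits in a few lines of Python. This is a toy sketch, not anything from the book; the `evolve` function and the "drift a number toward 42" task are invented for illustration:

```python
import random

def evolve(population, fitness, mutate, generations=50, keep=0.5):
    """Minimal genetic loop: score everyone, keep the fittest fraction,
    refill the population with mutated copies of survivors."""
    for _ in range(generations):
        population = sorted(population, key=fitness, reverse=True)
        survivors = population[: max(1, int(len(population) * keep))]
        children = [mutate(random.choice(survivors))
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

# Toy problem: evolve a random number toward 42.
random.seed(0)  # fixed seed so the run is reproducible
best = evolve(
    population=[random.uniform(0, 100) for _ in range(20)],
    fitness=lambda x: -abs(x - 42),       # closer to 42 = fitter
    mutate=lambda x: x + random.gauss(0, 1),  # small random drift
)
print(round(best))  # 42
```

Each "Bob" in the book is analogous to one mutated child here: mostly a copy, slightly drifted, and selected for by the jobs he ends up suited for.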

  3. Technology and Emotion. Before he was killed in a car crash, Bob had sold his successful software company, netting him millions. With the extra money, he paid a cryogenic service to preserve his mind in the event something bad happened to him. After his death, Bob is awoken as an artificial intelligence. Similar to Ender’s Game, he finds himself being trained for an unknown objective, though he quickly understands it’s military-related. Over time he becomes aware that other AIs are going crazy and discovers that when left alone to process their fate as war-faring AIs, many become immensely depressed. Bob recognizes the immensity of time as a computer with a clock that can work at the nanosecond level; time feels extended beyond comprehension. This theme raises important ethical questions about the implications of creating self-aware machines, notably the mental health consequences of inventing minds that experience the world differently than humans do. After a while, Bob discovers an endocrine switch that overrides emotion. He’s curious about its function and switches it on, and immediately becomes overwhelmed with emotions: “You know that sinking feeling you get when you suddenly realize you’ve forgotten something important. Like a combination of fast elevator and urge to hurl. It hit me without any warning or buildup. Maybe it was the sudden release, maybe it was an accumulation of all the suppressed emotions, whatever, I wasn’t ready for the intensity. My thoughts swirled with all the things that had been bugging me since I woke up…I mourned my lost life. I was still human in the ways that mattered.” Emotion and technology are often thought of as opposite ends of the spectrum, but they are more intertwined than people imagine.
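To make the "immensity of time" concrete, a bit of back-of-the-envelope arithmetic (my illustration, not the book's):

```python
# If a mind experiences one "moment" per clock tick, a nanosecond-resolution
# clock packs a billion subjective moments into each wall-clock second.
ticks_per_second = 1_000_000_000      # nanosecond ticks
seconds_per_day = 24 * 60 * 60        # 86,400
subjective_moments_per_day = ticks_per_second * seconds_per_day
print(subjective_moments_per_day)     # 86400000000000
```

Roughly 86 trillion subjective moments per day is one way to see why an idle, isolated AI in the book might go mad waiting for anything to happen.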

Business Themes

  1. Government and AI Future. Another business theme explored in the book is the power and influence of corporations. In the story, Bob's actions and the emergence of AI have a significant impact on the economy, politics, and society. This theme raises questions about the ethics of corporate power and the need for regulation to ensure that technology is used in ways that benefit society as a whole. For example, Bob is controlled by a religious government entity called FAITH, the Free American Independent Theocratic Hegemony, which is led by Christian Fundamentalists. While Taylor’s vision of a future in which Christian Fundamentalists control the government is a commentary on an increasingly co-mingled church and state in the US, it also raises the question of control over AI. In China, the government has a front-row seat and access to all potential AI innovations. In the US, many of these innovations are controlled by corporations, which will obviously work with the government but do not necessarily need to sell to it. At the same time, it is difficult to envision how the American government could repossess or control the AI developments of the underlying corporations. There is still a lot to be figured out between industry and governments when it comes to AI innovation.

  2. Space Business. The book also explores the intersection of technology and business, specifically in the context of space exploration and colonization. Amazon's Kuiper and SpaceX's Starlink are two examples of companies driving innovation in this field. These satellite constellations have the potential to revolutionize industries such as agriculture, mining, and energy by enabling real-time data analytics and remote control of machinery. The book touches on this theme with Bob's use of satellite constellations for communication and coordination in his efforts to explore and colonize new worlds. It also explores the potential consequences of corporate control over space resources, highlighting the importance of ethical guidelines to ensure the equitable distribution of resources. Bob, who is a FAITH probe, fights China, the Australian Federation, and the Brazilian Empire over control of vast new space worlds. In the real world, people are beginning to question the value of these new constellation space businesses. A recent Bernstein research note observed: “Project Kuiper appears even more extreme as an investment area with $10B+ already committed. Perhaps there’s a lesson here from Google shutting Loon and stagnant Fiber and Fi businesses, that capital intensive low-margin utilities aren’t worth the effort regardless of how ‘cool’ the technology may be.” The durability of a real, sustainable business model has always been a question for space-focused businesses. As we learned from Carlota Perez’s Technological Revolutions and Financial Capital, the early establishers of infrastructure can either reap windfalls (railroads, steel) or face severe competition (telecom) that drives returns negative. I am skeptical that Kuiper or Starlink address a large enough market to build businesses that cover the cost of the capital expenditure involved in launching and maintaining the satellites. 
That being said, I think both organizations will probably learn a lot about space in the process, so should it ever become economically feasible, they would be ready to pounce (if they still exist).

  3. 3D Printing and The Food Question. Bob uses 3D printing technology to replicate himself into new versions with longer and larger appendages. “The area was a beehive of activity. Five version two HEAVEN vessels were under construction. One of which was a trade up from me. The new designs included a bigger reactor and drive, a rail gun, storage and launch facilities for busters, replicant systems with twice the capacity of version one, more room for storing roamers and mining drones, and more cargo capacity in general. The manufacturing systems cranked out parts as fast as the roamers could feed in the raw ore.” Bob creates many, many roamers, which he uses in all sorts of ways: as drones, analyzers, and crafters. This plurality of use cases has always been the pitch for 3D printing; however, the businesses involved, such as Desktop Metal or 3D Systems, have struggled to hit mass consumer adoption. Today, it is still too hard for the average non-AGI person to build things with a 3D printer, and most jobs are left to seasoned professionals. As the newly created Bob replicants peruse the universe for new worlds, original Bob stays behind to help determine the fate of the people on Earth. One of the big challenges facing Bob is finding enough food for the world's population while it is in transit to a new world. This situation is reminiscent of Wall-E, where the entire population of Earth leaves after a nuclear attack. Food insecurity, or lacking access to quality food, is a global question, with estimates of over 345 million people facing high levels of food insecurity in 2023. In the US, about 10% of the population, or 13.8 million households, had low or very low food security. The question is complicated by the cost of sustainable farming, the role that farming and food play in greenhouse gas emissions, and how to use land with a growing population. 
Bob ultimately decides to build a farm on a spaceship, which is reminiscent of the vertical farming craze that swept through ag-tech around 2016-17. Three vertical farming businesses (Aerofarms, Kalera, and NL) have gone bankrupt this week, after failing to find a financially sustainable business model. It's still early days in the world of alternative foods and new farming techniques, but we need to figure them out before the world population hits 10B in 2050.

    Dig Deeper

  • OpenAI CEO: When will AGI arrive? | Sam Altman and Lex Fridman

  • Starlink 2 months later ... in a 2min review ✌️

  • We are Legion (We are Bob) | Dennis E. Taylor | Talks at Google

  • What Is 3D Printing and How Does It Work? | Mashable Explains

  • What is Sustainable Agriculture? Episode 1: A Whole-Farm Approach to Sustainability

tags: Bob, AGI, Cryogenics, Genetic Computing, Space, SpaceX, VR, Government, AI, Amazon, Kuiper, Starlink, Google, Farming, Aerofarms, Vertical Farming, 3D Printing, Desktop Metal, 3D Systems, Food Insecurity
categories: Fiction
 

April 2022 - Ask Your Developer by Jeff Lawson

This month we check out Jeff Lawson’s new book about APIs. Jeff co-founded Stubhub as its first CTO, was an early hire at AWS, and started Twilio in 2008. He has a very interesting perspective on the software ecosystem as it stands today and where it is headed!

Tech Themes

  1. Start with the Problem, Not the Solution. Lawson repeats a mantra throughout the book related to developers: "Start with the problem, not the solution." This is something that Jeff learned as an early hire at AWS in 2004. Before AWS, Lawson had founded and sold a note-taking service to an internet flame out, co-founded Stubhub as its first CTO, and worked at an extreme sports retailer. His experience across four startups has guided him to a maniacal focus on the customer, and he wants that focus to extend to developers. If you tell developers the exact specification for something and give no context, they will fail to deliver great code. Beginning with the problem and the customer's specific description allows developers to use their creativity to solve the issue at hand. The key is to tell developers the business problem and how the issue works, let them talk to the customer, and help them understand it. That way, developers can use their imaginative, creative problem-solving abilities.

  2. Experiment to Innovate. Experimentation is at the root of invention, which drives business performance over the long term. Jeff calls on the story of the Wright Brothers to illustrate this point. The Wright Brothers were not the first to try to build a flying vehicle. When they achieved flight, they beat out a much better-funded competitor by simply doing something the other team wouldn't do – crash. The Wright brothers would make incremental changes to their flying machine, see what worked, fly it, crash it, and update the design again. Their competitor, Samuel Pierpont Langley, spent heavily on his "aerodrome" machine (~$2m in today's dollars) and tried to build the exact specs of a flying machine, but didn't run these quick (and somewhat calamitous) experiments. This process of continual experimentation and innovation is the hallmark of a great product organization. Lawson loves the lean startup and its idea of innovation accounting. In innovation accounting, teams document exact experiments; set expectations, hypotheses, and target goals; and then detail what happens in the experiment. Think of this as a lab notebook for product experimentation. These experiments must have a business focus rather than just a technical ramification. Jeff always asks, "What will this help our customers do?" when evaluating experimentation and innovation. His shorthand for agile: features, deadlines, quality, certainty – choose three.

  3. Big Ideas Start Small. In 1986, the famous computer scientist Fred Brooks published a paper called No Silver Bullet; in his earlier book The Mythical Man-Month, he argued that adding more developers and spending more money seldom gets a project to completion faster – normally, it does the opposite. Why is this? New people on the team need time to ramp up and get familiar with the codebase, so they are less productive at the start. Additionally, developers already on the project spend a lot of time explaining the codebase to those joining late. Lawson uses the example of GE Digital to show the issues of overinvesting at the start. Jeff Immelt became CEO of GE in 2001, and in 2014 proclaimed that GE would launch a new software/IoT division that would be a meaningful part of its future business. GE invested tons of money into the venture and put experienced leaders on the project; however, it generated minimal profit years later. Despite acquisitions like ServiceMax (later divested), the company spent hundreds of millions with hardly any return. Lawson believes the correct approach would be to invest in 100 small product teams with $1m each, and then, as those ideas grow, add more capital. This idea of planting seeds, seeing which ones flower, and then investing more is the right way to do it, if you can. Start small and slowly gather steam until it makes sense to step on the gas.

Business Themes

  1. Software Infrastructure is Cheap. Software infrastructure has improved dramatically over the last fifteen years. In 2007, if you wanted to start a business, you had to buy servers, configure them, and manage your databases, networking equipment, security, compliance, and privacy. Today that is all handled by the cloud hyperscalers. Furthermore, as the cloud grew, new infrastructure providers sprouted that could offer even better, specialized performance. On top of core cloud services like storage and compute, companies like Datadog, Snowflake, Redis, and Github all make it easy to spin up infrastructure for your software business. On top of that, creative tools are just as good. Lawson calls to mind the story of Lil Nas X, the now-famous rapper, who bought a beat online for $30, remixed it, and launched it. That beat became "Old Town Road," which went 15x platinum and is now rated 490th on the list of best songs of all time. The startup costs for a new musician, software company, or consumer brand are very low because the infrastructure is so good.

  2. Organization Setup. Amazon has heavily influenced Lawson and Twilio, including Bezos's idea of two-pizza teams. The origin story of two-pizza teams comes from a time at Amazon when teams were getting bigger and bigger, and people were becoming more removed from the customer. Slowly, many people throughout the company had almost no insight into the customer and their issues. Bezos introduced cutting the organization into two-pizza teams, i.e., teams small enough that two pizzas could reasonably feed them. Lawson has adopted this in spades, with Twilio housing over 150 two-pizza teams. Every team has a core customer, whether internal or external. If you are on the platform infrastructure team, your customer may be internal developers who leverage the infrastructure team's development pipelines. If you are on the Voice team, your customer may be actual end customers building applications with Twilio's API-based voice solution. When these teams get large (beyond two pizzas), there is a somewhat natural process of mitosis, where the team splits into two. To do this, the teams detangle their respective codebases and modularize their service so other teams within the company can access it. They then set up collaboration contracts with their closely related teams; internally, everyone monitors how much teams use each other's microservices across the company. This monitoring allows the company to see where it may need to deploy more resources or create a new division.

  3. Hospitality. Many companies claim to be customer-focused, but few are. Amazon always leaves an empty chair in conference rooms to symbolize the customer in every meeting. Jeff Lawson and Twilio extended this idea – he asked customers for their shoes (per the old adage, "walk a mile in someone's shoes") and then hung them throughout Twilio's office. Jeff is intensely focused on the customer and likens his approach to the one famous restaurateur Danny Meyer takes to his restaurants. Danny focuses on the idea of hospitality. In Danny's mind, hospitality goes beyond just focusing on the customer; it makes the customer feel like the business is on their side. While it may be hard to articulate, everyone knows the feeling when someone goes out of their way to ensure that you have a positive experience. Meyer extends this to an idea about a gatekeeper vs. an agent. A gatekeeper makes it feel like they sit between you and the product; they remove you from what's happening and make you feel like you are being pushed to do things. In contrast, an agent is a proactive member of an organization who tries to build a team-like atmosphere between the company and the individual customer. Beyond the customer focus, Jeff extends this to developers – developers want autonomy, mastery, and purpose. They want a mission that resonates with them, the freedom to choose how they approach development, and the ability to learn from the best around them. The idea of hospitality extends to all stakeholders of a business but, most importantly, employees and customers.

Dig Deeper

  • Twilio's Jeff Lawson on Building Software with Superpowers

  • The Golden Rule of Hospitality | Tony Robbins Interviews Danny Meyer

  • #SIGNALConf 2021 Keynote

  • How the Wright Brothers Did the 'Impossible'

  • Webinar: How to Focus on the Problem, Not the Solution by Spotify PM, Cindy Chen

tags: Jeff Lawson, Twilio, AWS, Amazon, Jeff Bezos, Stubhub, Wright Brothers, Samuel Pierpont Langley, Innovation Accounting, No Silver Bullet, Fred Brooks, GE, Jeff Immelt, ServiceMax, Lil Nas X, Two Pizza Teams, APIs, Danny Meyer
categories: Non-Fiction
 

March 2022 - Invent and Wander by Jeff Bezos

This month we go back to tech giant Amazon and review all of Jeff Bezos’s letters to shareholders. This book describes Amazon’s journey from e-commerce to cloud to everything in a quick and fascinating read!

Tech Themes

  1. The Customer Focus. These shareholder letters clearly show that Amazon fell in love with its customer and then sought to hammer out traditional operational challenges like cycle times, fulfillment times, and distribution capacity. In the 2008 letter, Bezos calls out: "We have strong conviction that customers value low prices, vast selection, and fast, convenient delivery and that these needs will remain stable over time. It is difficult for us to imagine that ten years from now, customers will want higher prices, less selection, or slower delivery." When a business is so clearly focused on delivering the best customer experience, with completely obvious drivers, it's no wonder it succeeded. The entirety of the 2003 letter, entitled "What's good for customers is good for shareholders," is devoted to this idea. The customer is "divinely discontented" and will be very loyal until there is a slightly better service. If you continue to offer lower prices, more selection, and faster delivery, customers will continue to be happy. Those tenets are not static - you can continually lower prices, add more items, and build more fulfillment centers (while getting faster) to keep customers happy. This learning curve continues in your favor: higher volumes mean cheaper purchasing, lower prices mean more customers, more items mean more new customers, and higher volumes and broader selection force the service operations to adjust to ship more. The flywheel continues, all for the customer!

  2. Power of Invention. Throughout the shareholder letters, Bezos refers to the power of invention. From the 2018 letter: "We wanted to create a culture of builders - people who are curious, explorers. They like to invent. Even when they're experts, they are "fresh" with a beginner's mind. They see the way we do things as just the way we do things now. A builder's mentality helps us approach big, hard-to-solve opportunities with a humble conviction that success can come through iteration: invent, launch, reinvent, relaunch, start over, rinse, repeat, again and again." Bezos sees invention as the ruthless process of trying and failing repeatedly. The importance of invention was also highlighted in our January book 7 Powers, with Hamilton Helmer calling the idea critical to building more and future S curves. Invention is preceded by wandering and taking big bets - the hunch and the boldness. Bezos understands that the stakes for invention have to grow, too: "As a company grows, everything needs to scale, including the size of your failed experiments. If the size of your failures isn't growing, you're not going to be inventing at a size that can actually move the needle." Once you make these decisions, you have to be ready to watch the business scale, which sounds easy but requires constant attention to customer demand and value. Amazon's penchant for bold bets may inform Andy Jassy's recent decision to spend $10B making a competitor to Elon Musk/SpaceX's Starlink internet service. This decision is a big, bold bet on the future - we'll see if he is right in time.

  3. Long-Term Focus. Bezos always preached trading off short-term gain for the long-term relationship. This mindset shows up everywhere at Amazon - selling an item below cost to drive more volume and give consumers better prices, allowing negative reviews on the site even when it means Amazon may sell fewer products, and providing Prime with ever-faster and free delivery shipments. The list goes on and on - all aspects focused on building a long-term moat and relationship with the customer. However, it's important to note that not every decision pans out, and it's critical to recognize when things are going sideways; sometimes, you get an unmistakable punch in the mouth to figure that out. Bezos's 2000 shareholder letter started with, "Ouch. It's been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80 percent from when I wrote you last year." It then went on to highlight something that I didn't see in any other shareholder letter, a mistake: "In retrospect, we significantly underestimated how much time would be available to enter these categories and underestimated how difficult it would be for single-category e-commerce companies to achieve the scale necessary to succeed…With a long enough financing runway, pets.com and living.com may have been able to acquire enough customers to achieve the needed scale. But when the capital markets closed the door on financing internet companies, these companies simply had no choice but to close their doors. As painful as that was, the alternative - investing more of our own capital in these companies to keep them afloat - would have been an even bigger mistake." During the mid-to-late '90s, Amazon was on an M&A and investment tear, and it wasn't until the bubble crashed that they looked back and realized their mistake. 
Still, optimizing for the long term means admitting those mistakes and changing Amazon's behavior to improve the business. When thinking long-term, the company continued to operate amazingly well.

Business Themes

  1. Free Cash Flow per Share. Despite the historical rhetoric that Bezos forwent profits in favor of growth, his annual shareholder letters continually reinforce the value of upfront cash flows to Amazon's business model. If Amazon could receive cash upfront and manage its working capital cycle (days in inventory + days AR - days AP), it could scale its operations without requiring tons of cash. He valued the free-cash-flow-per-share metric so intensely that he spent an entire shareholder letter (2004) walking through an example of how earnings can differ from cash flow in businesses that invest in infrastructure. This maniacal focus on a financial metric is an excellent reminder that Bezos was a hedge fund portfolio manager before starting Amazon. These multiple personas - the hedge fund manager, the operator, the inventor, the engineer - all make Bezos a different type of character and CEO. He clearly understood financials and modeling, something notoriously absent from many public technology CEOs today.
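The working capital cycle in that parenthetical is simple arithmetic. A quick Python sketch with hypothetical numbers (illustrative only, not Amazon's actual figures):

```python
def working_capital_cycle(days_inventory, days_receivable, days_payable):
    """Cash conversion cycle in days: DIO + DSO - DPO.
    A negative result means the company collects cash from customers
    before it has to pay its suppliers."""
    return days_inventory + days_receivable - days_payable

# Hypothetical retailer-like figures: inventory turns in 30 days, customers
# pay in 10 days (mostly credit cards), suppliers are paid in 60 days.
cycle = working_capital_cycle(days_inventory=30, days_receivable=10, days_payable=60)
print(cycle)  # -20
```

A cycle of -20 means suppliers effectively finance operations for about 20 days, which is why a business like this can grow without consuming proportional cash.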

  2. A 1,000-Run Home Run. Odds and sports have always captivated Warren Buffett, who frequently used Ted Williams's approach to hitting as a metaphor for investing. Bezos elaborates on this idea in his 2014 letter (3 Big Ideas): "We all know that if you swing for the fences, you're going to strike out a lot, but you're also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score one thousand runs. This long-tailed distribution of returns is why it's important to be bold. Big winners pay for so many experiments." AWS is certainly a case of a 1,000-run home run. Amazon incubated the business and first wrote about it in 2006, when it had 240,000 registered developers. By 2015, AWS had 1,000,000 customers, and it is now at a $74B+ run-rate. This idea also calls to mind Mohnish Pabrai's Spawners idea - that great companies can spawn entirely new massive drivers for their business: Google with Waymo, Amazon with AWS, Apple with the iPhone. These new businesses require a lot of care and experimentation to get right, but they are 1,000-run home runs, and taking bold bets is important to realizing them.
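Bezos's truncated-vs-long-tailed distinction is easy to quantify with a toy example (the numbers are invented for illustration, not from the letter):

```python
def expected_payoff(outcomes):
    """Average payoff across equally likely swings."""
    return sum(outcomes) / len(outcomes)

# Ten "swings": nine strikeouts, one big winner.
capped = [0] * 9 + [4]        # baseball: the best possible swing scores 4 runs
long_tail = [0] * 9 + [1000]  # business: one bet can return 1,000 "runs"

print(expected_payoff(capped))     # 0.4
print(expected_payoff(long_tail))  # 100.0
```

With the same 90% failure rate, the uncapped distribution's expected value is 250x higher, which is the whole argument for bold bets: a single winner pays for every failed experiment many times over.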

  3. High Standards. How does Amazon achieve all that it does? While its culture has been called into question a few times, it's clear that Amazon has high expectations for its employees. The 2017 letter addresses this idea, diving into whether high standards are intrinsic/teachable and universal/domain-specific. Bezos believes that standards are teachable and driven by the environment while high standards tend to be domain-specific - high standards in one area do not mean you have high standards in another. This discussion of standards also calls back to Amazon's 2012 letter entitled "Internally Driven," where Bezos argues that he wants proactive employees. To identify and build a high standards culture, you need to recognize what high standards look like; then, you must have realistic expectations for how hard it should be or how long it will take. He illustrates this with a simple vignette on perfect handstands: "She decided to start her journey by taking a handstand workshop at her yoga studio. She then practiced for a while but wasn't getting the results she wanted. So, she hired a handstand coach. Yes, I know what you're thinking, but evidently this is an actual thing that exists. In the very first lesson, the coach gave her some wonderful advice. 'Most people,' he said, 'think that if they work hard, they should be able to master a handstand in about two weeks. The reality is that it takes about six months of daily practice. If you think you should be able to do it in two weeks, you're just going to end up quitting.' Unrealistic beliefs on scope – often hidden and undiscussed – kill high standards." Companies can develop high standards with clear scope and corresponding challenge recognition.

Dig Deeper

  • Jeff Bezo’s Regret Minimization Framework

  • Andy Jassy on Figuring Out What's Next for Amazon

  • Amazon’s Annual Reports and Shareholder Letters

  • Elements of Amazon’s Day 1 Culture

  • AWS re:Invent 2021 Keynote

tags: Jeff Bezos, Amazon, AWS, Invention, 7 Powers, Elon Musk, SpaceX, Andy Jassy, Hamilton Helmer, Prime, Working Capital, Warren Buffett, Ted Williams, Mohnish Pabrai, Spawners, High Standards
categories: Non-Fiction
 

January 2022 - Seven Powers by Hamilton Helmer

This month we dove into a classic technology strategy book. The book covers seven major Powers a company can have that offer both a benefit and a barrier to competition. Helmer covers the majority of the book through the lens of different case studies including his favorite company, Netflix.

Tech Themes

  1. Power. After years as a consultant at BCG and decades investing in the public markets, Helmer distilled all successful business strategies into seven individual Powers. A Power offers a company a reinforcing benefit while also providing a barrier to potential competition. This is the epitome of an enduring business model in Helmer's mind. Power describes a company's strength relative to a specific competitor, and Powers apply to a single business unit rather than the business as a whole. This makes sense: Apple may have a Scale Economies Power from its iPhone install base relative to Samsung, but it may not have Power in its AppleTV originals segment relative to Netflix. The seven types of Powers are: Scale Economies, Network Economies, Counter-Positioning, Switching Costs, Branding, Cornered Resources, and Process Power.

  2. Invention. While Powers are somewhat easy to spot (the scale economies of Google's search algorithm), creating them is anything but easy. So what underlies every one of the seven Powers? Invention. Helmer frames invention through the lens of industry dynamics: external competitive conditions and the forward march of technology create opportunities to pursue new business models, processes, brands, and products. Companies must leverage their resources to craft Powers through trial and error, rather than through an upfront conscious decision to pursue something by design. I view this almost as an extension of Clayton Christensen's Resources-Processes-Values (RPV) framework we discussed in July 2020. Companies can find a route to Power through these resources and the crafting process. For Netflix, the route was streaming, but the actual Power came from a strong push into exclusive and original content. The streaming business opened up Netflix's subscriber base, and the content decision provided the ability to amortize great content across that growing subscriber base.

  3. Power Progressions. Powers become available at different points in a business's progression. This makes sense - what drives a company forward in an unpenetrated market is different from what keeps it going at steady state - Snowflake's competitive dynamics are different from Nestle's. Helmer defines three stages of a company: Origination, Takeoff, and Stability. These stages mirror the dynamics of S-Curves, which we discussed in our July 2021 book. During the Origination stage, companies can benefit from Cornered Resources and Counter-Positioning. Helmer uses the Pixar management team as an example of a Cornered Resource during the Origination phase of 3D animated movies. The company had Steve Jobs (product visionary), John Lasseter (storytelling creative), and Ed Catmull (operations and technology leader). During the early days of the industry, these were the only people who knew how to operate a digital film studio. Another Cornered Resource example might be a company finding a new oil well. Before the company starts drilling, it is the only one that can own that asset. An example of Origination Counter-Positioning might be TSMC when it first launched. At that time, it was standard industry perception that semiconductor companies had to be integrated design manufacturers (IDMs) - they had to do everything in-house. TSMC launched as solely a fabrication facility that companies could use to gain extra manufacturing capacity or try out new designs. This gave them great Counter-Positioning relative to the IDMs, and they were dismissed as a non-threat. The Takeoff period offers Network Economies, Scale Economies, and Switching Cost Powers. This phase is the growth phase of businesses. Snowflake currently benefits from Switching Cost dynamics - once you use Snowflake, it's unlikely you'll want to use other data warehouse providers because switching involves data replication and additional costs. 
Scale economies can be seen in businesses that amortize high costs over their user base, like Amazon. Amazon invests in distribution centers at significant scale, which improves the customer experience, which helps it attract more customers - the flywheel repeats, allowing Amazon to continually invest in more distribution centers, further building its scale. Network economies show up in social media businesses like Bytedance/TikTok. Users make content that attracts more users; incremental users join because there is so much content to "gain" by joining the platform. As with scale economies, it's almost impossible to build a competitor, because a new entrant would have to recruit users away from the incumbent platform, which would cost enormous sums. The Stability phase offers Branding and Process Power. Branding is hard to generate, but the advantage grows with time. Consider luxury goods providers like LVMH: the older and more exclusive the brand, the more it is desired, and the brand only gets older every day. A business creates Process Power by refining and improving operations to such a high degree that they become difficult to replicate. Classic examples of Process Power are TSMC's cutting-edge 3-5nm processes today and Toyota's Production System. Toyota has even allowed competitors to tour its factories, yet no competitor has replicated its operational efficiency.

Business Themes

7Power_Chart_Overview.png
  1. Sneak Attack. I've always been surprised by businesses that seemingly "come out of nowhere." In Helmer's eyes, this stems from Counter-Positioning. He tells the story of Vanguard, which Jack Bogle started in 1976. "You could charitably describe the reception as enthusiastic: only $11M trickled in from investors. Soon after the launch, [Nobel Laureate Paul] Samuelson himself lauded the effort in his column for Newsweek, but with little result: the fund had only reached $17M by mid-1977. Vanguard's operating model depended on others for distribution, and brokers, in particular, were put off by a product that was predicated on the notion that they provided no value in helping their clients choose which active funds to select." But Vanguard had something that active managers didn't: low fees and consistency. Vanguard's funds performed like the indices and cost much less than active funds. No longer were individuals underperforming the market while paying advisors to pick actively managed funds. Furthermore, Vanguard continually reinvested all profits back into its funds, so it looked like it wasn't making money even as it grew its assets under management. It's hard to spot these sneak attacks while they are happening. But one that might be happening right now is Cloudflare relative to AWS. Cloudflare launched its low-cost R2 service (a play on Amazon's famous S3 storage technology). Cloudflare is offering a comparable product at a much lower cost and is leveraging the large installed base of its CDN product to get people in the door. It's unclear whether this will create Power over AWS, because it's hard to see what the barrier might be beyond some form of switching costs. However, AWS will likely be reluctant to cut prices because of its scale and public company growth targets.
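Vanguard's fee edge compounds dramatically over time. A quick sketch of why low fees beat active management for most investors - all numbers here are hypothetical (a 7% gross market return, a 0.2% index fee vs. a 1.5% active fee) and assume the active manager merely matches the market:

```python
def terminal_value(principal, gross_return, annual_fee, years):
    """Compound an investment at the gross return net of the annual fee."""
    net_return = gross_return - annual_fee
    return principal * (1 + net_return) ** years

# Hypothetical inputs: $10,000 invested for 30 years at a 7% gross return.
index_fund = terminal_value(10_000, 0.07, 0.002, 30)   # 0.2% index fee
active_fund = terminal_value(10_000, 0.07, 0.015, 30)  # 1.5% active fee

print(f"Index fund:  ${index_fund:,.0f}")
print(f"Active fund: ${active_fund:,.0f}")
print(f"Fee drag:    ${index_fund - active_fund:,.0f}")
```

Even a ~1.3-point fee difference, compounded for decades, leaves the active investor tens of percent poorer - the quiet engine behind Vanguard's Counter-Positioning.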

  2. A New Valuation Formula. Helmer offers a unique take on the traditional DCF valuation approach. Investors have long held that the value of any business equals the present value of its future discounted cash flows. In contrast to the traditional approach of summing up a firm's cash flows and discounting them, Helmer looks at all of the cash flows of the industry in which the firm competes. In this formula (shown above), M0 represents the current market size, g the discounted market growth factor, s the long-term market share of the company, and m the long-term differential margin (net profit margin over that needed to cover the cost of capital). More simply, a company is worth its Market Scale (M0 x g) times its Power (s x m). This implies that a company is worth the portion of the industry's profits it collects over time. This formula helps frame Power progression relative to industry dynamics and company stage. In the Origination stage, an industry's profits may be small but growing very quickly. If we think a competitor in the industry can establish a true Power, it will likely capture a large portion of the long-term market. Thus, watching market share dynamics unfold can tell us about the potential routes to Power and a company's ability to achieve a value superior to its near-term cash flows.
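Helmer's formula is just a product of four terms, so it can be sketched in a few lines. The example inputs below are hypothetical, not from the book:

```python
def helmer_value(m0, g, s, m):
    """Helmer's value formula: Market Scale (M0 * g) times Power (s * m).

    m0: current market size (in $)
    g:  discounted market growth factor
    s:  long-term market share
    m:  long-term differential margin (net margin above the cost of capital)
    """
    return (m0 * g) * (s * m)

# Hypothetical: a $5B market with a 3x discounted growth factor, where the
# company captures 20% long-term share at a 15% differential margin.
value = helmer_value(5e9, 3, 0.20, 0.15)
print(f"Implied value: ${value / 1e9:.2f}B")  # -> Implied value: $0.45B
```

Note how the formula splits cleanly into an industry question (M0 x g) and a company question (s x m) - a business with no Power (m near zero) is worth little no matter how big its market.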

  3. Collateral Damage. If companies are aware of these Powers and how other companies can achieve them, why don't they take proactive action to avoid being on the losing end of a Power struggle? Helmer lays out what he calls Collateral Damage: an incumbent's inability to find the right path through the damage caused by a challenger's Power. His point is actually very nuanced - it's not simply the incumbent's unwillingness to invest in the same type of solution as the challenger (although that happens). The incumbent's business gets trashed as collateral damage by the new entrant. The incumbent can respond to the challenger by investing in the new innovation. But Counter-Positioning really takes hold when the incumbent recognizes the attractiveness of the business model or innovation but is stymied from investing. Why would a business leader choose not to invest in something attractive? In the case of Vanguard competitor Fidelity, any move into passive funds would have caused steep cannibalization of its revenue. So in response, a CEO might decide to just keep the existing business and "milk" all of its cash flow. Besides, how could Fidelity invest in a business that completely undermined its actively managed mutual fund business? CEOs often have a negative bias toward the competing business model despite the positive NPV of an investment in the new business. Just think how long it took SAP to start selling cloud subscriptions compared to its on-premise license/maintenance model. Lastly, a CEO might not invest in the promising new business model out of worry about job security. This is the classic principal-agent problem we discussed in June. Would you invest in a new, unproven business model if you faced a declining stock price and calls for your resignation? In addition, annual CEO compensation is frequently tied to stock price performance and growth targets. 
The easiest way to achieve near-term stock price appreciation and growth targets is to stay with what has worked in the past (and M&A!). It's the path of least resistance! Counter-Positioning and Collateral Damage are nuanced and difficult to spot in the moment, but the complex emotions and issues become obvious over time.

Dig Deeper

  • The 7 Powers with Hamilton Helmer & Jeff Lawson (CEO of Twilio)

  • Hamilton Helmer Discusses 7Powers with Acquired Podcast

  • Vanguard Founder Jack Bogle's '90s Interview Shows His Investing Philosophy

  • Bernard Arnault, Chairman and CEO of LVMH | The Brave Ones

  • S-curves in Innovation

tags: Hamilton Helmer, 7 Powers, Reed Hastings, Netflix, SAP, Snowflake, Amazon, TSMC, Tiktok, Bytedance, BCG, iPhone, Apple, LVMH, Google, Clayton Christensen, S-Curve, Steve Jobs, John Lasseter, Ed Catmull, Toyota, Vanguard, Fidelity, Cloudflare
categories: Non-Fiction
 

April 2021 - Innovator's Solution by Clayton Christensen and Michael Raynor

This month we take another look at disruptive innovation in the companion piece to Clayton Christensen's Innovator's Dilemma, our July 2020 book. The book crystallizes the types of disruptive innovation and provides frameworks for how incumbents can introduce or combat these innovations. It was a pleasure to read and will serve as a great reference in the future.

Tech Themes

  1. Integration and Outsourcing. Today, technology companies rely on a variety of software tools and open source components to build their products. When you stitch all of these components together, you get the full product architecture. A great example is Gitlab, an SMB DevOps provider: it uses Postgres for its relational database, Redis for caching, NGINX for request routing, Sentry for monitoring and error tracking, and so on. Each of these subsystems interacts with the others to form the powerful Gitlab product. These interaction points are called interfaces. The key product development question for companies is: "Which things do I build internally and which do I outsource?" A simple answer offered by many MBA students is "Outsource everything that is not part of your core competence." As Clayton Christensen points out, "The problem with core-competence/not-your-core-competence categorization is that what might seem to be a non-core activity today might become an absolutely critical competence to have mastered in a proprietary way in the future, and vice versa." A great example that we've discussed before is IBM's decision to go with Microsoft DOS for its operating system and Intel for its microprocessor. At the time, IBM thought it was making a strategic decision to outsource things that were not within its core competence, but it inadvertently gave almost all of the industry profits from personal computing to Intel and Microsoft. Competitors copied this modular approach and the whole industry slugged it out on price. The question of whether to outsource really depends on what might be important in the future. But that is difficult to predict, so the question of integration vs. outsourcing really comes down to the state of the product and market itself: is this product "not good enough" yet? If the answer is yes, then a proprietary, integrated architecture is likely needed just to make the product work for customers. 
Over time, as competitors enter the market and the fully integrated platform becomes commoditized, the individual subsystems become increasingly important competitive drivers. So the decision to outsource or build internally must be made based on the state of the product and the market it's attacking.

  2. Commoditization within Stacks. The above point leads to the question of how companies fall into the commoditization trap. It happens through overshooting, where companies create products that are too good (which I find counter-intuitive - who thought that doing your job really well would cause customers to leave!). Christensen describes this through the lens of a salesperson: "'Why can't they see that our product is better than the competition? They're treating it like a commodity!' This is evidence of overshooting…there is a performance surplus. Customers are happy to accept improved products, but unwilling to pay a premium price to get them." At this point, what customers demand flips - they become willing to pay premium prices for innovations along a new trajectory of performance, most likely speed, convenience, and customization. "The pressure of competing along this new trajectory of improvement forces a gradual evolution in product architectures, away from the interdependent, proprietary architectures that had the advantage in the not-good-enough era toward modular designs in the era of performance surplus. In a modular world, you can prosper by outsourcing or by supplying just one element." This process of integration, to modularization, and back is super fascinating. As an example of modularization, take Confluent, the company behind the open-source streaming project Apache Kafka. Confluent offers a real-time communications service that allows companies to stream data (as events) rather than batching large data transfers. Its product is often a subsystem underpinning real-time applications, like providing data to traders at Citigroup. Clearly, the basis of competition in trading has pivoted over the years as more and more banking companies offer the service. 
Companies are prioritizing a new axis, speed, to differentiate among competing services, and when speed is the basis of competition, you use Confluent and Kafka to beat out the competition. Now fast forward five years and assume all banks use Kafka and Confluent for their traders; the modular subsystem is thus commoditized. What happens? I'd posit that the axis would shift again, maybe toward convenience or customization, where traders want specific info displayed on a mobile phone or tablet. The fundamental idea is that "Disruption and commoditization can be seen as two sides of the same coin. That's because the process of commoditization initiates a reciprocal process of de-commoditization [somewhere else in the stack]."
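The batch-versus-streaming distinction above can be sketched in a few lines of plain Python. This is a toy model, not the actual Kafka client API:

```python
def batch_pipeline(events, batch_size):
    """Deliver events only once a full batch has accumulated."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush any partial batch at the end
        yield batch

def stream_pipeline(events):
    """Deliver each event the moment it arrives - the Kafka-style model."""
    for event in events:
        yield [event]

trades = ["AAPL buy", "MSFT sell", "GOOG buy"]
print(list(batch_pipeline(trades, 3)))  # one delivery of all three events
print(list(stream_pipeline(trades)))    # three immediate deliveries
```

In the batch model a trader waits for the whole transfer to complete; in the streaming model each trade is visible immediately, which is why speed-sensitive applications reach for Kafka.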

  3. The Disruptor Becomes the Disrupted. Disruption is a relative term. As we've discussed previously, disruption is often mischaracterized as startups entering markets and challenging incumbents. Disruption is really a focused, contextual concept whereby products that are "not good enough" by market standards enter a market with a simpler, more convenient, or less expensive product. These products and markets are often dismissed by incumbents, or even ceded by market leaders as those leaders move up-market to chase even bigger customers. It's fascinating to watch the disruptive become the disrupted. A great example is department stores - initially, Macy's offered a massive selection that couldn't be found in any single store, and customers loved it. It did this by turning inventory three times per year at 40% gross margins, for a 120% return on capital invested in inventory. In the 1960s, Walmart and Kmart attacked the full-service department stores by offering a similar selection at much cheaper prices. They did this by setting up a value system whereby they made 23% gross margins but turned inventories 5 times per year, enabling them to earn the industry's golden 120% return on capital invested in inventory. Full-service department stores decided not to compete against these lower-gross-margin products and shifted more shelf space to beauty and cosmetics, which offered even higher gross margins (55%) than the 40% they were used to. This meant they could increase their return on capital invested in inventory, and their profits, while avoiding a competitive threat. The process continued, with discount stores eventually pushing Macy's out of most categories until Macy's had nowhere to go. All of a sudden, the initially disruptive department stores had become disrupted. We see this in technology markets as well. I'm not 100% sure this qualifies, but think about Salesforce and Oracle. 
Marc Benioff spent a number of years at Oracle and left to start Salesforce, which pioneered selling subscription cloud software on a per-seat revenue model. This made it a much cheaper option compared to traditional Oracle/Siebel CRM software. Salesforce was initially adopted by smaller customers that didn't need the feature-rich platform offered by Oracle. Oracle dismissed Salesforce as competition even as Oracle CEO Larry Ellison seeded Salesforce and sat on its board. Today, Salesforce is a $200B company and briefly passed Oracle in market cap a few months ago. But now Salesforce has raised its prices and mostly targets large enterprise buyers to hit its ambitious growth targets. Down-market competitors like Hubspot have entered with cheaper solutions and more fully integrated marketing tools to help smaller businesses that aren't ready for a fully featured Salesforce platform. Disruption is always contextual, and it never stops.
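The department store arithmetic above can be checked directly: return on capital invested in inventory is roughly gross margin earned per turn times inventory turns per year.

```python
def return_on_inventory_capital(gross_margin, turns_per_year):
    # Each inventory turn earns the gross margin on the capital tied up in
    # inventory, so annual return ~= margin per turn * turns per year.
    return gross_margin * turns_per_year

macys = return_on_inventory_capital(0.40, 3)    # full-service department store
walmart = return_on_inventory_capital(0.23, 5)  # discount store
print(f"Macy's: {macys:.0%}, Walmart: {walmart:.0%}")
```

Walmart's 115% is the book's "roughly 120%": a structurally lower margin recovered through faster turns, which is exactly the financial-model shift that lets a low-end disruptor match the incumbent's returns at lower prices.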

Business Themes

1_fnX5OXzCcYOyPfRHA7o7ug.png
  1. Low-End-Market vs. New-Market Disruption. There are two established types of disruption: low-end-market (down-market) and new-market. Low-end-market disruption seeks to establish performance that is "not good enough" along traditional lines, and targets overserved customers in the low end of the mainstream market. It typically utilizes a new operating or financial approach with structurally different margins than up-market competitors. Amazon.com is a quintessential low-end-market disruptor compared to traditional bookstores, offering prices so low they angered book publishers while offering customers the unmatched convenience of purchasing books online. In contrast, Robinhood is a great example of new-market disruption. Traditional discount brokerages like Charles Schwab and Fidelity had been around for a while (themselves disruptors of full-service models like Morgan Stanley Wealth Management). But Robinhood targeted a group of people who weren't consuming in the market, namely teens and millennials, and it did so with an easy-to-use app with a much better user interface than Schwab's or Fidelity's. Robinhood also pioneered zero-fee trading and made revenue via a different financial approach, payment for order flow (PFOF): market makers, like Citadel Securities, pay Robinhood to route its customers' orders to them for execution. When approaching big markets, it's important to ask: Is this targeted at a non-consumer today, or am I competing at a structurally lower margin with a new financial model and a "not quite good enough" product? The answer determines whether you are pursuing a low-end-market disruption or a new-market disruption.

  2. Jobs To Be Done. The Jobs to Be Done framework was one of the most important frameworks Clayton Christensen ever introduced. Marketers typically use advertising platforms like Facebook and Google to target specific demographics with their ads. These segments are narrowly defined: "Males over 55, living in New York City, with household income above $100,000." The issue with this categorization method is that while these attributes may be correlated with a product purchase, customers do not behave exactly the way marketers expect based on their attributes. There may be a correlation, but simply targeting certain demographics does not yield a great result. Marketers need to understand why the customer is adopting the product. This is where the Jobs to Be Done framework comes in. As Christensen describes it, "Customers - people and companies - have 'jobs' that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can 'hire' to get the job done. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently, and inexpensively as possible." Christensen zeroes in on the contextual adoption of products; it is the circumstance, not the demographics, that matters most. Christensen describes ways to view competition and feature development through the Jobs to Be Done lens using Blackberry (later disrupted by the iPhone) as an example. While the immature smartphone market was seeing feature competition from Microsoft, Motorola, and Nokia, Blackberry and its parent company RIM came out with a simple-to-use device built for short productivity bursts whenever time was available. 
This meant RIM leaned into features that competed not with other smartphone providers (on things like better cellular reception), but rather enabled these easy "productive" sessions: email, Wall Street Journal updates, and simple games. The Blackberry was later disrupted by the iPhone, which offered more interesting applications in an easier-to-use package. Interestingly, the first iPhone shipped without an app store (as a proprietary, interdependent product) and was viewed as not good enough for work purposes, allowing the Blackberry to co-exist. RIM's management even dismissed the iPhone as a competitor initially. It wasn't long until the iPhone caught up and eventually surpassed the Blackberry as the world's leading mobile phone.

  3. Brand Strategies. Companies may choose to address customers in a number of different circumstances and address a number of Jobs to Be Done. It's important that the company establish specific ways of communicating the circumstance to the customer. Branding is powerful - something that Warren Buffett, Terry Smith, and Clayton Christensen have all recognized as a durable growth driver. As Christensen puts it: "Brands are, at the beginning, hollow words into which marketers stuff meaning. If a brand's meaning is positioned on a job to be done, then when the job arises in a customer's life, he or she will remember the brand and hire the product. Customers pay significant premiums for brands that do a job well." So what can a large corporate company do when faced with a disruptive challenger to its branding turf? It's simple - add a word to the leading brand, targeted at the circumstance in which a customer might find themselves. Think about Marriott, one of the leading hotel chains. It offers a number of hotel brands: Courtyard by Marriott for business travel, Residence Inn by Marriott for a home away from home, the Ritz-Carlton for high-end luxurious stays, and Marriott Vacation Club for resort destinations. Each brand is targeted at a different Job to Be Done, and customers intuitively understand what the brands stand for based on experience or advertising. A great technology example is Amazon Web Services (AWS), the cloud computing division of Amazon.com. Amazon pioneered the cloud, and rather than launch under the Amazon.com brand, which might have confused its normal e-commerce customers, it created a completely new brand targeted at a different set of buyers and problems that maintained the quality and recognition Amazon had become known for. Another great retail example is the SNKRS app released by Nike. 
Nike understands that some customers are sneakerheads who want to know about all the latest Nike shoe drops, so Nike created a distinct branded app called SNKRS that gives news and updates on the latest, trendiest sneakers. These buyers might not be interested in logging into the main Nike app and might grow frustrated sifting through all the different types of apparel Nike offers just to find new shoes. The SNKRS app offers this set of consumers an easy way to find what they are looking for (convenience), which benefits Nike's core business. Branding is powerful, and understanding the Job to Be Done helps focus the right brand on the right job.

Dig Deeper

  • Clayton Christensen’s Overview on Disruptive Innovation

  • Jobs to Be Done: 4 Real-World Examples

  • A Peek Inside Marriott’s Marketing Strategy & Why It Works So Well

  • The Rise and Fall of Blackberry

  • Payment for Order Flow Overview

  • How Commoditization Happens

tags: Clayton Christensen, AWS, Nike, Amazon, Marriott, Warren Buffett, Terry Smith, Blackberry, RIM, Microsoft, Motorola, iPhone, Facebook, Google, Robinhood, Citadel, Schwab, Fidelity, Morgan Stanley, Oracle, Salesforce, Walmart, Macy's, Kmart, Confluent, Kafka, Citigroup, Intel, Gitlab, Redis
categories: Non-Fiction
 

March 2021 - Payments Systems in the U.S. by Carol Coye Benson, Scott Loftesness, and Russ Jones

This month we dive into the fintech space for the first time! Glenbrook Partners is a famous payments consulting company. This classic book describes the history and current state of the many financial systems we use every day. While the book is a bit dated and reads like a textbook, it throws in some great real-world observations and provides a great foundation for any payments novice!

Tech Themes

  1. Mapping Open-Loop and Closed-Loop Networks. The major credit and debit card providers (Visa, Mastercard, American Express, China UnionPay, and Discover) all compete for the same spots in customer wallets but have unique and differing backgrounds and mechanics. The first credit card on the scene was the BankAmericard in the late 1950s. As it took off, Bank of America started licensing the technology all across the US and created National BankAmericard Inc. (NBI) to facilitate its card program. NBI merged with its international counterpart (IBANCO) to form Visa in the mid-1970s. Another group of California banks had created the Interbank Card Association (ICA) to compete, and in 1979 its card was renamed Mastercard. Both organizations remained owned by the banks until their IPOs in 2006 (Mastercard) and 2008 (Visa). Both companies run what are known as open-loop networks: they work with any bank and rely on banks to sign up customers and merchants. As the book points out, "This structure allows the two end parties to transact with each other without having direct relationships with each other's banks." This convenient feature of open-loop payment systems means that they can scale incredibly quickly. Any time a bank signs up a new customer or merchant, that customer or merchant immediately has access to the network of all other banks on the Mastercard/Visa network. In contrast to open-loop systems, American Express and Discover operate largely closed-loop systems, where they enroll each merchant and customer individually. Because of this onerous task of finding and signing up every single consumer and merchant, Amex and Discover cannot scale to nearly the size of Visa/Mastercard. However, there is no bank intermediation, and the networks get total access to all transaction data, making them a go-to solution for things like loyalty programs, where a merchant may want to leverage data to target specific brand benefits at a customer. 
Open-loop systems like Apple Pay (it's tied to your existing bank-issued card) and closed-loop systems like Starbucks' app (funds are pre-loaded and can only be redeemed at Starbucks) can be found everywhere. Even Snowflake, the data warehouse provider and subject of last month's TBOTM, is a closed-loop payments network: customers buy Snowflake credits up front, which can only be redeemed for Snowflake compute services. In contrast, AWS and other clouds are beginning to offer more open-loop-style arrangements, where AWS credits can be redeemed against non-AWS software. Side note - these credit systems and odd pricing structures deliberately mislead customers and obfuscate actual costs, allowing the cloud companies to better control gross margins and revenue growth. It's fascinating to view the world through this open-loop/closed-loop dynamic.
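A toy model (all names and numbers hypothetical) makes the scaling difference concrete: enrolling one bank in an open-loop network instantly connects that bank's customers and merchants to everyone already on the network, while a closed-loop network must enroll every party one at a time.

```python
class OpenLoopNetwork:
    """Visa/Mastercard-style: banks join, bringing all their end parties."""
    def __init__(self):
        self.banks = {}  # bank name -> (customers, merchants)

    def enroll_bank(self, name, customers, merchants):
        self.banks[name] = (customers, merchants)

    def reachable_pairs(self):
        # Any enrolled customer can pay any enrolled merchant,
        # regardless of which bank signed them up.
        customers = sum(c for c, _ in self.banks.values())
        merchants = sum(m for _, m in self.banks.values())
        return customers * merchants

class ClosedLoopNetwork:
    """Amex/Discover-style: the network enrolls each party individually."""
    def __init__(self):
        self.customers = 0
        self.merchants = 0

    def enroll_customer(self):
        self.customers += 1

    def enroll_merchant(self):
        self.merchants += 1

    def reachable_pairs(self):
        return self.customers * self.merchants

network = OpenLoopNetwork()
network.enroll_bank("Bank A", customers=1_000, merchants=100)
network.enroll_bank("Bank B", customers=2_000, merchants=300)
print(network.reachable_pairs())  # 3,000 customers x 400 merchants = 1,200,000
```

Two bank deals create over a million customer-merchant connections; the closed-loop operator would need thousands of individual sign-ups to reach the same coverage.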

  2. New Kids on the Block - What are Stripe, Adyen, and Marqeta? Stripe recently raised at a minuscule valuation of $95B, making it the highest-valued private startup (ever?!). Marqeta, its API/card-issuing counterpart, is prepping a 2021 IPO that may value it at $10B. Adyen, a Dutch public company, is worth close to $60B (Visa is worth $440B for comparison). Stripe and Marqeta are API-based payment service providers, which allow businesses to easily accept online payments and issue debit and credit cards for a variety of use cases. Adyen is a merchant account provider, meaning it actually maintains the merchant account used to run a company's business - this often comes with enormous scale benefits and reduced costs, which is why large customers like Nike have opted for Adyen. The merchant account clearing process can take quite a while, which is why Stripe is focused on SMBs - a business can sign up as a Stripe customer and almost immediately begin accepting online payments. Stripe's and Marqeta's APIs allow a seamless integration into payment checkout flows. On top of this basic but now highly simplified use case, Stripe and Marqeta (and Adyen) allow companies to issue debit and credit cards for all sorts of use cases. This is creating an absolute BOOM in fintech, as companies try new and innovative ways of issuing credit/debit cards - such as expense management, banking-as-a-service, and buy-now-pay-later. Why is this such a big thing now, when Stripe, Adyen, and Marqeta were all created before 2011? In 2016, Visa launched its first developer APIs, which allowed companies like Stripe, Adyen, and Marqeta to become licensed Visa card issuers - now any merchant could issue its own branded Visa card. That is why Andreessen Horowitz's fintech partner Angela Strange proclaimed, "Every company will be a fintech company" (though this is also clearly some VC marketing)! 
Mastercard followed suit in 2019, launching its open API platform, the Mastercard Innovation Engine. The big networks decided to support innovation - Visa is an investor in Stripe and Marqeta, AmEx is an investor in Stripe, and Mastercard is an investor in Marqeta. Surprisingly, no network providers are investors in Adyen. Fintech history shows upstarts outgrowing the incumbents (Visa and Mastercard are bigger than the banks, with much better business models) - will the same happen here?

  3. Building a High-Availability System. Do Mastercard and Visa have the highest availability needs of any system? Obviously, people are angry when Slack or Google Cloud goes down, but think about how many people are affected when Visa or Mastercard goes down. In 2018, a UK hardware failure prompted a five-hour outage at Visa: "Disgruntled customers at supermarkets, petrol stations and abroad vented their frustrations on social media when there was little information from the financial services firm. Bank transactions were also hit." High availability is a measure of system uptime: "Availability is often expressed as a percentage indicating how much uptime is expected from a particular system or component in a given period of time, where a value of 100% would indicate that the system never fails. For instance, a system that guarantees 99% of availability in a period of one year can have up to 3.65 days of downtime (1%)." According to Statista, Visa handles ~185B transactions per year (a cool ~6,000 per second), while UnionPay comes in second with 131B and Mastercard third with 108B. For the last twelve months ended June 30, 2020, Visa processed $8.7T in payment volume, which means the average transaction was ~$47. At 6,000 transactions per second, Visa loses roughly $282,000 in payment volume every second it's down. Mastercard and Visa have historically been very cagey about disclosing data center operations (the only article I could find is from 2013), though they control their own operations much like other technology giants. "One of the keys to the [Visa] network's performance, Quinlan says, is capacity. And Visa has lots of it. Its two data centers--which are mirror images of each other and can operate interchangeably--are configured to process as many as 30,000 simultaneous transactions, or nearly three times as much as they've ever been asked to handle. 
Inside the pods, 376 servers, 277 switches, 85 routers, and 42 firewalls--all connected by 3,000 miles of cable--hum around the clock, enabling transactions around the globe in near real-time and keeping Visa's business running." The data infrastructure challenges payment systems face are massive, and yet they all seem to perform very well. I'd love to learn more about how they do it!
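The availability and throughput arithmetic above is easy to reproduce using the quoted figures (Visa's true numbers vary year to year):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

# 99% availability permits ~3.65 days of downtime per year.
downtime_days = (1 - 0.99) * 365

# ~185B transactions/year and $8.7T in annual payment volume (quoted above).
txns_per_second = 185e9 / SECONDS_PER_YEAR      # ~5,900/s (the "cool 6,000")
avg_transaction = 8.7e12 / 185e9                # ~$47 average ticket
lost_per_second_down = 6_000 * avg_transaction  # ~$282,000 at the rounded rate

print(f"Downtime allowed at 99%: {downtime_days:.2f} days/year")
print(f"Rate: {txns_per_second:,.0f} txns/s, avg ticket ${avg_transaction:.0f}")
print(f"Volume lost per second of outage: ~${lost_per_second_down:,.0f}")
```

The same formula shows why payment networks chase many more nines than 99%: at "five nines" (99.999%), allowable downtime drops from 3.65 days to about five minutes per year.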

Business Themes

interchange_fee.jpg
Interchange.png
  1. What is interchange and why does it exist? BigCommerce has a great, simple definition for interchange: "Interchange fees are transaction fees that the merchant's bank account must pay whenever a customer uses a credit/debit card to make a purchase from their store. The fees are paid to the card-issuing bank to cover handling costs, fraud and bad debt costs and the risk involved in approving the payment." What is crazy about interchange is that it is not the banks but the networks (Mastercard, Visa, China UnionPay) that set interchange rates. On top of that, the networks set the rates but receive no revenue from interchange itself. As the book points out: "Since the card network's issuing customers are the recipients of interchange fees, the level of interchange that a network sets is an important element in the network's competitive position. A higher level of interchange on one network's card products naturally makes that network's card products more attractive to card issuers." The incentives here are wild - the card issuers (banks) want higher interchange because they receive the interchange from the merchant's bank in a transaction, and the card networks want more card-issuing customers, so offering higher interchange rates better positions them in competitive battles. The merchant is left worse off by higher interchange rates, as the merchant's bank almost always passes the fee on to the merchant itself ($100 received via credit card turns out to be only ~$97 by the time it reaches the merchant's bank account because of fees). Visa and Mastercard have different interchange rates for every type of transaction and acceptance method, making their fees a complicated nightmare to actually understand. The networks and their issuers may claim that increased interchange fees allow banks to invest more in fraud protection, risk management, and handling, but there is no way to verify this claim. 
This has caused a crazy war between merchants, the card networks, and the card issuers.
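The fee flow above can be made concrete with a rough sketch. All rates here are hypothetical placeholders - real interchange schedules vary by network, card type, and acceptance method, and the acquirer markup is an assumption for illustration:

```python
# Hypothetical rates only - real interchange varies by network, card type,
# and acceptance method; the acquirer markup here is an assumed placeholder.
def merchant_net(sale_amount, interchange_pct, interchange_fixed, acquirer_markup_pct=0.003):
    """Return what the merchant actually receives after fees."""
    interchange_fee = sale_amount * interchange_pct + interchange_fixed  # paid to the issuing bank
    acquirer_markup = sale_amount * acquirer_markup_pct                  # kept by the merchant's bank/processor
    return sale_amount - interchange_fee - acquirer_markup

# A $100 credit card sale at a hypothetical 2.4% + $0.10 interchange rate
print(round(merchant_net(100.00, 0.024, 0.10), 2))  # 97.2 - roughly the $97 in the example above
```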

  2. Why is Jamie Dimon so pissed about fintechs? In a recent interview, Jamie Dimon, CEO of JP Morgan Chase, called fintechs “examples of unfair competition.” Dimon is angry about the famous (or infamous) Durbin Amendment, a last-minute addition to the landmark Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. The Durbin Amendment attempted to cap the interchange amount that banks could charge and to tier interchange rates based on the assets of the bank. In theory, capping the rates would mean that merchants paid less in fees, and merchants would pass these lower fees on to consumers through lower prices, thus spurring demand. The tiering would mean banks with >$10B in assets would make less in interchange fees, leveling the playing field for smaller banks and credit unions. “The regulated [bank with >$10B in assets] debit fee is 0.05% + $0.21, while the unregulated is 1.60% + $0.05. Before the Durbin Amendment the fee was 1.190% + $0.10.” While this did lower debit card interchange, a few unintended consequences resulted: 1. Regulators expected that banks would make substantially less revenue, but failed to recognize that banks might increase other fees to offset this lost revenue stream: “Banks have cut back on offering rewards for their debit cards. Banks have also started charging more for their checking accounts or they require a larger monthly balance.” In addition, many smaller banks couldn’t recoup the lost revenue, leading to bankruptcies and consolidation. 2. Because a flat fee was introduced regardless of transaction size, smaller merchants were charged more in interchange than under the prior system (which was pro-rated based on the dollar amount). “One problem with the Durbin Amendment is that it didn’t take small transactions into account,” said Ellen Cunningham, processing expert at CardFellow.com. 
“On a small transaction, 22 cents is a bigger bite than on a larger transaction. Convenience stores, coffee shops and others with smaller sales benefited from the original system, with a lower per-transaction fee even if it came with a higher percentage.” These small retailers ended up raising prices in some instances to combat these additional fees - causing the law to have the opposite of its intended effect of lowering costs to consumers. Dimon is angry that this law has allowed fintech companies to charge higher prices for debit card transactions. As shown above, smaller banks earn substantially more in interchange fees. These smaller banks are moving quickly to partner with fintechs, which now power hundreds of millions of dollars in account balances, and Dimon believes they are not paying enough attention to anti-money laundering and fraud practices. In addition, fintechs are making money in suspect ways - Chime makes 21% of its revenue through high out-of-network ATM fees, and cash advance companies like Dave, Branch, and Earnin’ are offering what amount to payday loans to customers.
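Cunningham's point is easy to verify with the rates quoted above (regulated: 0.05% + $0.21; pre-Durbin: 1.190% + $0.10). A small sketch:

```python
def debit_fee(amount, pct, fixed):
    """Interchange fee = percentage of the sale plus a flat per-transaction fee."""
    return amount * pct + fixed

# Rates quoted above: regulated (bank with >$10B in assets) vs. pre-Durbin
for amount in (2.00, 100.00):
    regulated = debit_fee(amount, 0.0005, 0.21)   # 0.05% + $0.21
    pre_durbin = debit_fee(amount, 0.0119, 0.10)  # 1.190% + $0.10
    print(f"${amount:.2f} sale: regulated ${regulated:.2f} vs pre-Durbin ${pre_durbin:.2f}")
```

On a $2 coffee the regulated fee ($0.21) is nearly double the old pro-rated fee ($0.12), while on a $100 sale it is far lower ($0.26 vs $1.29) - exactly the small-merchant squeeze described above.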

  3. Mastercard and Visa: A history of regulation. Visa and Mastercard have been the subject of many regulatory battles over the years. The US Justice Department announced in March that it would be investigating Visa over online debit-card practices. In 1996, Visa and Mastercard were sued by merchants and settled for $3B. In 1998, the Department of Justice won a case against Visa and Mastercard for not allowing issuing banks to work with other card networks like AmEx and Discover. In 2009, Mastercard and Visa were sued by the European Union and forced to reduce debit card swipe fees by 0.2%. In 2012, Mastercard and Visa were sued for price-fixing fees and were forced to pay $6.25B in a settlement. The networks have been sued by the US, Europe, Australia, New Zealand, ATM operators, Intuit, Starbucks, Amazon, Walmart, and many more. Each time they have been forced to modify fees and practices to ensure competition. However, this has also reinforced their dominance as the biggest payment networks, which is why no serious competitors have emerged since the networks were created in the 1970s. Also, leave it to the banks to establish a revenue source so good that it is almost entirely undefeatable by legislation. When, if ever, will Visa and Mastercard not be dominant payments companies?

Dig Deeper

  • American Banker: Big banks, Big Tech face-off over swipe fees

  • Stripe Sessions 2019 | The future of payments

  • China's growth cements UnionPay as world's largest card scheme

  • THE DAY THE CREDIT CARD WAS BORN by Joe Nocera (Washington Post)

  • Mine Safety Disclosure’s 2019 Visa Investment Case

  • FineMeValue’s Payments Overview

tags: Visa, Mastercard, American Express, Discover, Bank of America, Stripe, Marqeta, Adyen, Apple, Open-loop, Closed-loop, Snowflake, AWS, Nike, BNPL, Andreessen Horowitz, Angela Strange, Slack, Google Cloud, UnionPay, BigCommerce, Jamie Dimon, Dodd-Frank, Durbin Amendment, JP Morgan Chase, Debit Cards, Credit Cards, Chime, Branch, Earnin', US Department of Justice, Intuit, Starbucks, Amazon, Walmart
categories: Non-Fiction
 

February 2021 - Rise of the Data Cloud by Frank Slootman and Steve Hamm

This month we read a new book by the CEO of Snowflake and author of our November 2020 book, Tape Sucks. The book covers Snowflake’s founding, products, strategy, industry-specific solutions, and partnerships. Although the content is somewhat interesting, it reads more like a marketing book than a genuinely useful guide to cloud data warehousing. Nonetheless, it’s a solid, quick read on the state of the data infrastructure ecosystem.

Tech Themes

  1. The Data Warehouse. A data warehouse is a type of database that is optimized for analytics. These optimizations mainly revolve around complex query performance, the ability to handle multiple data types, the ability to integrate data from different applications, and the ability to run fast queries across large data sets. In contrast to a normal database (like Postgres), a data warehouse is purpose-built for efficient retrieval of large data sets, not the high-performance read/write transactions of a typical relational database. The industry began in the late 1970s and early 1980s, driven by work done by the “Father of Data Warehousing” Bill Inmon and early competitor Ralph Kimball, who was a former Xerox PARC designer. Kimball launched Red Brick Systems in 1986, and Inmon launched Prism Solutions in 1991, with its leading product the Prism Warehouse Manager. Prism went public in 1995 and was acquired by Ardent Software in 1998 for $42M, while Red Brick was acquired by Informix for ~$35M in 1998. In the background, a company called Teradata, formed in the late 1970s by researchers from Caltech and employees from Citibank, was going through its own journey to the data warehouse. Teradata would IPO in 1987 and be acquired by NCR in 1991; NCR itself was acquired by AT&T that same year; NCR then spun out of AT&T in 1997, and Teradata spun out of NCR through an IPO in 2007. What a whirlwind of corporate acquisitions! Around that time, other new data warehouses were popping up on the scene, including Netezza (launched in 1999) and Vertica (2005). Netezza, Vertica, and Teradata were great solutions, but they ran highly efficient data warehouses on physical, on-premise hardware. The issue was, as data grew, it became really difficult to add more hardware boxes and to manage queries optimally across the disparate hardware. 
Snowflake wanted to leverage the unlimited storage and computing power of the cloud to allow for infinitely scalable data warehouses. This was an absolute game-changer as early customer Accordant Media described, “In the first five minutes, I was sold. Cloud-based. Storage separate from compute. Virtual warehouses that can go up and down. I said, ‘That’s what we want!’”

  2. Storage + Compute. Snowflake was launched in 2012 by Benoit Dageville (Oracle), Thierry Cruanes (Oracle) and Marcin Żukowski (Vectorwise). Mike Speiser and Sutter Hill Ventures provided the initial capital to fund the formation of the company. After numerous whiteboarding sessions, the technical founders decided to try something crazy: separating data storage from compute (processing power). This allowed Snowflake’s product to scale storage independently and put tons of computing power behind very complex queries. What may have been limited by Vertica hardware was now possible with Snowflake. At this point, the cloud had only been around for about five years, and unlike today, there were only a few services offered by the main providers. The team took a huge risk to 1) bet on the long-term success of the public cloud providers and 2) try something that had never successfully been accomplished before. When they got it to work, it felt like magic. “One of the early customers was using a $20 million system to do behavioral analysis of online advertising results. Typically, one big analytics job would take about thirty days to complete. When they tried the same job on an early version of Snowflake’s data warehouse, it took just six minutes. After Mike learned about this, he said to himself: ‘Holy shit, we need to hire a lot of sales people. This product will sell itself.’” This idea was so crazy that not even Amazon (where Snowflake runs) thought of unbundling storage and compute when they built their cloud-native data warehouse, Redshift, in 2013. Funny enough, Amazon also sought to attract people away from Oracle, hence the name Red-Shift. It would take Amazon almost seven years to re-design their data warehouse to separate storage and compute in Redshift RA3, which launched in 2019. 
On top of these functional benefits, there is a massive gap in the cost of storage and the cost of compute and separating the two made Snowflake a significantly more cost-competitive solution than traditional hardware systems.
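The separation of storage and compute can be sketched in a toy model. This is a gross simplification, not Snowflake's actual architecture - the class and table names here are made up for illustration:

```python
# Toy model of storage/compute separation: one shared storage layer, many
# independently sized "virtual warehouses" that can be added, resized, or
# paused without moving any data. Names are illustrative, not Snowflake's API.
class Storage:
    def __init__(self):
        self.tables = {}

class VirtualWarehouse:
    def __init__(self, storage, size):
        self.storage = storage
        self.size = size  # compute scales independently of data volume

    def count_rows(self, table):
        return len(self.storage.tables.get(table, []))

shared = Storage()
shared.tables["events"] = list(range(1000))

etl_wh = VirtualWarehouse(shared, size="XS")  # small warehouse for loading
bi_wh = VirtualWarehouse(shared, size="XL")   # big warehouse for analyst queries
print(etl_wh.count_rows("events"), bi_wh.count_rows("events"))  # both see the same 1000 rows
```

In the coupled appliance model (Netezza, Vertica, Teradata), adding compute meant adding boxes that also held the data; here, either side scales on its own.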

  3. The Battle for Data Pipelines. A typical data pipeline consists of pulling data from many sources, performing ETL/ELT (extract, transform, load - or extract, load, transform), centralizing it in a data warehouse or data lake, and connecting that data to visualization tools like Tableau or Looker. All parts of this data stack are facing intense competition. On the ETL/ELT side, you have companies like Fivetran and Matillion, and on the data warehouse/data lake side you have Snowflake and Databricks. Fivetran focuses on the extract and load portion of ETL, providing a data integration tool that allows you to connect to all of your operational systems (Salesforce, Zendesk, Workday, etc.) and pull them all together in Snowflake for comprehensive analysis. Matillion is similar, except it connects to your systems, imports raw data into Snowflake, and then transforms it (checking for NULLs, ensuring matching records, removing blanks) in your Snowflake data warehouse. Matillion thus focuses on the load and transform steps in ETL, while Fivetran focuses on the extract and load portions and leverages dbt (data build tool) to do transformations. The data warehouse vs. data lake debate is a complex and highly technical discussion, but it mainly comes down to Databricks vs. Snowflake. Databricks is primarily a machine learning platform that allows you to run Apache Spark (an open-source ML framework) at scale. Databricks’s main product, Delta Lake, allows you to store all data types - structured and unstructured - for real-time and complex analytical processes. As Datagrom points out here, the platforms come down to three differences: data structure, data ownership, and use case versatility. Snowflake requires structured or semi-structured data prior to running a query while Databricks does not. 
Similarly, while Snowflake decouples data storage from compute, it does not decouple data ownership, meaning Snowflake maintains all of your data, whereas you can run Databricks on top of any data source you have, whether it be on-premise or in the cloud. Lastly, Databricks acts more as a processing layer (able to function in code like Python as well as SQL) while Snowflake acts as a query and storage layer (mainly driven by SQL). Snowflake performs best with business intelligence querying while Databricks performs best with data science and machine learning. Both platforms can be used by the same organizations, and I expect both to be massive companies (Databricks recently raised at a $28B valuation!). All of these tools are blending together and competing against each other - Databricks just launched a new Lakehouse (data lake + data warehouse - I know the name is hilarious) and Snowflake is leaning heavily into its data lake. We will see who wins!
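The extract/load/transform split described above can be sketched in miniature. This is a toy illustration only - the source systems, table names, and cleanup rules are hypothetical stand-ins for what tools like Fivetran, Snowflake, and dbt do at scale:

```python
# Toy end-to-end ELT flow mirroring the pattern described above.
# All sources, table names, and cleanup rules are hypothetical.

def extract(sources):
    """Pull raw records from each operational system (CRM, support desk, etc.)."""
    return [row for source in sources for row in source]

def load(warehouse, table, rows):
    """Land raw data in the warehouse first - transformation happens later (ELT)."""
    warehouse.setdefault(table, []).extend(rows)

def transform(warehouse, table):
    """In-warehouse cleanup: drop records with NULL/blank customer ids."""
    warehouse[table] = [r for r in warehouse[table] if r.get("customer_id")]

crm = [{"customer_id": 1, "plan": "pro"}, {"customer_id": None, "plan": "free"}]
support = [{"customer_id": 1, "tickets": 3}]

warehouse = {}
load(warehouse, "raw_events", extract([crm, support]))
transform(warehouse, "raw_events")
print(len(warehouse["raw_events"]))  # 2 clean rows remain after the NULL is dropped
```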

An interesting data platform battle is brewing that will play out over the next 5-10 years: The Data Warehouse vs the Data Lakehouse, and the race to create the data cloud

Who's the biggest threat to @snowflake? I think it's @databricks, not AWS Redshift https://t.co/R2b77XPXB7

— Jamin Ball (@jaminball) January 26, 2021

Business Themes

  1. Marketing Customers. This book, at its core, is a marketing document. Sure, it gives a nice story of how the company was built, the insights of its founding team, and some obstacles they overcame. But the majority of the book is just an “Imagine what you could do with data” exploration across a variety of industries and use cases. It’s not good or bad, but it’s an interesting way of marketing - that’s for sure. It’s annoying that the book spends so little time on the technology and actual company building. Our May 2019 book, The Everything Store, about Jeff Bezos and Amazon, was perfect because it covered all of the decision making and challenging moments required to build a long-term company. This book just talks about customer and partner use cases over and over. Slootman’s section is only about 20 pages, and five of them cover case studies from Square, Walmart, Capital One, Fair, and Blackboard. I suspect it may be due to the controversial ousting of their long-time CEO Bob Muglia in favor of Frank Slootman, co-author of this book. As this Forbes article noted: “Just one problem: No one told Muglia until the day the company announced the coup. Speaking publicly about his departure for the first time, Muglia tells Forbes that it took him months to get over the shock.” One day we will hear the actual unfiltered story of Snowflake and it will make for an interesting comparison to this book.

  2. Timing & Building. We often forget how important timing is in startups. Being the right investor or company at the right time can do a lot to drive unbelievable returns. Consider Don Valentine at Sequoia in the early 1970s. We know that venture capital fund performance persists, in part due to incredible branding at firms like Sequoia that has built up over years and years (obviously reinforced by top-notch talents like Mike Moritz and Doug Leone). Don was a great investor and took significant risks on unproven individuals like Steve Jobs (Apple), Nolan Bushnell (Atari), and Trip Hawkins (EA). But he also had unfettered access to the birth of an entirely new ecosystem and knowledge of how that ecosystem would change business, built up from his years at Fairchild Semiconductor. Don was a unique person who capitalized on that incredible knowledge base, veritably creating the VC industry. Sequoia is a top firm because he was in the right place at the right time with the right knowledge. Now let’s cover some companies that weren’t: Cloudera, Hortonworks, and MapR. In 2005, Doug Cutting and Mike Cafarella, inspired by the Google File System paper, created Hadoop, a distributed file system for storing and accessing data like never before. Hadoop spawned companies like Cloudera, Hortonworks, and MapR that were built to commercialize the open-source Hadoop project. All of the companies came out of the gate fast with big funding - Cloudera raised $1B at a $4B valuation prior to its 2017 IPO, Hortonworks raised $260M at a $1B valuation prior to its 2014 IPO, and MapR raised $300M before it was acquired by HPE in 2019. The companies all had one problem in common, however: they were on-premise and built before the cloud gained traction. That meant it required significant internal expertise and resources to run Cloudera, Hortonworks, and MapR software. 
In 2018, Cloudera and Hortonworks merged (at a $5B valuation) because competitive pressure from the cloud was eroding both of their businesses. MapR was quietly acquired for less than it raised. Today Cloudera trades at a $5B valuation, meaning no shareholder return since the merger, and the business has only recently become slightly profitable at its current low growth rate. This cautionary case study shows how important timing is and how difficult it is to build a lasting company in the data infrastructure world. As the new analytics stack is built with Fivetran, Matillion, dbt, Snowflake, and Databricks, it will be interesting to see which companies exist 10 years from now. It’s probable that some new technology will come along and hurt every company in the stack, but for now the coast is clear - the scariest time for any of these companies.

  3. Burn Baby Burn. Snowflake burns A LOT of money. In the nine months ended October 31, 2020, Snowflake burned $343M, including $169M in their third quarter alone. Why would Snowflake burn so much money? Because they are growing efficiently! What does efficient growth mean? As we discussed in the last Frank Slootman book - sales and marketing efficiency is a key hallmark for understanding the quality of growth a company is experiencing. According to their filings, Snowflake added ~$230M of revenue and spent $325M in sales and marketing. This is actually not terribly efficient - it supposes a dollar invested in sales and marketing yielded $0.70 of incremental revenue. While you would like this number to be closer to 1x (i.e. $1 in S&M yields $1 in revenue - hence a repeatable go-to-market motion), it is not terrible. ServiceNow (Slootman’s old company) actually operates less efficiently - for every dollar it invests in sales and marketing, it generates only $0.55 of subscription revenue. Crowdstrike, on the other hand, operates a partner-driven go-to-market, which enables it to generate more while spending less - $0.90 for every dollar invested in sales and marketing over the last nine months. However, there is a key thing that distinguishes the data warehouse from these other businesses, and Ben Thompson at Stratechery nails it here: “Think about this in the context of Snowflake’s business: the entire concept of a data warehouse is that it contains nearly all of a company’s data, which (1) it has to be sold to the highest levels of the company, because you will only get the full benefit if everyone in the company is contributing their data and (2) once the data is in the data warehouse it will be exceptionally difficult and expensive to move it somewhere else. Both of these suggest that Snowflake should spend more on sales and marketing, not less. Selling to the executive suite is inherently more expensive than a bottoms-up approach. 
Data warehouses have inherently large lifetime values given the fact that the data, once imported, isn’t going anywhere.” I hope Snowflake burns more money in the future, and builds a sustainable long-term business.
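The efficiency arithmetic above is worth making explicit. A quick sketch using the approximate nine-month figures quoted above (in $M):

```python
def sales_efficiency(new_revenue, sm_spend):
    """Incremental revenue generated per dollar of sales & marketing spend."""
    return new_revenue / sm_spend

# Snowflake, nine months ended October 31, 2020: ~$230M of added revenue
# against $325M of sales and marketing spend
print(round(sales_efficiency(230, 325), 2))  # 0.71 - about $0.70 of revenue per S&M dollar
```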

Dig Deeper

  • Early Youtube Videos Describing Snowflake’s Architecture and Re-inventing the Data Warehouse

  • NCR’s spinoff of Teradata in 2007

  • Fraser Harris of Fivetran and Tristan Handy of dbt speak at the Modern Data Stack Conference

  • Don Valentine, Sequoia Capital: "Target Big Markets" - A discussion at Stanford

  • The Mike Speiser Incubation Playbook (an essay by Kevin Kwok)

tags: Snowflake, Data Warehouse, Oracle, Vertica, Netezza, IBM, Databricks, Apache Spark, Open Source, Fivetran, Matillion, dbt, Data Lake, Sequoia, ServiceNow, Crowdstrike, Cloudera, Hortonworks, MapR, BigQuery, Frank Slootman, Teradata, Xerox, Informix, NCR, AT&T, Benoit Dageville, Mike Speiser, Sutter Hill Ventures, Redshift, Amazon, ETL, Hadoop, SQL
categories: Non-Fiction
 

November 2020 - Tape Sucks: Inside Data Domain, A Silicon Valley Growth Story by Frank Slootman

This month we read a short, under-discussed book by current Snowflake and former ServiceNow and Data Domain CEO, Frank Slootman. The book is just like Frank - direct and unafraid. Frank has had success several times in the startup world, and the story of Data Domain provides a great case study of entrepreneurship. Data Domain was a data deduplication company, offering a 20:1 reduction of data backed up to tape cassettes by using new disk drive technology.

Tech Themes

Data Domain’s 2008 10-K prior to being acquired


  1. First time CEO at a Company with No Revenue. Frank is an immigrant to the US, coming from the Netherlands shortly after graduating from the University of Rotterdam. After being rejected by IBM 10+ times, he joined Burroughs Corporation, an early mainframe provider that subsequently merged with its direct competitor Sperry for $4.8B in 1986. Frank then spent some time at Compuware and moved back to the Netherlands to help it integrate the acquisition of Uniface, an early customizable report-building software. After spending time there, he went to Borland Software in 1997, working his way up the product management ranks but all the while being angered by time spent lobbying internally rather than building. Frank joined Data Domain in the Spring of 2003 - when it had no customers, no revenue, and was burning cash. The initial team and VCs were impressive - Kai Li, a computer science professor on sabbatical from Princeton, Ben Zhu, an EIR at USVP, and Brian Biles, a product leader with experience at VA Linux and Sun Microsystems. The company was financed by top-tier VCs New Enterprise Associates and Greylock Partners, with Aneel Bhusri (founder and current CEO of Workday) serving as initial CEO and then board chairman. This was a stacked team and Slootman knew it: “I’d bring down the average IQ of the company by joining, which felt right to me.” The Company had been around for 18 months and had already burned through a significant amount of money when Frank joined. He knew he needed to raise money relatively soon after joining and put the Company’s chances bluntly: “Would this idea really come together and captivate customers? Nobody knew. We, the people on the ground floor, were perhaps, the most surprised by the extraordinary success we enjoyed.”

  2. Playing to his Strengths: Capital Efficiency. One of the big takeaways from The Innovators by Walter Isaacson was that individuals or teams at the nexus of disciplines - primarily where the sciences meet the humanities - often achieve breakthrough success. The classic case study for this is Apple - Steve Jobs had an intense love of art, music, and design, and Steve Wozniak was an amazing technologist. Frank has cultivated a cross-discipline strength at the intersection of sales and technology. This might be driven by Slootman’s background in economics. The book has several references to economic terms, which clearly have had an impact on Frank’s thinking. Data Domain espoused capital efficiency: “We traveled alone, made few many-legged sales calls, and booked cheap flights and hotels: everybody tried to save a dime for the company.” The results showed - the business went from $800K of revenue in 2004 to $275 million by 2008, generating $75M in cash flow from operations. Frank’s capital efficiency was interesting and broke from traditional thinking - most people think to raise a round and build something. Frank took a different approach: “When you are not yet generating revenue, conservation of resource is the dominant theme.” Over time, “when your sales activity is solidly paying for itself,” the spending should shift from conservative to aggressive (as Snowflake is doing now). The concept of sales efficiency is somewhat talked about but, given the recent fundraising environment, often dismissed. Sales efficiency can be thought of as: “How much revenue do I generate for every $1 spent in sales and marketing?” Looking at the P&L, we see Data Domain was highly efficient in its sales and marketing activity - the company increased revenue by $150M in 2008 while spending $115M in sales and marketing (a ratio of 1.3x). Contrast this with a company like Slack, which spent $403M to acquire $230M of new revenue (a ratio of 0.6x). 
It gets harder to acquire customers at scale, so this efficiency is supposed to come down over time but best in class is hopefully above 1x. Frank clearly understands when to step on the gas with investing, as both ServiceNow and Snowflake have remained fairly efficient (from a sales perspective at least) while growing to a significant scale.
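The ratios above are just new revenue divided by sales and marketing spend; a quick check with the figures quoted (in $M):

```python
def sm_ratio(new_revenue, sm_spend):
    """New revenue per dollar of sales & marketing spend."""
    return new_revenue / sm_spend

print(round(sm_ratio(150, 115), 1))  # 1.3 - Data Domain's 2008 revenue growth vs S&M spend
print(round(sm_ratio(230, 403), 1))  # 0.6 - Slack's ratio, for comparison
```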

  3. Technology for Technology’s Sake. “Many technologies are conceived without a clear, precise notion of the intended use.” Slootman hits on a key point, and one that the tech industry has struggled to grasp throughout its history. So many products and companies are established around budding technology with no use case. We’ve discussed Magic Leap’s fundraising money-pit (it still might find its way), and Iridium Communications, whose early satellite phones were so bulky and expensive that few customers could actually use them. Gartner, the leading IT research publication (which is heavily influenced by marketing spend from companies), established the Technology Hype Cycle, complete with the “Peak of Inflated Expectations” and the “Trough of Disillusionment,” for categorizing technologies that fail to live up to their promise. There have been several waves that have come and gone: AR/VR, Blockchain, and most recently, Serverless. It’s not so much that these technologies were wrong or not useful; it’s rather that they were initially described as a panacea for several or all known technology hindrances, and few technologies ever live up to that hype. It’s common that new innovations spur tons of development but also lots of failure, and this is Slootman’s caution to entrepreneurs. Data Domain was attacking a problem that already existed (tape storage), and the company provided what Clayton Christensen would call a sustaining innovation (something Slootman points out). Whenever things go into a “winter state,” like the internet after the dot-com bubble or the recent Crypto Winter, which is thawing as I write, it is time to pay attention and understand the relevance of the innovation.

Business Themes

  1. Importance of Owning Sales. Slootman spends a considerable amount of this small book discussing sales tactics and decision making, particularly with respect to direct sales and OEM relationships. OEM deals are partnerships whereby one company re-sells the software, hardware, or service of another company. Crowdstrike is a popular product with many OEM relationships; the company drives a significant amount of its sales through partners, who re-sell on its behalf. OEM partnerships with big companies present many challenges: “First of all, you get divorced from your customer because the OEM is now between you and them, making customer intimacy challenging. Plus, as the OEM becomes a large part of your business, for all intents and purposes they basically own you without paying for the privilege…Never forget that nobody wants to sell your product more than you do.” The challenges don’t end there. Slootman points out that EMC discarded their previous OEM vendor in the data deduplication space right after acquiring Data Domain. On top of that, the typical reseller relationship happens at a 10-20% margin, degrading gross margins and hurting the ability to invest. It is somewhat similar to the challenges open-source companies like MongoDB and Elastic have run into, with their core software being…free. Amazon can just OEM their offering and cut them out as a partner, something it does frequently. Partner models can be sustainable, but the give and take from the big company is a tough balance to strike. Investors like organic adoption, especially recently with the rise of freemium SaaS models percolating in startups. Slootman’s point is that at some point in enterprise-focused businesses, the Company must own direct sales (and relationships) with its customers to drive real efficiency. 
After low-cost-to-acquire freemium adopters buy the product, the executive team must pivot to traditional top-down enterprise sales to drive a successful and enduring relationship with the customer.

  2. In the Thick of Things. Slootman has some very concise advice for CEOs: be a fighter, show some humanity, and check your ego at the door. “Running a startup reduces you to your most elementary instincts, and survival is on your mind most of the time…The CEO is the ‘Chief Combatant,’ warrior number one.” Slootman views the role of CEO as a fighter, ready to be the first to jump into the action at all times. And this can be incredibly productive for the business as well. Tony Xu, the founder and CEO of DoorDash, takes time out every month to do deliveries for his own company in order to remain close to the customer and the problems of the company. Jeff Bezos famously still reads and responds to emails from customers at jeff@amazon.com. Being CEO also requires a willingness to put yourself out there and show your true personality. As Slootman puts it: “People can instantly finger a phony. Let them know who you really are, warts and all.” As CEO you are tasked with managing so many people and being involved in so many aspects of the business that it is easy to become rigid and unemotional in everyday interactions. Harvard Business School professor and former Uber executive Frances Frei distills it down to a simple phrase: “Begin With Trust.” All CEOs have some amount of ego, driving them to want to be at the top of their organization. Slootman encourages CEOs to be introspective and try to recognize blind spots, so ego doesn’t drive day-to-day interactions with employees. One way to do that is simple: use the pronoun “we” when discussing the company you are leading. Though Slootman doesn’t explicitly call it out, all of these suggestions (fighting, showing empathy, getting rid of ego) are meant to build trust with employees.

  3. R-E-C-I-P-E for a Great Culture. The last fifth of the book is all focused on building culture at companies. It is the only topic Slootman stays on for more than a few chapters, so you know it’s important! RECIPE was an acronym created by the employees at Data Domain to describe the company’s values: Respect, Excellence, Customer, Integrity, Performance, Execution. It’s interesting how simple and focused these values are. Technology has pushed its cultural delusions of grandeur to an extreme in recent years. The WeWork S-1 hilariously started with: “We are a community company committed to maximum global impact. Our mission is to elevate the world’s consciousness.” But none of Data Domain’s values were about changing the world to be a better place - they were about doing excellent, honest work for customers. Slootman is laser-focused on culture, and specifically views culture as an asset - calling it: “The only enduring, sustainable form of differentiation. These days, we don’t have a monopoly for very long on talent, technology, capital, or any other asset; the one thing that is unique to us is how we choose to come together as a group of people, day in and day out. How many organizations are there that make more than a halfhearted attempt at this?” Technology companies have taken different routes in establishing culture: Google and Facebook have tried to create culture by showering employees with unbelievable benefits, Netflix has focused on pure execution and transparency, and Microsoft has revamped its culture by adopting a Growth Mindset (has it really, though?). Google originally promoted “Don’t be evil” as part of its Code of Conduct but dropped the motto in 2018. 
Employees want to work for mission-driven organizations, but not all companies are really changing the world with their products, and Frank did not try to sugarcoat Data Domain’s data-deduplication technology as a way to “elevate the world’s consciousness.” He created a culture driven by performance and execution - providing a useful product to businesses that needed it. The culture was so revered that post-acquisition, EMC instituted Data Domain’s performance management system. Data Domain employees were looked at strangely by longtime EMC executives, who had spent years in a big and stale company. Culture is a hard thing to replicate and a hard thing to change, as we saw with the Innovator’s Dilemma. Might as well use it to help the company succeed!

Dig Deeper

  • How Data Domain Evolved in the Cloud World

  • Former Data Domain CEO Frank Slootman Gets His Old Band Back Together at ServiceNow

  • The Contentious Take-over Battle for Data Domain: Netapp vs. EMC

  • 2009 Interview with Frank Slootman After the Acquisition of Data Domain

tags: Snowflake, DoorDash, ServiceNow, WeWork, Data Domain, EMC, Netapp, Frank Slootman, Borland, IBM, Burroughs, Sperry, NEA, Greylock, Workday, Aneel Bhusri, Sun Microsystems, USVP, Uber, Netflix, Facebook, Google, Microsoft, Amazon, Jeff Bezos, Tony Xu, MongoDB, Elastic, Crowdstrike, Crypto, Gartner, Hype Cycle, Slack, Apple, Steve Jobs, Steve Wozniak, Magic Leap, batch2
categories: Non-Fiction
 

July 2020 - Innovator's Dilemma by Clayton Christensen

This month we review the technology classic, the Innovator’s Dilemma, by Clayton Christensen. The book attempts to answer the age-old question: why do dominant companies eventually fail?

Tech Themes

  1. The Actual Definition of Disruptive Technology. Disruption is a term that is frequently thrown around in Silicon Valley circles. Every startup thinks its technology is disruptive, meaning it changes how the customer currently performs a task or service. The actual definition, discussed in detail throughout the book, is relatively specific. Christensen re-emphasizes this distinction in a 2015 Harvard Business Review article: "Specifically, as incumbents focus on improving their products and services for their most demanding (and usually most profitable) customers, they exceed the needs of some segments and ignore the needs of others. Entrants that prove disruptive begin by successfully targeting those overlooked segments, gaining a foothold by delivering more-suitable functionality—frequently at a lower price. Incumbents, chasing higher profitability in more-demanding segments, tend not to respond vigorously. Entrants then move upmarket, delivering the performance that incumbents' mainstream customers require, while preserving the advantages that drove their early success. When mainstream customers start adopting the entrants' offerings in volume, disruption has occurred." The book posits that there are generally two types of innovation: sustaining and disruptive. While disruptive innovation focuses on low-end or new, small market entry, sustaining innovation merely continues markets along their already determined axes. For example, in the book, Christensen discusses the disk drive industry, mapping out the jumps which pack more memory and power into each subsequent product release. For each disruptive jump, there is a slew of sustaining jumps that improve product performance for existing customers but don't necessarily turn non-customers into customers. It is only when new use cases emerge, like rugged portable disk usage and the arrival of PCs, that disruption occurs. 
Understanding the specific definition can help companies and individuals better navigate muddled tech messaging; Uber, for example, is shown to be a sustaining technology because its market already existed, and the company didn't offer lower prices or a new business model. Grasping these intricacies can help incumbents spot truly disruptive competitors.

  2. Value Networks. Value networks are an underappreciated and somewhat confusing topic covered in The Innovator's Dilemma's early chapters. A value network is defined as "The context within which a firm identifies and responds to customers' needs, solves problems, procures input, reacts to competitors, and strives for profit." A value network seems all-encompassing on the surface. In reality, a value network serves to simplify the lens through which an organization must make complex decisions every day. Shown as a nested product architecture, a value network attempts to show where a company interacts with other products. By distilling the product down to its most atomic components (literally computer hardware), we can see all of the considerations that impact a business. Once we have this holistic view, we can consider the decisions and tradeoffs that face an organization every day. The takeaway here is that organizations care about different levels of performance for different products. For example, when looking at cloud computing services at AWS, Azure, or GCP, we see Amazon EC2 instances, Azure VMs, and Google Cloud VMs with different operating systems, different purposes (general, compute, memory), and different sizes. General-purpose might be fine for basic enterprise applications, while gaming applications might need compute-optimized, and real-time big data analytics may need a memory-optimized VM. While it gets somewhat forgotten throughout the book, this point means that an organization focused on producing only compute-optimized machines may not be best positioned to produce memory-optimized ones, because its customers may have no use for them. In the book's example, the customers of larger-capacity drive makers looked at smaller drives and saw no need for them. In reality, there was massive demand for smaller drives in the rugged, portable market. 
When approaching disruptive innovation, it's essential to recognize your organization's current value network so that you don't target new technologies at those who don't need it.
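The point that different customers in a value network prize different performance attributes can be made concrete with the cloud VM example above. Here is a minimal sketch: the instance family names (m5, c5, r5) are AWS's general-purpose, compute-optimized, and memory-optimized lines, but the workload-to-family mapping itself is an illustrative assumption, not a sizing guide.

```python
# Sketch: in a value network, the "best" product depends on which
# performance attribute the buyer actually values. The mapping below
# is an illustrative simplification using AWS instance family names.
WORKLOAD_TO_FAMILY = {
    "basic_enterprise_app": "m5",   # general purpose is good enough
    "game_server": "c5",            # compute-bound workload
    "realtime_analytics": "r5",     # memory-bound workload
}

def recommend_instance_family(workload: str) -> str:
    """Return the instance family a given customer segment values most."""
    try:
        return WORKLOAD_TO_FAMILY[workload]
    except KeyError:
        raise ValueError(f"unknown workload: {workload}")
```

A vendor selling only c5-style machines serves the game-server segment well while looking irrelevant to the analytics segment, which is exactly the blind spot the value network concept describes.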

  3. Product Commoditization. Christensen spends a lot of time describing the dynamics of the disk drive industry, where companies continually supplied increasingly smaller drives with better performance. Christensen's description of commoditization is very interesting: "A product becomes a commodity within a specific market segment when the repeated changes in the basis of competition, completely play themselves out, that is, when market needs on each attribute or dimension of performance have been fully satisfied by more than one available product." At this point, products begin competing primarily on price. In the disk drive industry, companies first competed on capacity, then on size, then on reliability, and finally on price. This price war is reminiscent of the current state of the Continuous Integration / Continuous Deployment (CI/CD) market, a subsegment of DevOps software. Companies in the space, including GitHub, CircleCI, GitLab, and others, are now competing primarily on price to win new business. Each of the cloud providers has similar technologies native to their public cloud offerings (AWS CodePipeline and CloudFormation, GitHub Actions, Google Cloud Build). They can give these away for free because of their scale. The building block of CI/CD software is git, an open-source version control system created by Linux creator Linus Torvalds. With all the providers leveraging a massive open-source project, there is little room for true differentiation. Christensen even says: "It may, in fact, be the case that the product offerings of competitors in a market continue to be differentiated from each other. But differentiation loses its meaning when the features and functionality have exceeded what the market demands." Only time will tell whether these companies can pivot into burgeoning highly differentiated technologies.

Business Themes

Innovator Dilemma.png
R1512B_BIG_MODEL-1200x1035.png
  1. Resources-Processes-Value (RPV) Framework. The RPV framework is a powerful lens for understanding the challenges that large businesses face. Companies have resources (people, assets, technology, product designs, brands, information, cash, relationships with customers, etc.) that can be transformed into greater value products and services. The way organizations go about converting these resources is the organization's processes. These processes can be formal (documented sales strategies, for example) or informal (culture and habitual routines). Processes are the big reason organizations struggle to deal with emerging technologies. Because culture and habit are ingrained in the organization, the same process used to launch into a mature, slow-growing market may be applied to a fast-growing, dynamic sector. Christensen puts it best: "This means the very mechanisms through which organizations create value are intrinsically inimical to change." Lastly, companies have values, or "the standards by which employees make prioritization decisions." When there is a mismatch between the resources, processes, and values of an organization and the product or market that an organization is chasing, it's rare that the business can compete successfully in the disruptive market. To see this misalignment in action, Christensen describes a meeting with a CEO who had identified the disruptive change happening in the disk-drive market and had gotten a product to market to meet the growing market. In response to a publication showing the fast growth of the market, the CEO lamented to Christensen: "I know that's what they think, but they're wrong. There isn't a market. We've had that drive in our catalog for 18 months. Everyone knows we've got it, but nobody wants it." The issue was not the product or market demand, but the organization's values. 
As Christensen continues, "But among the employees, there was nothing about an $80 million, low-end market that solved the growth and profit problems of a multi-billion dollar company – especially when capable competitors were doing all they could to steal away the customers providing those billions. And way at the other end of the company there was nothing about supplying prototype quantities of 1.8-inch drives to an automaker that solved the problem of meeting the 1994 quotas of salespeople whose contacts and expertise were based so solidly in the computer industry." The CEO cared about the product, but his team did not. The RPV framework helps evaluate large companies and the challenges they face in launching new products.

  2. How to Manage Through Technological Change. Christensen points out three primary ways of managing through disruptive technology change: 1. "Acquire a different organization whose processes and values are a close match with the new task." 2. "Try to change the processes and values of the current organization." 3. "Separate out an independent organization and develop within it the new processes and values that are required to solve the new problem." Acquisitions are a way to get out ahead of disruptive change. There are many examples, but two recent ones come to mind: Microsoft's acquisition of GitHub and Facebook's acquisition of Instagram. Microsoft paid a whopping $7.5B for GitHub in 2018, when GitHub was rumored to be at roughly $200M in revenue (a 37.5x revenue multiple!). GitHub was undoubtedly a mature business with a great product, but it didn't have a ton of enterprise adoption. Diane Greene at Google Cloud tried to get Sundar Pichai to pay more, but he said no. GitHub has changed Azure's position within the market and continued its anti-Amazon strategy of pushing open-source technology. In contrast to the GitHub acquisition, Instagram had only 13 employees when it was acquired for $1B. Zuckerberg saw the threat the social network represented to Facebook, and today the acquisition is regularly touted as one of the best ever. Instagram was developing a social network solely based on photographs, right at the time every person suddenly had an excellent smartphone camera in their pocket. The acquisition occurred right when the market was ballooning, and Facebook capitalized on that growth. The second way of managing technological change is through changing cultural norms. This is rarely successful, because you are fighting against all of the processes and values deeply embedded in the organization. 
Indra Nooyi cited a desire to move faster on culture as one of her biggest regrets as a young executive: "I’d say I was a little too respectful of the heritage and culture [of PepsiCo]. You’ve got to make a break with the past. I was more patient than I should’ve been. When you know you have to make a change, at some point you have to say enough is enough. The people who have been in the company for 20-30 years pull you down. If I had to do it all over again, I might have hastened the pace of change even more." Lastly, Christensen prescribes creating an independent organization matched to the resources, processes, and values that the new market requires. Three great spin-out, spin-in examples with different flavors of this come to mind. First, Cisco developed a spin-in practice whereby it would take members of its organization and start a new company that it would fund to develop a new product. The spin-ins worked for a time but caused major cultural issues. Second, as we've discussed, one of the key reasons AWS was born was that Chris Pinkham was in South Africa, thousands of miles away from Amazon Corporate in Seattle; this distance and that team's focus allowed it to come up with a major advance in computing. Lastly, Mastercard started Mastercard Labs a few years ago. CEO Ajay Banga told his team: "I need two commercial products in three years." He doesn't tell his CFO their budget, and he is the only person from his executive team that interacts with the business. This separation of resources, processes, and values allows those smaller organizations to be more nimble in finding emerging technology products and markets.

  3. Discovering Emerging Markets.

    The resources-processes-values framework can also show us why established firms fail to address emerging markets. Established companies rely on formal budgeting and forecasting processes whereby resources are allocated based on market estimates and revenue forecasts. Christensen highlights several important factors for tackling emerging markets, including focusing on ideas, failure, and learning. Underpinning all of these ideas is the impossibility of predicting the scale and growth rate of disruptive technologies: "Experts' forecasts will always be wrong. It is simply impossible to predict with any useful degree of precision how disruptive products will be used or how large their markets will be." Because of this challenge, relying too heavily on these estimates to underpin financial projections can cause businesses to view initial market development as a failure or not worthy of the company's time. When HP launched a new 1.3-inch disk drive, which could be embedded in PDAs, the company mandated that its revenues had to scale up to $150M within three years, in line with market estimates. That market never materialized, and the initiative was abandoned as a failed investment. Christensen argues that because disruptive technologies are threats, planning has to come after action, and thus strategic and financial planning must be discovery-based rather than execution-based. Companies should focus on learning their customers' needs and the right business model to attack the problem, rather than plan to execute their initial vision. As he puts it: "Research has shown, in fact, that the vast majority of successful new business ventures abandoned their original business strategies when they began implementing their initial plans and learned what would and would not work." One big fan of Christensen's work is Jeff Bezos, and it's easy to see why, given Amazon's focus on releasing new products in this discovery-driven manner. 
The pace of product releases is simply staggering (almost one per day). Bezos even talked about this exact issue in his 2016 shareholder letter: "The senior team at Amazon is determined to keep our decision-making velocity high. Speed matters in business – plus a high-velocity decision making environment is more fun too. We don't know all the answers, but here are some thoughts. First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you're wrong? I wrote about this in more detail in last year's letter. Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you're probably being slow." Amazon is one of the first large organizations to truly embrace this decision-making style, and clearly, the results speak for themselves.

Dig Deeper

  • What Jeff Bezos Tells His Executives To Read

  • Github Cuts Subscription Price by More Than Half

  • Ajay Banga Opening Address at MasterCard Innovation Forum 2014

  • Clayton Christensen Describing Disruptive Innovation

  • Why Cisco’s Spin-Ins Never Caught On

tags: Amazon, Google Cloud, Microsoft, Azure, Github, Gitlab, CircleCI, Pepsi, Jeff Bezos, Indra Nooyi, Mastercard, Ajay Banga, HP, Uber, RPV, Facebook, Instagram, Cisco, batch2
categories: Non-Fiction
 

April 2020 - Good To Great by Jim Collins

Collins’ book attempts to answer the question - Why do good companies continue to be good companies? His analysis across several different industries provides meaningful insights into strong management and strategic practices.

Tech Themes

  1. Packard’s Law. We’ve discussed Packard’s law before when analyzing the troubling acquisition history of AOL-Time Warner and Yahoo. As a reminder, Packard’s law states: “No company can consistently grow revenues faster than its ability to get enough of the right people to implement that growth and still become a great company. [And] If a company consistently grows revenue faster than its ability to get enough of the right people to implement that growth, it will not simply stagnate; it will fall.” Given that Good To Great is a management-focused book, I wanted to explore an example of this law manifesting itself in a recent management dilemma. Look no further than ride-sharing giant Uber. Uber’s culture and management problems have been highly publicized. Susan Fowler’s famous blog post kicked off a series of blows that would ultimately lead to a board dispute, the departure of its CEO, and a full-on criminal investigation. Uber’s problems as a company, however, can be traced to its insistence on being the only ride-sharing service throughout the world. Uber launched several ventures that proved incredibly unprofitable: not only a price war with its local competitor Lyft, but also concerted efforts to enter China, India, and other markets. Uber tried to be all things transportation to every location in the world, an over-indulgence that led to the Company raising a casual $20B prior to going public. Dara Khosrowshahi, Uber’s replacement for Travis Kalanick, has concertedly sold off several business lines and shuttered other unprofitable ventures to regain financial control of this formerly money-burning “logistics” pit. This unwinding has clearly benefited the business, but it has also limited growth, prompting the stock to drop significantly from its IPO price. Dara is no stranger to facing travel challenges; he architected the spin-out of Expedia with Barry Diller, right before 9/11. 
Only time will tell if he can refocus the Company as it looks to run profitably. Uber pushed too far into unprofitable markets and ran head-on into Packard’s law, and is now paying the price for its brash global expansion.

  2. Technology Accelerators. In Collins’ Good to Great framework (pictured below), technology accelerators act as a catalyst to momentum built up from disciplined people and disciplined thought. By adopting a “Pause, think, crawl, walk, run” approach to technology, meaning a slow and thoughtful transition to new technologies, companies can establish best practices for the long term, instead of chasing short-term gains from technology faux-feature marketing. Faux-feature marketing, whereby a company adopts a marketing position completely separate from its actual technological sophistication, has become increasingly popular in the past few years. Look no further than the blockchain/crypto faux-feature marketing around 2018, when Long Island Iced Tea Corp. changed its name to Long Blockchain Corp., reminiscent of companies adding “.com” to their names in the late 1990s. Collins makes several important distinctions about technology accelerators: technology should only be a focus if it fits into a company’s hedgehog concept, technology accelerators cannot make up for poor people choices, and technology is never a primary root cause of either greatness or decline. The first two axioms make sense; just think of how many failed, custom software projects have begun and never finished - there is literally an entire Wikipedia page dedicated to exactly that. The government has also reportedly been a famous dabbler in homegrown, highly customized technology. As Collins notes, technology accelerators cannot make up for bad people choices, an aspect of venture capital that is overlooked by so many. Enron is a great example of an interesting idea turned sour by terrible leadership. Beyond the accounting scandals that are discussed frequently, the culture was utterly toxic, with employees subjected to a “Performance Review Committee” whereby they were rated on a scale of 1-5 by their peers. 
Employees rated a 5 were fired, which meant roughly 15% of the workforce turned over every year. The New York Times noted that Enron was still viewed as a trailblazer for the way it combined technology and energy services, but it clearly suffered from terrible leadership that even great technology couldn’t surmount. Collins’ most controversial point is arguably that technology cannot cause greatness or decline. Some would argue that technology is the primary cause of greatness for companies like Amazon, Apple, Google, and Microsoft. The “it was just a better search engine” argument abounds in discussions of early internet search engines. I think what Collins is getting at is that technology is malleable and can be built several different ways. Zoom and Cloudflare are great examples of this. As we’ve discussed, Zoom started over 100 years after the idea for video calling was first conceived, and several years after Cisco had purchased Webex, which begs the question: was technology the cause of greatness for Zoom? No! Zoom’s ultimate success came from the elegance of its simple video chat, something which had been locked up in corporate feature complexity for years. Cloudflare presents another great example. CDN businesses had existed for years when Cloudflare launched, and Cloudflare famously embedded security within the CDN, building on a trend which Akamai tried to address via M&A. Was technology the cause of greatness for Cloudflare? No! It’s way cheaper and easier to use than Akamai. Its cost structure enabled it to compete for customers that would be unprofitable for Akamai, a classic example of low-end disruption from Clayton Christensen’s Innovator’s Dilemma. This is not to say these are not technologically sophisticated companies; Zoom’s cloud ops team has kept an amazing service running 24/7 despite a massive increase in users, and Cloudflare’s Workers technology is probably the best bet to disrupt the traditional cloud providers today. 
But to place technology as the sole cause of greatness would be understating these companies’ achievements in several other areas.

  3. Build up, Breakthrough Flywheel. Jeff Bezos loves this book. It’s listed in the continued reading section of a prior TBOTM, The Everything Store. The build up, breakthrough flywheel is the culmination of disciplined people, disciplined thought, and disciplined action. Collins points out that several great companies frequently appear like overnight successes; all of a sudden, the company has created something great. But that’s rarely the case. Amazon is a great example of this; it had several detractors in the early days and was dismissed as simply an online bookseller. Little did the world know that Jeff Bezos had ideas to pursue every product line, slowly launching one after the other in a concerted fashion. In addition, what is a better technology accelerator than AWS! AWS resulted from an internal problem of scaling compute fast enough to meet growing consumer demand for Amazon’s online products. The company’s tech helped it scale so well that they thought, “Hey! Other companies would probably like this!” Apple is another classic example of a build up, breakthrough flywheel. The company had a massive success with the iPod - it was 40% of revenues in 2007. But what did it do? It cannibalized itself and pursued the iPhone, with several different teams within the company pursuing it individually. Not only that, its first attempt at a phone, the Motorola Rokr, was terrible, teaching the company that design was massively important to the phone’s success. The phone’s technology is taken for granted today, but at the time the touch screen was simply magical!

Business Themes

goodtogreatflywheel.png
Ipod_sales.jpeg
Hedgehog-Concept_v2.jpg
Slides-Character-And-Concrete-Actions-Shape-A-Culture.005.png
  1. Level 5 Leader. The first and probably most important part of the build up, breakthrough flywheel is disciplined people. One idea from Good to Great that also runs through Collins’ other book Built to Last is that leadership, people, and culture determine the long-term future of a business, even after current leadership has moved on. To set an organization up for long-term success, executives need to display level five leadership, which is a mix of personal humility and professional will. Collins leans on Lee Iacocca as an example of a poor leader, who focused more on personal celebrity and left Chrysler poorly positioned when he departed. Level 5 leadership requires something that you don’t frequently see in technology business leaders: humility. The technology industry seems littered with far more Larry Ellisons and Elon Musks than any other industry - or maybe it’s just that tech CEOs tend to shout the loudest from their pedestals. One CEO that has done a great job of representing level five leadership is Shantanu Narayen, who took the reins of Adobe in December 2007, right on the cusp of the financial crisis. Narayen, who’s been described as more of a doer than a talker, has dramatically changed Adobe’s revenue model, moving the business from a single-sale license software business focused on lower ACV deals to an enterprise-focused SaaS business. This march has been slow and pragmatic, but the business has done incredibly well, 10xing since he took over. Adobe CFO Mark Garrett summarized it best in a 2015 McKinsey interview: “We instituted open dialogue with employees—here’s what we’re going through, here’s what it might look like—and we encouraged debate. Not everyone stayed, but those who did were committed to the cloud model.”

  2. Hedgehog Concept. The Hedgehog Concept (in the picture wheel to the right) is the overlap of three questions: What are you passionate about? What can you be the best in the world at? What drives your economic engine? This overlap is the conclusion of Collins’ exhortation to Confront the Brutal Facts, something that Ben Horowitz emphasizes in March’s TBOTM. Once teams have dug into their business, they should come up with a simple way to center their focus. When companies reach outside their hedgehog concept, they get hurt. The first question, about organizational passion, manifests itself in mission and value statements. The best-in-the-world question manifests itself through value network exercises, SWOT analyses, and competitive analyses. The economic engine is typically shown as a single metric that defines success in the organization. As an example, let’s walk through a less well-known SaaS company: Avalara. Avalara is a provider of tax compliance software for SMBs and enterprises, allowing those businesses to outsource complex and changing tax rules to software that integrates with financial management systems to provide an accurate view of corporate taxes. Avalara’s hedgehog concept is right on their website: “We live and breathe tax compliance so you don't have to.” It’s simple and effective. They also list a slightly different version in their 10-K: “Avalara’s motto is ‘Tax compliance done right.’” Avalara is the best at tax compliance software, and that is their passion; they “live and breathe” tax compliance software. What drives Avalara’s economic engine? They list two metrics right at the top of their SEC filings: number of core customers and net revenue retention. Core customers are customers who have been billed more than $3,000 in the last twelve months. The growth in core customers allows Avalara to understand its base of revenue. 
Tax compliance software is likely low churn because filing taxes is such an onerous process, and most people don’t have the expertise to handle corporate taxes themselves. Avalara will, however, suffer from some tax seasonality, and some customers may churn and come back after the tax period has ended for a given year. Tracking total billings allows Avalara to account for this possibility. Avalara’s core customers have grown 32% in the last twelve months, meaning its revenue should be following a similar trajectory. Net retention allows the company to understand how customer purchasing behavior changes over time, and at 113% net retention, Avalara’s overall base is buying more software from Avalara than is churning, which is a positive trend for the company. What is the company the best in the world at? Tax compliance software for SMBs. Avalara defines its core customer as one with greater than $3,000 of trailing-twelve-month billings, which means it is targeting small customers. The Company’s integrations also speak to this - Shopify, Magento, NetSuite, and Stripe are all focused on SMB and mid-market customers. Notice that neither SAP nor Oracle ERP is in that list of integrations; those are the financial management software providers that target large enterprises. This means Avalara has set up its product and cost structure to ensure long-term profitability in the SMB segment; the enterprise segment is on the horizon, but today they are focused on SMBs.
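Both headline metrics are simple to compute from per-customer billing data. A minimal sketch, assuming trailing-twelve-month billings per customer; the $3,000 core-customer threshold comes from Avalara's filings, but the sample numbers below are made up for illustration:

```python
# Sketch of Avalara's two headline metrics from per-customer
# trailing-twelve-month (TTM) billings. The $3,000 threshold is from
# Avalara's SEC filings; the example data is invented.

CORE_THRESHOLD = 3_000  # USD billed over the trailing twelve months

def core_customers(ttm_billings: dict) -> list:
    """Customers billed more than $3,000 over the trailing twelve months."""
    return [c for c, billed in ttm_billings.items() if billed > CORE_THRESHOLD]

def net_revenue_retention(last_year: dict, this_year: dict) -> float:
    """Revenue this year from last year's customer cohort (net of churn,
    downgrades, and expansion) divided by that cohort's revenue last year."""
    base = sum(last_year.values())
    retained = sum(this_year.get(c, 0.0) for c in last_year)
    return retained / base

last_year = {"a": 10_000, "b": 4_000, "c": 2_500}
this_year = {"a": 13_000, "b": 3_500, "d": 5_000}  # "c" churned, "d" is new

print(core_customers(last_year))                    # ['a', 'b']
print(net_revenue_retention(last_year, this_year))  # 16500 / 16500 = 1.0
```

Note that new customer "d" is excluded from the retention calculation; a figure above 1.0 (like Avalara's 113%) means the existing base expands faster than it churns.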

  3. Culture of Discipline. Collins describes a culture of discipline as the ability of managers to have open, honest, often confrontational conversations. The culture of discipline has to fit within a culture of freedom, allowing individuals to feel responsible for their part of the business. This culture of discipline is one of the first things to break down when a CEO leaves. Collins points to this issue with Lee Iacocca, the former CEO of Chrysler. Lee built an intense culture of corporate favoritism, which completely unraveled after he left the business. This is also the focus of Collins’ other book, Built to Last. Companies don’t die overnight, yet it seems that way when problems begin to abound company-wide. We’ve analyzed HP’s 20-year downfall, and a similar story can be told about IBM. In 1993, IBM named Lou Gerstner CEO of the company. Gerstner was an outsider to technology businesses, having previously led the highly controversial RJR Nabisco after KKR completed its buyout in 1989. He has been credited with enacting wholesale changes to the company’s culture during his tenure. Despite the stock price increasing significantly over Gerstner’s tenure, the business lost significant market share to Microsoft, Apple, and Dell. Gerstner was also the first IBM CEO to make significant income, having personally been paid hundreds of millions over his tenure. Following Gerstner, IBM named insider Sam Palmisano to lead the Company. Sam pushed IBM into several new business lines, acquired 25 software companies, and famously sold off IBM’s PC division, which turned out to be an excellent strategic decision as PC sales and margins declined over the following ten years. Interestingly, Sam’s goal was to “leave [IBM] better than when I got there.” Sam presided over a strong run-up in the stock but, yet again, severely missed the broad strategic shift toward public cloud. In 2012, Ginni Rometty was named the new CEO. 
Ginni had championed IBM’s large purchase of PwC’s technology consulting business, turning IBM into more of a full-service organization than a technology company. Palmisano has an interesting quote from an interview with a Wharton Business School professor where he discusses IBM’s strategy: “The thing I learned about Lou is that other than his phenomenal analytical capability, which is almost unmatched, Lou always had the ability to put the market or the client first. So the analysis always started from the outside in. You could say that goes back to connecting with the marketplace or the customer, but the point of it was to get the company and the analysis focused on outside in, not inside out. I think when you miss these shifts, you’re inside out. If you’re outside in, you don’t miss the shifts. They’re going to hit you. Now acting on them is a different characteristic. But you can’t miss the shift if you’re outside in. If you’re inside out, it’s easy to delude yourself. So he taught me the importance of always taking the view of outside in.” Palmisano’s period of leadership introduced myriad organizational changes, 110+ acquisitions, and a centralization of IBM processes globally. Ginni learned from Sam that acquisitions were key to growth, but IBM was buying into markets it didn’t fully understand, and when Ginni layered on 25 new acquisitions in her first two years, the Company had to shift from an outside-in perspective to an inside-out perspective. The way IBM had historically handled the outside-in perspective - recognizing shifts and getting ahead of them - was through acquisition. But when the acquisitions occurred at such a rapid pace, and in new markets, the organization got bogged down in a process of digestion. Furthermore, the centralization of processes and acquired businesses is the exact opposite of what Clayton Christensen recommends when pursuing disruptive technology. This makes it obvious why IBM was so late to the cloud game.
This was a mainframe and services company that had acquired hundreds of software businesses it didn’t really understand. Instead of building on these software platforms, it wasted years trying to put them all together into a digestible package for its customers. IBM launched its public cloud offering in June 2014, a full seven years after Microsoft, Amazon, and Google launched their services, despite providing the underlying databases and computing power for all of their enterprise customers. Gerstner established the high-pay, glamorous CEO role at IBM, which Palmisano and Ginni stepped into, with corporate jets and generous expense policies. The company favored increasing revenues and profits (as a result of acquisitions) over recognizing and focusing on a strategic market shift, which led to a declining stock price and shrinking mindshare in enterprises. Collins understands the importance of long-term cultural leadership. “Does Palmisano think he could have done anything differently to set IBM up for success once he left? Not really. What has happened since falls to a new coach, a new team, he says.”

Dig Deeper

  • Level 5 Leadership from Darwin Smith at Kimberly Clark

  • From Good to Great … to Below Average by Steven Levitt - Unpacking underperformance at some of the companies Collins studied

  • The Challenges faced by new CEO Arvind Krishna

  • Overview of Cloudflare Workers

  • The Opposite of the Buildup, Breakthrough, Flywheel - the Doom Loop

tags: IBM, Apple, Microsoft, Packard's Law, HP, Uber, Barry Diller, Enron, Zoom, Cloudflare, Innovator's Dilemma, Clayton Christensen, Jeff Bezos, Amazon, Larry Ellison, Adobe, Shantanu Narayen, Avalara, Hedgehog Concept, batch2
categories: Non-Fiction
 

March 2020 - The Hard Thing About Hard Things by Ben Horowitz

Ben Horowitz, GP of the famous investment fund Andreessen Horowitz, addresses the not-so-pleasant aspects of being a founder/CEO during a crisis. This book provides an excellent framework for anyone going through the struggles of scaling a business and dealing with growing pains.

Tech Themes

  1. The importance of Netscape. Now that it’s been relegated to history by the rise of AOL and Internet Explorer, it’s hard to believe that Netscape was ever the best web browser. Co-founded by Marc Andreessen, who had co-created Mosaic, the first widely used graphical web browser (as a college student!), Netscape would go on to achieve amazing success only to blow up in the face of competition and changes to internet infrastructure. Netscape was an incredible technology company, and as Brian McCullough shows in last month’s TBOTM, Netscape was the poster child for the internet bubble. But for all the fanfare around Netscape’s seminal IPO, little is discussed about its massive and longstanding technological contributions. In 1995, early engineer Brendan Eich created JavaScript, which still stands as the dominant front-end language for the web. In the same year, the Company developed Secure Sockets Layer (SSL), the dominant basic internet security protocol (and the reason for HTTPS). On top of those two fundamental technologies, Netscape also developed the internet cookie, in 1994! Netscape is normally discussed as the amazing company that ushered many of the first internet users onto the web, but it’s rarely lauded for its longstanding technological contributions. Ben Horowitz, author of The Hard Thing About Hard Things, was an early employee and head of the server business unit for Netscape when it went public.

  2. Executing a pivot. Famous pivots have become part of startup lore, whether in product (Glitch (video game) —> Slack (chat)), business model (Netflix DVD rental —> streaming), or some combination of both (Snowdevil (selling snowboards online) —> Shopify (ecommerce tech)). The pivot has been hailed as a necessary tool in every entrepreneur’s toolbox. Though many are sensationalized, the pivot Ben Horowitz underwent at LoudCloud / Opsware is an underrated one. LoudCloud was a provider of web hosting services and managed services for enterprises. The Company raised a boatload ($346M) of money prior to going public in March 2001, after the internet bubble had already burst. The Company was losing a lot of money, and Ben knew that the business was on its last legs. After executing a 400-person layoff, he sold the managed services part of the business to EDS, a large IT provider, for $63.5M. LoudCloud had a software tool called Opsware that it used to manage all of the complexities of the web hosting business, scaling infrastructure with demand and managing compliance in data centers. After the sale was executed, the company’s stock fell to $0.35 per share, even trading below cash, which meant the markets viewed the Company as already bankrupt. The sale did something very important for Ben and the Opsware team: it bought them time - the Company had enough cash on hand to execute until Q4 2001, when it had to be cash flow positive. To address these cash issues, Opsware purchased Tangram, Rendition Networks, and Creekpath, which were all software vendors that helped manage data center software. This had two effects - slowing the burn (these were profitable companies) and building a substantial product offering for data center providers. Opsware started making sales, and the stock price began to tick up, piquing the attention of strategic acquirers. Ultimately it came down to BMC Software and HP.
BMC offered $13.25 per share, the Opsware board said $14, BMC countered with $13.50, and HP came in with a $14.25 offer, a 38% premium to the stock price and a total valuation of $1.6B, which the board could not refuse. The Company changed its business model (services —> software), made acquisitions, and successfully exited amidst a terrible environment for tech companies post-internet bubble.
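As a back-of-the-envelope check on the deal figures above (assuming the 38% premium was measured against the unaffected pre-offer share price - a reasonable but unconfirmed assumption):

```python
offer_price = 14.25    # HP's winning per-share offer
premium = 0.38         # stated premium to the pre-deal stock price
deal_value = 1.6e9     # total valuation of the acquisition

pre_deal_price = offer_price / (1 + premium)   # implied trading price before the offer
implied_shares = deal_value / offer_price      # implied share count

print(f"${pre_deal_price:.2f}")               # about $10.33 per share pre-deal
print(f"{implied_shares / 1e6:.0f}M shares")  # roughly 112M shares outstanding
```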

  3. The Demise of the Great HP. Hewlett-Packard was one of the first garage-born Silicon Valley technology companies. The company was founded in Palo Alto by Bill Hewlett and Dave Packard in 1939 as a provider of test and measurement instruments. Over the next 40 years, the company moved into producing some of the best printers, scanners, calculators, logic analyzers, and computers in the world. In the 90s, HP continued to grow its product lines in the computing space and executed a spinout of its manufacturing / non-computing device business in 1999. 1999 marks the tragic beginning of the end for HP. The first massive mistake was the acquisition of Compaq, a flailing competitor in the personal computer market, which had acquired DEC (a declining minicomputer company) a few years earlier. The acquisition was heavily debated, with Walter Hewlett, son of the founder and a board director at the time, engaging in a proxy battle with then-CEO Carly Fiorina. The new HP went on to lose half of its market value and incur heavy, highly publicized job losses. This started a string of terrible acquisitions including EDS, 3Com, Palm Inc., and Autonomy for a combined $28.8B. The Company split into two companies - HP Inc. and HP Enterprise - in 2015, and each had its own spinouts and mergers from there (Micro Focus and DXC Technology). Today, HP Inc. sells computers and printers, and HPE sells storage, networking, and server technology. What can be made of this sad tale? HP suffered from a few things. First, poor long-term direction - in hindsight, its acquisitions look especially terrible: a repeat series of massive bets on technology that was already being phased out by market pressures. Second, HP had horrible corporate governance during the late 90s and 2000s - board in-fighting over acquisitions, repeat CEO firings over cultural issues, chairman-CEOs with no checks, and an inability to see the outright fraud in its Autonomy acquisition.
Lastly, the Company saw acquisitions and divestitures as band-aids - new CEO entrants Carly Fiorina (from AT&T), Mark Hurd (from NCR), Leo Apotheker (from SAP), and Meg Whitman (from eBay) were focused on making an impact at HP, which meant big acquisitions and strategic shifts. Almost none of these panned out, and the repeated shifts in direction took a toll on the organization as the best talent moved elsewhere. It’s sad to see what has happened at a once-great company.

Business Themes

  1. Ill, not sick: going public at the end of the internet bubble. Going public is supposed to be the culmination of a long entrepreneurial journey for early company employees, but according to Ben Horowitz’s experience, going public during the internet bubble pop was terrible. Loudcloud had tried to raise money privately but struggled given the terrible fundraising conditions at the beginning of 2001. It’s not included in the book, but the reason the Company failed to raise money was its obscene valuation and loss. The Company was valued at $1.15B in its prior funding round and could only report $6M in net revenue on a $107M loss. The Company sought to go public at $10 per share ($700M valuation), but after an intense and brutal roadshow that left Horowitz physically sick, it settled for $6.00 per share, a massive write-down from the previous round. The fact that the banks were even able to find investors to take on this significant risk at this point in the business cycle was a marvel. Timing can be crucial in an IPO, as we saw during the internet bubble; internet “businesses” could rise 4-5x on their first trading day because of the massive and silly web land grab in the late 90s. On the flip side, going public when investors don’t want what you’re selling is almost a death sentence. Although they both have critical business and market issues, WeWork and Casper are clear examples of the importance of timing. WeWork and Casper were late arrivals on the unicorn IPO train. Let me be clear - both have huge issues (WeWork - fundamental business model, Casper - competition/differentiation), but I could imagine these types of companies going public during a favorable time period with a relatively strong IPO. Both companies had massive losses, and investors were especially wary of losses after the disappointing IPOs of Lyft and Uber, which were arguably the most famous unicorns to go public at the time.
It’s not to say that WeWork and Casper wouldn’t have had trouble in the public markets, but during the internet bubble these companies could’ve received massive valuations and raised tons of cash instead of seeking bailouts from SoftBank and reticent public market investors.
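The scale of the Loudcloud write-down can be sketched from the figures above (a rough estimate that assumes the share count implied by the $700M target held through final pricing):

```python
target_valuation = 700e6   # $700M sought at the target price
target_price = 10.00       # $10.00 per share originally sought
final_price = 6.00         # where the IPO actually priced

shares = target_valuation / target_price   # ~70M shares implied
ipo_valuation = shares * final_price       # valuation at the final price
prior_round = 1.15e9                       # last private round valuation

print(f"${ipo_valuation / 1e6:.0f}M")      # $420M at the final IPO price
print(f"{1 - ipo_valuation / prior_round:.0%} below the prior round")  # ~63% below
```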

  2. Peacetime / Wartime CEO. The genesis of this book was a 2011 blog post written by Horowitz detailing Peacetime and Wartime CEO behavior. As the book and blog post describe, “Peacetime in business means those times when a company has a large advantage vs. the competition in its core market, and its market is growing. In times of peace, the company can focus on expanding the market and reinforcing the company’s strengths.” On the other hand, to describe Wartime, Horowitz uses the example of a previous TBOTM, Only the Paranoid Survive, by Andy Grove. In the early 1980s, Grove realized his business was under serious threat as competition increased in Intel’s core business, computer memory. Grove shifted the entire organization wholeheartedly into microprocessors and saved the company. Horowitz outlines several opposing behaviors of Peacetime and Wartime CEOs: “Peacetime CEO knows that proper protocol leads to winning. Wartime CEO violates protocol in order to win; Peacetime CEO spends time defining the culture. Wartime CEO lets the war define the culture; Peacetime CEO strives for broad based buy in. Wartime CEO neither indulges consensus-building nor tolerates disagreements.” Horowitz concludes that executives can be both peacetime and wartime CEOs after mastering each of the respective skill sets and knowing when to shift from peacetime to wartime and back. The theory is interesting to consider; at its best, it provides an excellent framework for managing times of stress (like right now with the Coronavirus). At its worst, it encourages poor CEO behavior and cutthroat culture. While I do think it’s a helpful theory, it’s worth testing it against situations that may be exceptions. For example, let’s consider Google, as Horowitz does in his original article.
He calls out that Google was likely entering a period of wartime in 2011 and, as a result, transitioned CEOs from peacetime CEO Eric Schmidt to Google founder and wartime CEO Larry Page. Looking back, however, was it really clear that Google was entering wartime? The business continued to focus on what it was clearly best at, online search advertising, and rarely faced any competition. The Company was late to invest in cloud technology, and many have criticized Google for pushing billions of dollars into incredibly unprofitable ventures because they are Larry and Sergey’s pet projects. In addition, it’s clear that control had been an issue for Larry all along - in 2011, it came out that Eric Schmidt’s ouster as CEO was due to a disagreement with Larry and Sergey over continuing to operate in China. On top of that, it’s argued that Larry and Sergey, who have controlling votes in Google, stayed on too long and hindered Sundar Pichai’s ability to effectively operate the now-restructured Alphabet holding company. In short, was Google in wartime from 2011-2019? I would argue no; it operated in its core market with virtually no competition, and today most of Google’s revenues come from its ad products. I think the peacetime / wartime designation is rarely so black and white, which is why it is so hard to recognize what period a Company may be in today.

  3. Firing people. The unfortunate reality of business is that not every hire works out and that eventually people will be fired. The Hard Thing About Hard Things is all about making difficult decisions. It lays out a framework for thinking about and executing layoffs, which is something that’s rarely discussed in the startup ecosystem until it happens. Companies mess up layoffs all the time - just look at Bird, which recently laid off staff via an impersonal Zoom call. Horowitz lays out a roughly six-step process for enacting layoffs and gives the hard truths about executing the 400-person layoff at LoudCloud. Two of these steps stand out because they are frequently violated at startups: Don’t Delay and Train Your Managers. Oftentimes, the decision to fire someone can be a months-long process, continually drawn out and interrupted by different excuses. Horowitz encourages CEOs to move thoughtfully and quickly, to stem leaks of potential layoffs and to not let poor performers continue to hurt the organization. The book discusses the Law of Crappy People - any level of any organization will eventually converge to the worst person on that level, benchmarked against the crappiest person at the next level up. Once a CEO has made up her mind about the decision to fire someone, she should go for it. As part of executing layoffs, CEOs should train their managers, and the managers should execute the layoffs. This gives employees the opportunity to seek direct feedback about what went well and what went poorly. This aspect of the book is incredibly important for all levels of entrepreneurs and provides a great starting place for CEOs.

Dig Deeper

  • Most drastic company pivots that worked out

  • Initial thoughts on the Opsware - HP Deal from 2007

  • A thorough history of HP’s ventures, spin-offs and acquisitions

  • Ben’s original blog post detailing the pivot from service provider to tech company

  • The First (1995-01) and Second Browser War (2004 - 2017)

tags: Apple, IBM, VC, Google, HP, Packard's Law, Amazon, Android, Internet History, Marc Andreessen, Andreessen Horowitz, Loudcloud, Opsware, BMC Software, Mark Hurd, Javascript, Shopify, Slack, Netflix, Compaq, DEC, Micro Focus, DXC Technology, Carly Firoina, Leo Apotheker, Meg Whitman, WeWork, Casper, Larry Page, Eric Schmidt, Sundar Pichai, batch2
categories: Non-Fiction
 

February 2020 - How the Internet Happened: From Netscape to the iPhone by Brian McCullough

Brian McCullough, host of the Internet History Podcast, does an excellent job of showing how individuals adopted the internet and made it central to their lives. He follows not only the success stories but also the flameouts, providing an accurate history of a time of rapid technological change.

Tech Themes

  1. Form to Factor: Design in Mobile Devices. Apple has a long history with mobile computing, but a few hiccups in the early days are rarely addressed. These hiccups also telegraph something interesting about the technology industry as a whole - design and ease of use often trump features. In the early 1990s, Apple created the Figaro, a tablet computer that weighed eight pounds and allowed for navigation through a stylus. The issue was that it cost $8,000 to produce and was 3/4 of an inch thick, making it difficult to carry. In 1993, the Company launched the Newton MessagePad, which cost $699 and included a calendar, address book, to-do list, and note pad. However, the form was incorrect again; the MessagePad was 7.24 in. x 4.5 in. and clunky. With this failure, Apple turned its attention away from mobile, allowing other players like RIM, maker of the BlackBerry, to gain leading market share. The BlackBerry pioneered the idea of a full keyboard on a small device, and Marc Benioff, CEO of salesforce.com, even called it “the heroin of mobile computing. I am serious. I had to stop.” IBM also tried its hand at mobile in 1992, creating the Simon Personal Communicator, which had the ability to send and receive calls, do email and fax, and sync with work files via an adapter. The issue was the design - 8 in. by 2.5 in. by 1.5 in. thick. It was a modern smartphone, but it was too big, clunky, and difficult to use. It wasn’t until the iPhone, and then Android, that someone really nailed the full smartphone experience. The lessons from this case study offer a unique insight into the future of VR. The company able to offer the correct form factor at a reasonable price can gain market share quickly. Others who try to pioneer too much at a time (cough, Magic Leap) will struggle.

  2. How to know you’re onto something. Facebook didn’t know. On November 30, 2004, Facebook surpassed one million users after being live for only ten months. This growth was truly remarkable, but Mark Zuckerberg still didn’t know Facebook was a special company. Sean Parker, the co-founder of Napster, had been mentoring Zuckerberg the prior summer: “What was so bizarre about the way Facebook was unfolding at that point, is that Mark just didn’t totally believe in it and wanted to go and do all these other things.” Zuckerberg even showed up to a meeting at Sequoia Capital still dressed in his pajamas with a PowerPoint entitled: “The Top Ten Reasons You Should Not Invest.” While this was partially a joke because Sequoia had spurned investing in Parker’s latest company, it represented how immature the whole Facebook operation was in the face of rapid growth. Facebook went on to release key features like groups, photos, and friending, but most importantly, it developed its revenue model: advertising. The quick user growth and increasing ad revenue got the attention of big corporations - Viacom offered $2B in cash and stock, and Yahoo offered $1B all cash. By this time, Zuckerberg realized what he had and famously spurned several offers from Yahoo, even after users reacted negatively to the most important feature Facebook would ever release, the News Feed. In today’s world, we often see entrepreneurs overhyping their companies, which is why Silicon Valley was in love with dropout founders for a time; their naivete and creativity could be harnessed to create something huge in a short amount of time.

  3. Channel Partnerships: why Apple was reluctant to launch a phone. Channel partnerships often go undiscussed at startups, but they can be incredibly useful in growing distribution. Some industries, such as the Endpoint Detection and Response (EDR) market, thrive on channel partnership arrangements. Companies like Crowdstrike engage partners (mostly IT services firms) to sell on their behalf, lowering Crowdstrike’s customer acquisition and sales spend. This can lead to attractive unit economics, but on the flip side, partners must get paid and educated on the selling motion, which takes time and money. Other channel relationships are just overly complex. In the mid-2000s, mobile computing was a complicated industry, and companies hated dealing with old legacy carriers and simple, clunky handset providers. Apple tried the approach of working with a handset provider, Motorola, but together they produced the terrible ROKR, which barely worked. The ROKR was built to run on the struggling Cingular (later AT&T) network, which was eager to do a deal with Apple in hopes of boosting usage on its network. After the failure of the ROKR, Cingular executives begged Jobs to build a phone for the network. Normally, the carriers had specifications for how phones were built for their networks, but Jobs ironed out a contract that exchanged network exclusivity for complete design control, and thus Apple entered mobile phones. The most important computing device of the 2000s and 2010s was built on a channel relationship.

Business Themes

  1. AOL-Time Warner: the merger destined to fail. To fully understand the AOL-Time Warner merger, you must first understand what AOL was, what it was becoming, and why it was operating on borrowed time. AOL started as an ISP, charging customers $9.95 for five hours of dial-up internet access, with each additional hour costing $2.95. McCullough describes AOL: “AOL has often been described as training wheels for the Internet. For millions of Americans, their aol.com address was their first experience with email, and thus their first introduction to the myriad ways that networked computing could change their lives.” AOL grew through one of the first viral marketing campaigns ever: AOL put CDs into newspapers, which allowed users to install AOL software and get online. The Company went public in March of 1992, and by 1996 it had 2.1 million subscribers; however, subscribers were starting to flee to cheaper internet access. It turned out that building an ISP was relatively cheap, and the high-margin cash flow business that AOL had built was suddenly threatened by a number of competitors. AOL persisted with its viral marketing strategy, and luckily many Americans still had not tried the internet yet and defaulted to AOL, the most popular option. AOL continued to add subscribers, and its stock price started to balloon; in 1998 alone the stock went up 593%. AOL was also inking ridiculous, heavily VC-funded deals with new internet startups. Newly public Drkoop, which raised $85M in an IPO, signed a four-year, $89M deal to be AOL’s default provider of health content. Barnes and Noble paid $40M to be AOL’s bookselling partner. Tel-Save, a long-distance phone provider, signed a deal worth $100M. As the internet bubble continued to grow, AOL’s CEO, Steve Case, realized that many of these new startups would be unable to fulfill their contractual obligations.
Early web traffic reporting systems could easily be gamed, and companies frequently had no business model other than attracting a certain demographic of traffic. By 1999, AOL had a market cap of $149.8B and was added to the S&P 500 index; it was bigger than both Disney and IBM. At this time, the world was shifting away from dial-up internet to modern broadband connections provided by cable companies. One AOL executive lamented: “We all knew we were living on borrowed time and had to buy something of substance by using that huge currency [AOL’s stock].” Time Warner was a massive media company, with movie studios, TV channels, magazines, and online properties. On January 10, 2000, AOL merged with Time Warner in one of the biggest mergers in history. AOL owned 56% of the combined company. Four days later, the Dow peaked and began a downturn that would decimate hundreds of internet businesses built on foggy fundamentals. Acquisitions happen for a number of reasons, but imminent death is not normally considered by analysts or pundits. When you see an acquisition, read the press release and understand why (at least from a marketing perspective) the two companies made a deal. Was the price just astronomical (i.e., Instagram), or was there something very strategic (i.e., Microsoft-GitHub)? When you read the press release years later, it should indicate whether the combination was actually proved out by the market.
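AOL's metered dial-up pricing is simple to model; a quick sketch using only the two figures quoted above shows how fast the overage added up for heavy users:

```python
def aol_monthly_bill(hours_online):
    """Early AOL dial-up pricing: $9.95 covers the first five hours,
    then $2.95 for each additional hour (figures as cited above)."""
    base, included_hours, hourly_overage = 9.95, 5, 2.95
    extra_hours = max(0, hours_online - included_hours)
    return round(base + extra_hours * hourly_overage, 2)

print(aol_monthly_bill(5))   # 9.95 - within the included hours
print(aol_monthly_bill(20))  # 54.2 - fifteen overage hours more than quintuple the bill
```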

  2. Acquisitions in the internet bubble: why acquisitions are really just guessing. AOL-Time Warner shows the interesting conundrum in acquisitions. HP founder David Packard coined this idea in Packard’s Law: “No company can consistently grow revenues faster than its ability to get enough of the right people to implement that growth and still become a great company. If a company consistently grows revenue faster than its ability to get enough of the right people to implement that growth, it will not simply stagnate; it will fall.” The author of Good to Great, Jim Collins, clarified this idea: “Great companies are more likely to die of ingestion of too much opportunity, than starvation from too little.” Acquisitions can be a significant cause of this outpacing of growth. Look no further than Yahoo, which acquired twelve companies between September 1997 and June 1999, including Mark Cuban’s Broadcast.com for $5.7B (per Kara Swisher at the WSJ in 1999), GeoCities for $3.6B, and Y Combinator founder Paul Graham’s Viaweb for $48M. It spent billions in stock and cash to acquire these companies! It’s only fitting that two internet darlings would eventually end up in the hands of big-telecom Verizon, which acquired AOL for $4.4B in 2015 and Yahoo for $4.5B in 2017, only to write down the combined value by $4.6B in 2018. In 2013, Yahoo acquired Tumblr for $1.1B, only to sell it off this past year for $3M. Acquisitions can really be overwhelming for companies, and frequently they don’t work out as planned. In essence, acquisitions are guesses about future value to customers, and rarely are they as clean and smart as technology executives make them seem. Some large organizations have gotten good at acquisitions - Google, Microsoft, Cisco, and Salesforce have all made meaningful acquisitions (Android, GitHub, AppDynamics, ExactTarget, respectively).

  3. Google and Excite: the acquisition that never happened. McCullough has an incredible quote nestled into the start of chapter six: “Pioneers of new technologies are rarely the ones who survive long enough to dominate their categories; often it is the copycat or follow-on names that are still with us to this day: Google, not AltaVista, in search; Facebook, not Friendster, in social networks.” Amazon obviously bucked this trend (he mentions that), but in search he is absolutely right! In 1996, several internet search companies went public, including Excite, Lycos, Infoseek, and Yahoo. As the internet bubble grew bigger, Yahoo was the darling of the day, and by 1998, it had amassed a $100B market cap. There were tons of companies in the market, including the players mentioned above plus AltaVista, AskJeeves, MSN, and others. The world did not need another search engine. However, in 1998, Google founders Larry Page and Sergey Brin found a better way to do search (the PageRank algorithm) and published their famous paper: “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” They then went out to these massive search engines and tried to license their technology, but no one was interested. Imagine passing on Google’s search engine technology. In an over-ingestion of too much opportunity, all of the search engines were trying to be like AOL and become portals to the internet, providing various services from their homepages. From an interview in 1998: “More than a "portal" (the term analysts employ to describe Yahoo! and its rivals, which are most users' gateway to the rest of the Internet), Yahoo! is looking increasingly like an online service--like America Online (AOL) or even CompuServe before the Web.” Companies trying to do too much (cough, Uber self-driving cars, cough). Excite showed the most interest in Google’s technology, and Page offered it to the Company for $1.6M in cash and stock, but Excite countered at $750,000.
Excite had honest interest in the technology, and a deal was still on the table until it became clear that Larry wanted Excite to rip out its search technology and use Google’s instead. Unfortunately, that was too big a risk for the mature Excite. The two companies parted ways, and Google eventually became the dominant player in the industry. Google’s focus was clear from the get-go: build a great search engine. Only when it was big enough did it plunge into acquisitions and development of adjacent technologies.
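The PageRank idea mentioned above - a page is important if important pages link to it - can be sketched in a few lines of power iteration (a toy illustration of the published algorithm, not Google's production system):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank via power iteration. `links` maps each page to the
    list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        # every page keeps a small baseline, then receives shares from its in-links
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks if outlinks else pages  # dangling pages spread evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: "c" is linked to by both "a" and "b", so it ends up ranked highest.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # prints "c"
```

The insight that made this licensable technology: the ranking comes from the link structure of the web itself, not from analyzing page content alone.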

Dig Deeper

  • Raymond Smith, former CEO of Bell Atlantic, describing the technology behind the internet in 1994

  • Bill Gates’ famous memo: THE INTERNET TIDAL WAVE (May 26, 1995)

  • The rise and fall of Netscape and Mosaic in one chart

  • List of all the companies made famous and infamous in the dot-com bubble

  • Pets.com S-1 (filing for IPO) showing a $62M net loss on $6M in revenue

  • Detail on Microsoft’s antitrust lawsuit

tags: Apple, IBM, Facebook, AT&T, Blackberry, Sequoia, VC, Sean Parker, Yahoo, Excite, Netscape, AOL, Time Warner, Google, Viaweb, Mark Cuban, HP, Packard's Law, Disney, Steve Case, Steve Jobs, Amazon, Drkoop, Android, Mark Zuckerberg, Crowdstrike, Motorola, Viacom, Napster, Salesforce, Marc Benioff, Internet, Internet History, batch2
categories: Non-Fiction
 

October 2019 - The Design of Everyday Things by Don Norman

Psychologist Don Norman takes us on an exploratory journey through the basics of functional design. As the consumerization of software grows, this book’s key principles will become increasingly important.

Tech Themes

  1. Discoverability and Understanding. Discoverability and Understanding are two of the most important principles in design. Discoverability answers the question: “Is it possible to figure out what actions are possible and where and how to perform them?” Discoverability is absolutely crucial for first-time application users because poor discovery of actions leads to a low likelihood of repeat use. In terms of Discoverability, Scott Berkun notes that designers should prioritize what can be discovered easily: “Things that most people do, most often, should be prioritized first. Things that some people do, somewhat often, should come second. Things that few people do, infrequently, should come last.” Understanding answers the questions: “What does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?” We have all seen and used applications where features and complications dominate the settings and layout of the app. Understanding is simply about allowing the user to make sense of what is going on in the application. Together, Discoverability and Understanding lay the groundwork for successful task completion before a user is familiar with an application.

  2. Affordances, Signifiers and Mappings. Affordances represent the set of actions that are possible; signifiers communicate the correct action that should take place. If we think about a door, depending on the design, possible affordances could be: push, slide, pull, twist the knob, etc. Signifiers represent the correct action, or the action the designer would like you to perform. In the context of a door, a signifier might be a metal plate that makes it obvious that the door must be pushed. Mappings provide straightforward correspondence between two sets of objects. For example, when setting the brightness on an iPhone, swiping up increases brightness and swiping down decreases brightness, as a new user would expect. Design issues occur when there is a mismatch between affordances, signifiers and mappings. Doors provide another great example of poor coordination between affordances, signifiers and mappings - everyone has encountered a door with a handle and a sign above it that says push. This is normally followed by an uncomfortable pushing and pulling motion to discover the actions possible with the door. Why is there a handle if I am supposed to push? Good design and alignment between affordances, signifiers and mappings make life easier for everyone.

  3. The Seven Stages of Action. Norman lays out the psychology underpinning user decisions in seven stages - Goal, Plan, Specify, Perform, Perceive, Interpret, Compare. The first three (Goal, Plan, Specify) represent the clarification of an action to be taken on the world. Once the action is Performed, the final three steps (Perceive, Interpret, Compare) are about making sense of the new state of the world. The seven stages of action help generalize the typical user’s interactions with the world. With these stages in mind, designers can understand potential breakdowns in discoverability, understanding, affordances, signifiers, and mappings. As users perform actions within applications, understanding each part of the customer journey allows designers to prioritize feature development and discoverability.

Business Themes

[Image: Norman’s seven stages of action, redrawn from Norman (2001)]
  1. The best product does not always win, but... If the best product always won out, large entrenched incumbents across the software ecosystem like IBM, Microsoft, Google, SAP, and Oracle would be much smaller companies. Why are there so many large behemoths that won’t fall? Each company has made deliberate design decisions to reduce the amount of customer churn. While most of the large enterprise software providers suffer from feature creep, the product and deployment complexity can often be a deterrent to churn. For example, enterprise CIOs do not want to spend budget to re-platform from AWS to Azure unless there is a major incident or continued frustration with ease of use. Interestingly enough, though, as we’ve discussed, the transition from license-maintenance software to SaaS, as well as the consumerization of the enterprise, are changing the necessity of good design and user experience. Look at Oracle, for example: the business has made several acquisitions of applications built on Oracle databases, but the poor user experience and complexity of those applications is starting to push Oracle out of businesses.

  2. Shipping products on time and on budget. “The day a product development process starts, it is behind schedule and above budget.” The product design process is often long and complex because a wide array of disciplines is involved. Each discipline thinks it is the most important part of the process and may have different reasons for including a particular feature, which may conflict with good design. To alleviate some of that complexity, Norman suggests hiring design researchers who are separate from the product development process. These researchers focus on how users are working in the field and are coming up with additional use cases and designs all the time. When the development process kicks off, target features and functionality have already been suggested.

  3. Why should business leaders care about good design? We have already discussed how product design can act as a deterrent to churn. If processes and applications become integral to company function, then there is a low chance of churn unless there is continued frustration with ease of use. Measuring product-market fit is difficult, but from a metrics perspective, companies can look at gross churn ($ or customer count lost / beginning ARR or beginning customers) or NPS to judge how well their product is being received. Good design is a direct contributor to improved NPS and better retention. When you complement good design with several hooks into the customer, churn reduces.
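The gross churn math above is simple enough to sketch directly; the dollar figures and customer counts below are hypothetical examples, not data from any company in the book:

```python
def gross_dollar_churn(churned_arr: float, beginning_arr: float) -> float:
    """Gross $ churn: ARR lost during the period / ARR at period start."""
    return churned_arr / beginning_arr

def gross_customer_churn(customers_lost: int, beginning_customers: int) -> float:
    """Gross logo churn: customers lost / customers at period start."""
    return customers_lost / beginning_customers

# Hypothetical year: $150k of a $2M beginning-ARR base churned,
# and 6 of 120 customers left.
print(f"{gross_dollar_churn(150_000, 2_000_000):.1%}")  # 7.5%
print(f"{gross_customer_churn(6, 120):.1%}")            # 5.0%
```

Tracking both versions matters: dollar churn weights big accounts, while logo churn treats every customer equally, so a product that only delights small customers can look healthy on one metric and sick on the other.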

Dig Deeper

  • UX Fundamentals from General Assembly

  • Why game design is crucial for preventing churn

  • Figma and InVision - the latest product development tools

  • Examples of bad user experience design

  • Introduction to Software Usage Analytics

tags: Internet, UX, UI, Design, Apple, App Store, AWS, Azure, Amazon, Microsoft, Oracle, batch2
categories: Non-Fiction
 

September 2019 - Ready Player One by Ernest Cline

Ernest Cline’s magical world of virtual reality explores a potential new medium of communication through an excellent heroic tale.

Tech Themes

1. Wide-ranging applicability and use cases of Virtual Reality. Although the novel was written in 2011, Ernest Cline does an incredible job of detailing the complex and numerous use cases of VR throughout the novel. Cline’s 18-year-old main character Wade Watts attends school via VR, where a limitless number of students can all learn from the same teacher. Beyond that, different worlds and galaxies are easily conjured up with different themes, time periods and technology, taking learning and experience to another level: Wade spends time playing old video games in an effort to unlock certain clues about James Halliday; Wade re-enacts all of Matthew Broderick’s part in the movie WarGames in an effort to unlock one of the keys; Aech and Wade frequently hang out in the Basement, a re-created 1980s recreational room with vintage magazines and game consoles. All of these distinct use cases – education, gaming, social networking, and entertainment – are the promise of Virtual Reality. There is a long way to go before that promise is met.

2. The intersection of the online/offline world. As James Halliday writes in Anorak’s Almanac: “Going outside is highly overrated.” Ready Player One does a great job of exploring the conflation of the online and offline worlds. The book weaves experiences from this intersection into critical moments, including Wade’s escape from the Stacks and his imprisonment by IOI. While there is a tangible feeling that online is the much-preferred experience for all the reasons discussed above, it’s the offline, in-person events that truly shape the heroic ending of the book. This serves as a reminder that the OASIS is very much a virtual reality, and it explores the need for in-person human connection. Ironically, this is something Halliday sorely missed out on, as shown through his unrequited love for Kira, the wife of Ogden Morrow (co-creator of the OASIS). As big companies move into our homes through Google Home speakers, Amazon Echos, and Facebook Portals, the human connection element needs to be maintained.

3. The ability to disguise your identity online. “In the OASIS, you could become whomever and whatever you wanted to be, without ever revealing your true identity, because your anonymity was guaranteed.” This quote about the OASIS is largely true of today’s Internet. Through private browsing, Virtual Private Networks, and avoiding Google and ad-tagging websites, people are already able to stay anonymous online. But what the OASIS does in addition is allow you to modify not only your back-story but also how you appear to others, something that is very important in VR. While there is no question that Wade, Art3mis and Aech are able to avoid insecurities by masking their identities, eventually those insecurities are revealed, albeit with little consequence. Given the myriad leaks and breaches of the last few years (Yahoo, Facebook, DoorDash, etc.), as the VR ecosystem continues to grow, increasing amounts of privacy protection will be needed to maintain anonymity.

Business themes

1. What is the dominant revenue model in VR? The evil villains at Innovative Online Industries (IOI) and their army of Sixers have made several hostile takeover attempts to acquire Halliday’s Gregarious Simulation Systems in order to convert it to a paid user model. IOI is the world’s largest internet service provider and, just like other three-letter tech behemoths (cough, IBM, cough), fits the classic evil corporation vibe. Dismissing the potential business and technology conflicts (the world’s largest ISP is probably critical in delivering the OASIS throughout the world), it’s interesting to theorize about what the dominant revenue model of VR may be. Facebook recently launched its VR world to complement its Oculus devices, and there have been varied attempts to launch similar software worlds like Rec Room. The big discovery Google made early on was that advertising would be the business model of the web. Facebook copied this as it created social networking, and as devices transitioned from desktop to mobile, and image to video, advertising continued to be the dominant mode of content monetization. Is there any reason to think VR will be any different? Potentially. The current dominant model for video gaming is subscription-based, freemium (paying for enhanced abilities, character changes, etc.) or single purchase. While there is no reason these ideas can’t be combined with advertising, the idea of a multi-world VR landscape may reduce some of the targeted ROI you receive from very specific ad-targeting on Instagram and Google today. In a limitless world, advertising to specific people will be difficult. Beyond that, porting the mish-mash of complex technologies used in today’s advertising landscape would add even more challenge.

2. The BIG, evil tech corporation. IOI is the quintessential evil technology company. As the world’s largest ISP, IOI could be a reference to Comcast, which is the United States’ largest ISP and often referenced as one of the most hated companies. Comcast, like other ISPs, is always facing the challenge of serving millions of subscribers, but unlike other companies, it is monopolistic in certain areas where it is the only viable provider of internet, allowing it to raise prices and treat customers poorly. The big, evil technology corporation cliché has been around for a long time, and today’s largest tech companies have all spent some time being that cliché. This dynamic can arise for many reasons. At Amazon, it’s the continued alienation of open source communities, the anti-competitive behavior around its search algorithm and the smothering of small vendors on its marketplace. Facebook and Google have both faced privacy concerns. Google has been sued for manipulating search on mobile devices. Microsoft was sued over anti-trust issues with browsers. As startups grow to dominate their core businesses, unless they continue innovating, they begin acting defensively to maintain their leading position. Facebook copied Snapchat’s Stories feature almost immediately after it came out. IBM had a book written about it in the 1980s claiming it was anticompetitive. There is a reason corporate communications (WeWork, lol) and maintaining the image of a positive force for good are so important. Every major technology company has spent time as the evil one; some have just spent more time than others.

3. Difficulty in creating VR applications. Ready Player One stoked a lot of interest in the promise of VR, but the actual implementation is incredibly difficult with the hardware and software we have available as tools today. Moore’s law is slowing, and some computer scientists have suggested specialized chips to address the demands of newer technologies like Artificial Intelligence, Virtual Reality and Deep Learning. After Facebook acquired Oculus in 2014 for $2.4B, funding continued to flow into VR startups. Magic Leap, the highly secretive and most heavily funded VR startup, has raised $2.3B on its own, and after years of development it finally released its hardware at over $2,000 per device; it’s unclear whether it makes a profit on any sales yet. More recently, several VR companies have gone bankrupt and laid off employees as product development didn’t reach applications or end users before the funding ran out. While the software and hardware continue to improve, a lot still needs to be figured out before VR becomes mainstream.

Dig Deeper

  • VR Garden in Montreal

  • Oculus co-founder Palmer Luckey’s review of Magic Leap

  • Augmented Reality and Virtual Reality in Healthcare

  • Deep dive into the secretive Magic Leap

  • The real world easter egg hunt from Ready Player One

tags: Ernest Cline, VR, AR, Video Games, IBM, Facebook, Snap, Google, Amazon, Apple, War Games, VPN, DoorDash, Yahoo, Rec Room, Magic Leap, Oculus, Deep Learning, batch2
categories: Fiction
 

August 2019 - How Google Works by Eric Schmidt and Jonathan Rosenberg

While at times it reads as a piece of Google propaganda, this book offers insight into the management techniques that Larry, Sergey and Eric employed to grow the Company to massive scale. It’s hard to read this book and believe that all of these practices were actually implemented – it reads like a “How to build a utopian work culture” manual - but some of the principles are interesting, and more importantly it gives us insight into what Google values in its products and operations.

Tech Themes

  1. Smart Creatives. Perhaps the most important emphasis in the book is placed on the recruiting and hiring of what Eric Schmidt and Jonathan Rosenberg have termed Smart Creatives – “people who combine technical & business knowledge, creativity and an always-learning attitude.” While these seem like the desired platitudes of every Silicon Valley employee, it gives a window into what Google finds important in its employees. For example, unlike Amazon, which has both business product managers and technical product managers, Google prefers its PMs to be both business-focused and highly technical. Smart Creatives are mentioned hundreds of times in the book and continually underpin the success of new product launches. The book almost harps on it too much, to the point where it feels like Eric Schmidt was trying to convince all Googlers that they were truly unique.

  2. Meetings, Q&A, Data and Information Management. Google is one of the many Silicon Valley companies that hosts company-wide all-hands Q&A sessions on Fridays where anyone can ask a question of Google’s leadership. Information transparency is critically important to Google, and they try to make data accessible throughout the organization at all times. This trickles into other aspects of Google’s management philosophy, including meetings and information management. At Google, meetings have a single owner, and while laptops largely remain closed, it’s the owner’s job to present the relevant data and derive the correct insights for the team. To that end, Google makes its information transparently available for all to access – this process is designed to avoid information asymmetry at management levels. One key issue faced by poor management teams is receiving only the best information at the top – this is countered at Amazon through incredibly blunt and aggressive communication; Google, on the other hand, maintains its intense focus on data and results to direct product strategy, so much so that it even studies its own teams’ productivity using internal data. Google’s laser focus on data makes sense given its main advertising products harvest the world’s internet user data for its benefit, so understanding how to leverage data is always a priority at Google.

  3. 80/20 Time. As part of Google’s product innovation strategy, employees can spend 20% of their work time on creative projects separate from their current role. While the idea sounds like an awesome way to keep employees interested and motivated, in practice it’s much more structured. Ideas have to be approved by managers, and they are only allowed if they can directly impact Google’s business. Some great innovations were spawned out of this policy, including Gmail and Google Maps, but Google employees have joked that it should really be called “120% time,” since the projects happen on top of a full workload.

Business Themes

  1. Google’s Cloud Strategy. “You should spend 80% of your time on 80% of your revenue.” This quote speaks volumes when it comes to Google’s business strategy. Google clearly is the leader in search and search advertising. Not only is it the default search engine preferred by most users, it also owns the browser market that directs searches to Google, and the most used operating system. It has certainly created a dominant position in the market and even done illegal things to maintain that advantage. Google also maintains and mines your data, and as Stratechery has pointed out, they are not hiding it anywhere. But what happens when the next wave of computing comes, and you are so focused on your core business that you end up light years behind competition from Amazon (Web Services) and Microsoft (Azure)? That’s where Google finds itself today, and recent outages and issues haven’t helped. So what is Google’s “Cloud Strategy?” The answer is lower-priced, open source alternatives. Google famously developed and open-sourced Kubernetes, the container orchestration platform, which has become an increasingly important technology as developers opt for lightweight alternatives to traditional virtual machines. They have followed this open sourcing with a “we are going to open source everything” mentality that is also being employed, a bit more defensively, at Microsoft. Google seeks to be an open source layer, either through Kubernetes (which runs in Azure and AWS) or through other open source platforms (Anthos), and to capture some of your company’s low-churn cloud spend. Their issue is scale and support. With their knowledge of data centers and parallel computing, cloud capabilities seemed like an obvious place where Google could win, but they fumbled on building a great product because they were so focused on protecting their core business.
They are in a catch-up position, and the new CEO of Google Cloud, Thomas Kurian (formerly at Oracle), isn’t afraid to make acquisitions to build out missing product capabilities, which is why it bought Looker earlier this year. It makes sense why a company as focused on data as Google would want a cloud-focused data analysis tool. Now they are betting on M&A and a highly open-sourced, multi-public-cloud future as the only way they can win.

  2. “Objective” Key Results. As mentioned previously, the way Google combats potential information asymmetries is by empowering individuals throughout the organization with data. This extends to famous venture capitalist (and investor in both Google and Amazon) John Doerr’s favorite data to examine – OKRs, or Objectives and Key Results. Each Googler has a specific set of OKRs that they are responsible for maintaining on a quarterly basis. Every person’s OKRs are readily available for anyone to see throughout the Company, i.e. full transparency. OKRs are public, measurable, and ambitious. This keeps engineers focused and accountable, as long as the OKRs are set correctly and actually measure outcomes. These fit perfectly with Google’s focus on mining and monitoring data at all times: its products and its employees need to be data-driven at all times.
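Google famously grades key results on a 0.0–1.0 scale at quarter end. A minimal sketch of that grading scheme — the objective, key results, and numbers below are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float
    actual: float = 0.0

    def score(self) -> float:
        # Google-style grading: 0.0 (missed entirely) to 1.0 (fully achieved)
        return min(self.actual / self.target, 1.0)

@dataclass
class OKR:
    objective: str
    key_results: list = field(default_factory=list)

    def score(self) -> float:
        # An objective's grade is the average of its key results' grades
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)

# Hypothetical Googler's OKR: public to the whole company, measurable, ambitious
okr = OKR("Improve search quality", [
    KeyResult("Hit p95 latency target", target=1.0, actual=0.8),
    KeyResult("Ship 3 ranking experiments", target=3, actual=3),
])
print(okr.score())  # 0.9
```

The key property is that each key result is a measurable outcome, not an activity — which is exactly the “set correctly and actually measure outcomes” caveat above.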

Dig Deeper

  • Recent reports highlight numerous cultural issues at Google, that are not addressed in the book

  • Google Cloud was plagued by internal clashes and missed acquisitions

  • PayPal mafia veteran Keith Rabois won’t fund Google PMs as founders

  • List of Google’s biggest product failures over time

  • Stadia: Google’s game streaming service

tags: Google, Cloud Computing, Scaling, Management, Internet, China, John Doerr, OKRs, Oracle, GCP, Google Cloud, Android, Amazon
categories: Non-Fiction
 

May 2019 - The Everything Store: Jeff Bezos and the Age of Amazon by Brad Stone

This book is a great deep dive on the history of Amazon and how it became the global powerhouse that it is today.

Tech Themes

  1. The Birth of AWS. We’ve looked at the software transition from on-premise, license-maintenance software to SaaS hosted in the cloud, but let’s dive deep into how the cloud came to be. The first ideas of AWS go back to 2002, when Bezos met with O’Reilly Media, a book publisher that, in order to compete with Amazon, had created a way to scrape the latest book rankings off Amazon’s website. O’Reilly suggested creating a set of tools to let developers access Amazon’s rankings, and in 2003 Amazon launched Amazon Web Services (AWS) to create commerce APIs for third parties. Around this time, Amazon had centralized its IT computing resources in a separate building, with hardware professionals operating and maintaining the infrastructure for the entire company. While parts of the infrastructure had improved, Amazon was struggling internally to provision and scale its computing resources. In 2004, Chris Pinkham, head of the infrastructure division, relocated to South Africa to open up Amazon’s first office in Cape Town. His first order of business was to figure out the best way to provision resources internally to allow developers to work on all types of applications on Amazon’s servers. Chris elected to use Xen, a hypervisor that sits on top of the physical hardware and acts as a controller, allowing multiple virtual machines to share the same infrastructure. This led to the development of Elastic Compute Cloud (EC2). During this time, another group within Amazon was working on solving the problem of storing the millions of gigabytes of data Amazon had created. This team was led by Alan Atlas, who could not escape Bezos’ laser focus: “It would always start out fun and happy, with Jeff’s laugh rebounding against the walls. Then something would happen and the meeting would go south and you would fear for your life. I literally thought I’d get fired after every one of those meetings.” In March 2006, Amazon launched the Simple Storage Service (S3), and then a few months later launched EC2.
Solving internal problems can lead to incredibly successful companies; Slack, for example, originally started as a game development company but couldn’t get the product off the ground and eventually pivoted into the messaging giant that it is today: “Tiny Speck, the company behind Glitch, will continue. We have developed some unique messaging technology with applications outside of the gaming world and a smaller core team will be working to develop new products.”

  2. A9. In the early 2000s, Google arrived on the scene and began to sit in between Amazon and potential sales. Around this time, Amazon’s core business was struggling, and a New York Times article even called for Bezos to resign. Google was siphoning off Amazon’s engineers, and Bezos knew he had to take big strategic bets in order to ward off Google’s advances. To do that, he hired Udi Manber, a former Yahoo executive with a PhD in computer science who had written an authoritative textbook on algorithms. In 2003, Udi set up shop in Palo Alto in a new Amazon subsidiary called A9 (shorthand for “Algorithms”). The new subsidiary’s sole goal was to create a web search engine that could rival Google’s. While A9.com never completely took off, the new development center did improve Amazon’s website search and created Clickriver, the beginning of Amazon’s advertising business, which minted $10B in revenue last year. Udi eventually became VP of Engineering for all of Google’s search products and then its YouTube division. A9 still exists to tackle Amazon’s biggest supply chain math problems.

  3. Innovation, Lab126 and the Kindle. In 2004, Bezos called Steve Kessel into his office and moved him from his role as head of Amazon’s successful online books business to run Amazon Digital, a small and not yet successful part of Amazon. This would become a repeating pattern in Kessel’s career; he now finds himself head of all of Amazon’s physical locations, including its Whole Foods subsidiary. Bezos gave Kessel an incredibly abstract goal: “Your job is to kill your own business. I want you to proceed as if your goal is to put everyone selling physical books out of a job.” Bezos wanted Kessel to create a digital reading device. Kessel spent the next few months meeting with executives at Apple and Palm (maker of the then-famous Palm Pilot) to understand the current challenges in creating such a device. Kessel eventually settled into an empty room at A9 and launched Lab126 (1 stands for A, 26 for Z – an ode to Bezos’s goal to sell every book from A to Z), a new subsidiary of Amazon. After a long development process and several supply chain issues, the Company launched the Kindle in 2007.

Business Themes

  4. Something to prove: Jeff Bezos’s Childhood. What do Jeff Bezos, Steve Jobs, Elon Musk and Larry Ellison (founder of Oracle) all have in common? They all had somewhat troubled upbringings. Jobs and Ellison were famously put up for adoption at young ages. Musk’s parents divorced, and Elon endured several years of an embattled relationship with his father. Jeff Bezos was born Jeffrey Preston Jorgenson on January 12, 1964. Ted Jorgenson, Bezos’s biological father, married his mother, Jacklyn “Jackie” Gise, after Gise became pregnant at age sixteen. The couple had a troubled relationship; Ted was immature and an inattentive father, and they divorced in 1965. Jacklyn eventually met Miguel Bezos, a Cuban immigrant college student, while she was working the late shift at the Bank of New Mexico’s accounting department. Miguel and Jacklyn were married in 1968, and Jeffrey Jorgenson became Jeffrey Bezos. Several books have theorized that the maniacal drive of these entrepreneurs relates back to a need to prove self-worth after being rejected by loved ones at a young age.

  5. Anti-Competitive Amazon & the Story of Quidsi. Amazon has an internal group dubbed Competitive Intelligence, whose sole job is to research the products and services of competitors and present results to Jeff Bezos so he can strategically address any places where Amazon may be losing to the competition. In the late 2000s, Competitive Intelligence began tracking a company known as Quidsi, famous for its site Diapers.com, which provided discount baby products that could be purchased on a recurring subscription basis. Quidsi had grown quickly because it had customized its distribution system for baby products. In 2009, Competitive Intelligence reached out to Quidsi founder Marc Lore (founder of Jet.com and currently head of Walmart e-commerce), saying Amazon was looking to invest in the category. After rebuffing the offer, Quidsi soon noticed that Amazon was pricing its baby products 30% cheaper in every category; the company even tried dropping prices lower, only to see Amazon’s pages reset to even lower prices. After a few months, Quidsi knew it couldn’t remain in a price battle for long and launched a sale of the company. Walmart agreed in principle to acquire the business for $900M but upon further diligence reduced its bid, which prompted Lore to call Amazon. Lore and his executive team went to meet with Amazon, and during the meeting, Amazon launched Amazon Mom, which gave 30% discounts on all baby products and allowed participants to purchase products on a recurring basis. At one point, Amazon’s prices dipped so low that it was on track to lose $100M in three months in the diapers category alone. Amazon submitted a $540M bid for Quidsi and subsequently entered into an exclusivity period with the Company. As the end of exclusivity grew nearer, Walmart submitted a new bid at $600M, but the Amazon team threatened a full-on price war if Quidsi went with Walmart, so on November 8, 2010, Quidsi was acquired by Amazon for $540M.
One month after the acquisition, Amazon stopped the Amazon Mom program and raised all of its prices back to normal levels. The Federal Trade Commission reviewed the deal for four months (longer than usual) but ultimately allowed the acquisition because it did not create a monopoly in the sale of baby products. Quidsi was ultimately shut down by Amazon in 2017 because Amazon was unable to operate it profitably.

  6. The demanding Jeff Bezos and six-page memos. At Amazon, nobody uses PowerPoint presentations. Instead, employees write out six-page narratives in prose. Bezos believes this helps create clear and concise thinking that gets lost in flashy PowerPoint slides. Whenever someone wants to launch a new initiative or project, they have to submit a six-page memo framed as if a customer might be hearing about it for the first time. Each meeting begins with the group reading the document, and the discussion begins from there. At times, especially around the release of AWS, these documents grew increasingly long and complex given that the products being described did not already exist. Bezos often responds intensely to these memos, with bad responses including: “Are you just lazy or incompetent?” and “If I hear that idea again, I’m gonna have to kill myself” and “This document was clearly written by the B team. Can someone get me the A team document? I don’t want to waste my time with the B team document.” It’s no wonder Amazon has a reputation as such a tough place to work.

Dig Deeper

  • How Amazon took the opposite approach that Apple took to pricing EC2 and S3

  • The failed Amazon Fire Phone and taking big bets

  • The S Team - Amazon’s intense executives

  • The little-known deal that saved Amazon from the dot-com crash

  • Mary Meeker, Amazon and the internet bubble: Amazon.bomb: How the internet's biggest success story turned sour

  • Customer Centric: Amazon Celebrates 20 Years Of Stupendous Growth As 'Earth's Most Customer-Centric Company

tags: Amazon, Cloud Computing, e-Commerce, Scaling, Seattle, Brad Stone, Jeff Bezos, Elon Musk, Steve Jobs, Mary Meeker, EC2, S3, IaaS, batch2
categories: Non-Fiction
 

April 2019 - Only the Paranoid Survive by Andrew S. Grove

This book details how to manage a company through complex industry change. It is incredibly prescient and a great management book.

Tech Themes

  1. The decoupling of hardware and software. In the early days of personal computers (1980s), the hardware and software were both provided by the same company. This is complete vertical alignment, similar to what we’ve discussed before with Apple. The major providers of the day were IBM, Digital Equipment Corporation (DEC, acquired by Compaq, which was in turn acquired by HP), Sperry Univac, and Wang. When you bought a PC, the sales and distribution, application software, operating system, and chips were all handled by the same company. This created extreme vendor lock-in because each PC had a different and complicated way of operating. Customers typically stayed with the same vendor for years to avoid the headache of learning a new system. Over time, driven by increases in memory efficiency and the rise of Intel (where Andy Grove was employee #3), the PC industry began to shift to a horizontal model. In this model, retail stores (Micro Center, Best Buy, etc.) provided sales and distribution; dedicated software companies (Apple at the time, Microsoft, Mosaic, etc.) provided applications; Intel provided the chips; and Microsoft provided the operating system (MS-DOS, then Windows). This decoupling produced a more customized computer at a significantly lower cost and became the dominant purchasing model going forward. Dell was the first to really capitalize on this trend.

  2. Microprocessors and memory chips. Intel was founded in 1968 and was first to market with a chip that could store computer memory. Demand was strong because the product was the first of its kind, and Intel significantly ramped up production to satisfy it. By the early eighties, Intel was a computing powerhouse, and its name was synonymous with computer memory. In the mid-eighties, Japanese memory producers appeared on the scene, able to produce higher-quality chips at a lower cost. At first, Intel saw these producers as a healthy backup plan for when demand exceeded Intel’s supply, but over time it became clear Intel was losing market share. Seeing this commoditization, Intel decided to pivot out of the memory business and into the newer, less competitive microprocessor business. The microprocessor (or CPU) handles the execution of tasks within the computer, while memory simply stores the byproducts of that execution. As memory became easier to produce, costs dropped dramatically and the business grew more competitive, with producers consistently undercutting one another to win business. Microprocessors, on the other hand, became increasingly important as the internet grew, applications became more complex, and computer speed became a top selling point.

  3. Mainframes to PCs. IBM had become the biggest technology company in the world on the back of mainframes: massive, powerful, inflexible, and expensive mega-computers. As the computing industry shifted to PCs and moved from vertical alignment to horizontal, IBM was caught flat-footed. In 1981, IBM chose Intel to provide the microprocessor for its PC, which led to Intel becoming the most widely accepted supplier of microprocessors. The industry followed volume: manufacturers built on top of Intel’s architecture, developers wrote applications for the dominant operating system (Microsoft Windows), and over time Intel and Microsoft encroached on IBM’s turf. Grove’s reasoning for this is simple: “IBM was composed of a group of people who had won time and time again, decade after decade, in the battle among vertical computer players. So when the industry changed, they attempted to use the same type of thinking regarding product development and competitiveness that had worked so well in the past.” Just because a company has been successful before doesn’t mean it will be successful again when change occurs.

The six forces acting on a business at any time. When one becomes outsized, it can represent a strategic inflection point to the business.


Business Themes

  1. Strategic Inflection Points and 10x forces. A strategic inflection point is a fundamental shift in a business driven by industry dynamics. Well-known examples include: mainframes to PCs, vertical computer production to horizontal production, on-premise hardware to the cloud, shrink-wrapped software to SaaS, and physical retail to e-commerce. These strategic inflection points are caused by 10x forces, the underlying shifts in technology or demand behind the inflection point. Extending Porter’s five forces model, these forces can come from your current competitors, complementors, customers, suppliers, potential competitors, and substitutes. For Intel, the 10x force came from its Japanese competitors, who could produce better-quality memory at a substantially lower cost. Recognizing these inflection points can be difficult, and recognition takes place over time in stages. Grove describes it best: “First, there is a troubling sense that something is different. Things don’t work the way they used to. Customers’ attitudes toward you are different. The trade shows seem weird. Then there is a growing dissonance between what your company thinks it is doing and what is actually happening inside the bowels of the organization. Such misalignment between corporate statements and operational actions hints at more than the normal chaos that you have learned to live with. Eventually, a new framework, a new set of understandings, a new set of actions emerges…working your way through a strategic inflection point is like venturing into what I call the valley of death.”

  2. The bottom-up, top-down way to “Let chaos reign.” The way to respond to a strategic inflection point comes through experimentation. As Grove says, “Loosen up the level of control that your organization normally is accustomed to. Let people try different techniques, review different products. Only stepping out of the old ruts will bring new insights.” This idea was also recently discussed by Jeff Bezos in his annual shareholder letter, where he likened it to wandering: “Sometimes (often actually) in business, you do know where you’re going, and when you do, you can be efficient. Put in place a plan and execute. In contrast, wandering in business is not efficient … but it’s also not random. It’s guided – by hunch, gut, intuition, curiosity, and powered by a deep conviction that the prize for customers is big enough that it’s worth being a little messy and tangential to find our way there. Wandering is an essential counter-balance to efficiency. You need to employ both. The outsized discoveries – the “non-linear” ones – are highly likely to require wandering.” When faced with mounting evidence that things are changing, begin the process of strategic wandering. This needs to be coupled with bottom-up action from middle managers who are exposed to the underlying industry and technology change on a day-to-day basis. Strategic wandering, reinforced with the buy-in and action of middle management, can produce major advances, as was the case with Amazon Web Services.

  3. Traversing the valley of death. The first task in traversing a strategic inflection point is to create a clear, explainable mental image of what the business looks like on the other side. This becomes your new focus and the company’s mantra. For Intel, in 1986, it was, “Intel, the microcomputer company.” This phrase did two things: it broke the previous synonymy of Intel with ‘memory’ and signaled internally a new focus on microprocessors. Next, the company should redeploy its best resources, including the CEO, to its biggest problems. Grove described this process as “going back to school.” He met with managers and engineers and grilled them with questions to fully understand the state and potential of the inflection point. Once the new direction is decided, the company should focus all of its efforts in one direction without hedging. While hedging may feel comfortable, it signals an unclear direction and can be incredibly expensive.

Dig Deeper

  • Mapping strategic inflection points to product lifecycles

  • Review of grocery strategic inflection points by Coca-cola

  • Strategic inflection point for Kimberly Clark in the paper industry: “Sell the Mills”

  • Andy Grove survived the Nazi and Communist regimes of Hungary

  • Is Facebook at a strategic inflection point?

tags: Andy Grove, Intel, Chips, hardware, Amazon, Jeff Bezos, Strategic inflection point, 10x force, software, batch2
categories: Non-Fiction
 
