Tech Book of the Month

February 2022 - Cable Cowboy by Mark Robichaux

This month we jump into the history of the cable industry in the US with Cable Cowboy. The book follows cable’s main character, John Malone, the intense, deal-addicted CEO of Tele-Communications, Inc. (TCI), over more than 30 years.

Tech Themes

  1. Repurposed Infrastructure. Repurposed infrastructure is one of the incredible drivers of technological change covered in Carlota Perez’s Technological Revolutions and Financial Capital. When a new technology wave comes along, it builds on the back of existing infrastructure to reach massive scale. Railroads laid the foundation for oil transport pipelines. Later, telecommunications companies used the miles and miles of cleared railroad land to hang wires and provide phone service throughout the US. Cable systems were initially used to pull down broadcast signals and bring them to remote places. Over time, more and more content providers like CNN, TBS, and BET started to produce shows with cable distribution in mind. Cable became a bigger and bigger presence, so when the internet began to gain steam in the early 1990s, cable was ready to play a role. It just so happened that cable was best positioned to provide internet service to individual homes because, unlike the phone companies’ copper wiring, cable had made extensive use of coaxial cable, which provided much faster speeds. In 1997, after an extended period of underperformance for the cable industry, Microsoft announced a $1B investment in Comcast. The size of the deal showed the importance of cable providers in the growth of the internet.

  2. Pipes + Content. One of the major issues surrounding TCI as it faced antitrust scrutiny was its ownership of multiple TV channels. Malone realized that content companies could make significant profits, especially when content was shown across multiple cable systems. TCI enjoyed the same Scale Economies Power as Netflix: once a cable channel produces content, any way to spread that content cost over more subscribers is a no-brainer. However, these content deals were worrisome given TCI’s massive cable presence (>8,000,000 subscribers). TCI would frequently demand that channels accept an equity investment to get access to TCI’s cable systems. “In exchange for getting on TCI systems, TCI drove a tough bargain. He demanded that cable networks either allow TCI to invest in them directly, or they had to give TCI discounts on price, since TCI bought in bulk. In return for most-favored-nation-status on price, TCI gave any programmer immediate access to nearly one-fifth of all US subscribers in a single stroke.” TCI would leverage its dominant position: we will carry your channel and make an investment, or you can miss out on 8 million subscribers. Channels frequently chose the former. Malone tried to head off antitrust action by creating Liberty Media, a spinoff that held TCI’s investments in cable programmers, offering a pseudo-separation from the telecom giant (although John Malone would completely control Liberty).

  3. Early, Not Wrong. Several times in history, companies or people have been early to an idea before it was feasible. Webvan pioneered the concept of an online grocery store that could deliver fresh groceries to your house; it raised $800M before flaming out in the public markets. Later, Instacart came along and is now worth over $30B. There are many examples: Napster/Spotify, MySpace/Facebook, Pets.com/Chewy, Go Corporation/iPad, and Loudcloud/AWS. The early idea in the telecom industry was the information superhighway. We’ve discussed this before, but the idea was that you would use your TV to access the outside world, including ordering pizza, accessing bank info, video calling friends, watching shows, and on-demand movies. The first instantiation of this idea was the QUBE, an expensive set-top box that gave users a plethora of additional interactive services. The QUBE was the flagship project of a joint venture between American Express and Warner Communications formed to build cable systems in the late 1970s. The QUBE was introduced in 1982 but cost way too much money to produce. With steep losses and mounting debt, Warner Amex Cable “abandoned the QUBE because it was financially infeasible.” In 1992, Malone delivered a now-famous speech on the future of the television industry, predicting that TVs would offer 500 channels to subscribers, with movies, communications, and shopping. Ten years after the QUBE’s failure, Time Warner tried to fulfill Malone’s promise by launching the Full-Service Network (FSN) with the same idea - offering a ton of services to users through a specialized hardware + software approach. The box was still insanely expensive (>$1,000 per box) because the company had to develop all the hardware and software itself. After significant losses, the project was shut down. It wasn’t until recently that TVs evolved into what so many people thought they might become during those exciting internet boom years of the late 1990s. In this example and several above, sometimes the idea is correct, but the medium or user experience is wrong. It turned out that people used a computer and the internet - not the TV - to shop, order food, or chat with friends. In 2015, Domino’s announced that you could now order pizza from your TV.

Business Themes

  1. Complicated Transactions. Perhaps the craziest deal of John Malone’s years of complex deal-making was the spinoff of Liberty Media. Liberty represented the content arm of TCI and held positions in famous channels like CNN and BET. Malone was intrigued by structuring a deal that would avoid taxes and give himself the most potential upside. To create this “artificial” upside, Malone engineered a rights offering, whereby existing TCI shareholders could purchase the right to swap 16 shares of TCI for 1 share of Liberty. Malone set the swap at a ridiculously high number of TCI shares, valuing Liberty at roughly $300 per share (a rough worked example follows this list). “It seemed like such a lopsided offer: 16 shares of TCI for just 1 share of Liberty? That valued Liberty at $300 a share, for a total market value of more than $600M by Malone’s reckoning. How could that be, analysts asked, given that Liberty posted a loss on revenue of a mere $52M for the pro-forma nine months? No one on Wall Street expected the stock to trade up to $300 anytime soon.” The complexity of the rights offering + spinoff made the transaction opaque enough that even seasoned investors were confused about how it all worked and declined to buy the rights. This meant Malone would have more control of the newly separate Liberty Media. At the same time, the spin had such low participation that shares were initially thinly traded. Once people realized the quality of the company’s assets, the stock price shot up, along with Malone’s net worth. Even crazier, Malone took a loan from the new Liberty Media to buy shares of the company, meaning he had just created a massive amount of value while putting up hardly any capital. For a man who loved complex deals, this was one of his most complex and most lucrative.

  2. Deal Maker Extraordinaire / Levered Rollups. John Malone and TCI loved deals and hated taxes. When TCI was building out cable networks, it acquired a new cable system almost every two weeks. Malone popularized using EBITDA (earnings before interest, taxes, depreciation, and amortization) as a proxy for real cash flow, in contrast to net income, which incorporates tax and interest payments. To Malone, debt could be used to fund acquisitions, limit taxes, and build scale. Once banks got comfortable with EBITDA, Malone went on an acquisition tear. “From 1984 to 1987, Malone had spent nearly $3B for more than 150 cable companies, placing TCI wires into one out of nearly every five homes with cable in the country, a penetration that was twice that of its next largest rival.” Throughout his career, he rallied many different cable leaders to find deals that worked for everyone. In 1986, when fellow industry titan Ted Turner ran into financial trouble, Malone reached out to Viacom leader Sumner Redstone to avoid letting Time Inc. (owner of HBO) buy Turner’s CNN. After a quick negotiation, 31 cable operators agreed to rescue Turner Broadcasting with a $550M investment, allowing Turner to maintain control and avoid a takeover. Later, in 1996, Malone led an industry consortium that included TCI, Comcast, and Cox to create a high-speed internet service called At Home. “At Home was responsible for designing the high-speed network and providing services such as e-mail, and a home page featuring news, entertainment, sports, and chat groups. Cable operators were required to upgrade their local systems to accommodate two-way transmission, as well as handle marketing, billing, and customer complaints, for which they would get 65% of the revenue.” At Home ended up buying the early internet search company Excite in a famous $7.5B deal that diluted the cable owners and eventually led to bankruptcy for the combined company. Malone’s instinct was always to work with a counterparty, because he genuinely believed a deal between two competitors provided better outcomes for everyone.

  3. Tracking Stocks. Malone popularized the use of tracking stocks: publicly traded shares designed to mirror the operating performance of a specific division or asset owned by a company. John Malone loved tracking stocks because they could be used to issue equity to finance operations and give investors access to specific divisions of a conglomerate while allowing the parent to maintain full control. While tracking stocks have been out of favor (except for Liberty Media, LOL), they were once highly regarded and even featured in the original planning of AT&T’s $48B purchase of TCI in 1998. AT&T financed its TCI acquisition with debt and new AT&T stock, diluting existing shareholders. AT&T CEO Michael Armstrong had initially agreed to use tracking stocks to separate TCI’s business from the declining but cash-flowing telephone business, but changed his mind after AT&T’s stock rocketed following the TCI deal announcement. Malone was angry with Armstrong’s reversal, and the book includes his explanation: “Here’s why you shouldn’t mess with it, Mike: You’ve just issued more than 400 million new shares of AT&T to buy a business that produces no earnings. It will be a huge money-loser for years, given how much you’ll spend on broadband. That’s going to sharply dilute your earnings per share, and your old shareholders like earnings. That will hurt your stock price, and then you can’t use stock to make more acquisitions, then you’re stuck. If you create a tracking stock to the performance of cable, you separate out the losses we produce and show better earnings for your main shareholders; and you can use the tracker to buy more cable interests in tax-free deals.” (A stylized sketch of this dilution arithmetic also follows this list.) Tracking stocks all but faded from existence following the internet bubble of the early 2000s due to their implementation difficulty and complexity, which can confuse shareholders and cause the businesses to trade at a large discount. This all raises the question, though - which companies could use tracking stocks today? Imagine an AWS tracker, a YouTube tracker, an Instagram tracker, or an Xbox tracker - all of these could allow their parent companies to attract new shareholders, do more specific tax-free mergers, and raise additional capital specific to a business unit.
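The rights-offering arithmetic from the first theme above is easier to follow with rough numbers. Here is a minimal sketch: the 16-to-1 swap ratio and the ~$300 and ~$600M figures come from the passage, while the TCI share price is a back-solved assumption used purely for illustration.

```python
# Rough arithmetic behind the Liberty Media rights offering described above.
# Assumption (hypothetical, for illustration): a TCI share price of ~$18.75,
# which is what makes 16 TCI shares equal the ~$300-per-share Liberty value
# cited in the text; the ~$600M total value is also taken from the passage.

swap_ratio = 16              # TCI shares exchanged per 1 Liberty share
tci_share_price = 18.75      # assumed TCI price (hypothetical)
liberty_total_value = 600e6  # ~$600M, per the passage

implied_liberty_per_share = swap_ratio * tci_share_price
implied_liberty_shares = liberty_total_value / implied_liberty_per_share

print(f"Implied Liberty value per share: ${implied_liberty_per_share:,.0f}")
print(f"Implied Liberty shares outstanding: ~{implied_liberty_shares:,.0f}")
# -> ~$300 per share and only ~2 million shares: a tiny float, which is part of
#    why the stock was thinly traded at first and why the upside accrued to the
#    few (notably Malone) who actually exercised the rights.
```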

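And here is a stylized sketch of the dilution argument Malone made to Armstrong in the third theme. Only the ~400 million new shares come from the quote; every other figure is a hypothetical placeholder, and the tracking-stock treatment is deliberately simplified.

```python
# Stylized version of Malone's earnings-dilution argument (illustrative numbers
# only; none of these figures come from the book except the ~400M new shares).

att_earnings = 5_000_000_000   # hypothetical AT&T earnings before the deal
att_shares = 3_000_000_000     # hypothetical AT&T shares outstanding before the deal
new_shares = 400_000_000       # new shares issued to pay for TCI (per the quote)
cable_losses = -1_500_000_000  # hypothetical early losses from the broadband build-out

eps_before = att_earnings / att_shares

# Without a tracker: the parent issues new common shares and consolidates the losses.
eps_consolidated = (att_earnings + cable_losses) / (att_shares + new_shares)

# With a tracker (simplified): cable's results are reported against the tracking
# stock, and cable deals can be paid for in tracker shares, so the classic
# shareholders' reported EPS is largely insulated.
eps_classic_with_tracker = att_earnings / att_shares

print(f"EPS before the deal:      {eps_before:.2f}")
print(f"EPS consolidated:         {eps_consolidated:.2f}")
print(f"EPS with a cable tracker: {eps_classic_with_tracker:.2f}")
```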
Dig Deeper

  • John Malone’s Latest Interview with CNBC (Nov 2021)

  • John Malone on LionTree’s Kindred Cast

  • A History of AT&T

  • Colorado Experience: The Cable Revolution

  • An Overview on Spinoffs

tags: John Malone, TCI, CNN, TBS, BET, Cable, Comcast, Microsoft, Netflix, Liberty Media, Napster, Spotify, MySpace, Facebook, Pets.com, Chewy, Go Corporation, iPad, Loudcloud, AWS, American Express, Warner, Time Warner, Domino's, Viacom, Sumner Redstone, Ted Turner, Bill Gates, At Home, Excite, AT&T, Michael Armstrong, Bob Magness, Instagram, YouTube, Xbox
categories: Non-Fiction
 

October 2020 - Working in Public: The Making and Maintenance of Open Source Software by Nadia Eghbal

This month we covered Nadia Eghbal’s instant classic about open-source software. Open-source software has been around since the late seventies, but only recently has it gained significant public and business attention.

Tech Themes

The four types of open source communities described in Working in Public

  1. Misunderstood Communities. Open source is frequently viewed as an overwhelmingly positive force for good - taking software and making it free for everyone to use. Many think of open source as community-driven, where everyone participates and contributes to making the software better. The theory is that so many eyeballs on and contributors to the software improve security, improve reliability, and increase distribution. In reality, open-source communities take the shape of the “90-9-1” rule and act more like social media than you might think. According to Wikipedia, the “90-9-1” rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. To show how this applies to open source communities, Eghbal cites a study by North Carolina State researchers: “One study found that in more than 85% of open source projects the research examined on Github, less than 5% of developers were responsible for 95% of code and social interactions.” These creators, contributors, and maintainers are developer influencers: “Each of these developers commands a large audience of people who follow them personally; they have the attention of thousands of developers.” Unlike Instagram and Twitch influencers, who often actively try to build their audiences, open-source developer influencers sometimes find the attention off-putting - they simply published something to help others and suddenly found themselves with actual influence. The challenging truth of open source is that core contributors and maintainers give significant amounts of their time and attention to their communities - often spending hours at a time responding to pull requests (requests for changes / new features) on GitHub. Evan Czaplicki’s insightful talk entitled “The Hard Parts of Open Source” speaks to this challenging dynamic. Evan created the open-source project Elm, a functional programming language that compiles to JavaScript, because he wanted to make functional programming more accessible to developers. As one of its core maintainers, he has repeatedly been hit with “Why don’t you just…” requests from non-contributing developers angrily asking why a feature wasn’t included in the latest release. As fastlane creator Felix Krause put it, “The bigger your project becomes, the harder it is to keep the innovation you had in the beginning of your project. Suddenly you have to consider hundreds of different use cases…Once you pass a few thousand active users, you’ll notice that helping your users takes more time than actually working on your project. People submit all kinds of issues, most of them aren’t actually issues, but feature requests or questions.” When you use open-source software, remember who is contributing and maintaining it - and the days and years poured into the project for the sole goal of increasing its utility for the masses.

  2. Git it? Git was created by Linus Torvalds in 2005. We talked last month about Torvalds, who also created the most famous open-source operating system, Linux. Git was born in response to a skirmish with Larry McVoy, the head of the company behind the proprietary tool BitKeeper, over the potential misuse of his product. Torvalds went on vacation for a week and hammered out the most dominant version control system today - git. Version control systems allow developers to work simultaneously on projects, committing any changes to a centralized branch of code. They also allow any change to be rolled back to an earlier version, which can be enormously helpful if a bug is found in the main branch (a minimal workflow sketch follows this list). Git ushered in a new wave of version control, but the open-source version was somewhat difficult to use for the untrained developer. Enter GitHub and GitLab - two companies built around the idea of making the git version control system easier for developers to use. GitHub came first, in 2007, offering a platform to host and share projects. The GitHub platform was free, but not open source - developers couldn’t build onto the hosting platform, only use it. GitLab started in 2014 to offer an alternative, fully open-sourced platform that allowed individuals to self-host a GitHub-like service, providing improved security and control. Because of GitHub’s first-mover advantage, however, it has become the dominant platform upon which developers build: “Github is still by far the dominant market player: while it’s hard to find public numbers on GitLab’s adoption, its website claims more than 100,000 organizations use its product, whereas GitHub claims more than 2.9 million organizations.” Developers find GitHub incredibly easy to use, creating an enormous wave of open source projects and code-sharing. The company added 10 million new users in 2019 alone - bringing the total to over 40 million worldwide. This growth prompted Microsoft to buy GitHub in 2018 for $7.5B. We are in the early stages of this development explosion, and it will be interesting to see how increased code accessibility changes the world over the next ten years.

  3. Developing and Maintaining an Ecosystem Forever. Open source communities are unique and complex - with different user and contributor dynamics. Eghbal segments the different types of open source communities into four buckets - federations, clubs, stadiums, and toys - characterized in the two-by-two matrix pictured above, based on contributor growth and user growth (a toy classifier follows this list). Federations are the pinnacle of open source software development - many contributors and many users, creating a vibrant ecosystem of innovative development. Clubs represent more niche and focused communities, including vertical-specific tools like the astronomy package Astropy. Stadiums are highly centralized but large communities - typically only a few contributors but a significant user base. It is up to these core contributors to lead the ecosystem, as opposed to decentralized federations, which have so many contributors they can go in all directions. Lastly, there are toys, which have low user growth and low contributor growth but may still be very useful projects. Interestingly, projects can shift in and out of these community types as they become more or less relevant. For example, developers from Yahoo open-sourced their Hadoop project, based on Google’s File System and MapReduce papers. The project slowly became huge, moving from a stadium to a federation, and formed subprojects around it, like Apache Spark. What’s interesting is that projects mature and change, and code can remain in production for years after a project’s day in the spotlight is gone. According to Eghbal, “Some of the oldest code ever written is still running in production today. Fortran, which was first developed in 1957 at IBM, is still widely used in aerospace, weather forecasting, and other computational industries.” These ecosystems can exist forever, but their costs (creation, distribution, and maintenance) are often hidden, especially the maintenance. The cost of creation and distribution has dropped significantly in the past ten years - with many of the world’s developers all working in the same ecosystem on GitHub - but that concentration has also increased the total cost of maintenance, and that maintenance cost can be significant. Bootstrap co-creator Jacob Thornton likens maintenance costs to caring for an old dog: “I’ve created endlessly more and more projects that have now turned [from puppies] into dogs. Almost every project I release will get 2,000, 3,000 watchers, which is enough to have this guilt, which is essentially like ‘I need to maintain this, I need to take care of this dog.’” Communities change from toys to clubs to stadiums to federations, but they may also change back as new tools are developed. Old projects still need to be maintained, and that maintenance falls to committed developers.
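For readers who haven’t touched version control, here is a minimal sketch of the commit-and-roll-back workflow described in the second theme above. It assumes the git command-line tool is installed and on the PATH; the file names and commit messages are placeholders.

```python
# Minimal git workflow sketch: initialize a repository, commit twice, then roll
# one file back to the earlier, known-good version.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given directory and return its output."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

with tempfile.TemporaryDirectory() as tmp:
    repo = pathlib.Path(tmp)
    git("init", cwd=repo)
    git("config", "user.email", "reader@example.com", cwd=repo)  # local identity for commits
    git("config", "user.name", "Reader", cwd=repo)

    readme = repo / "README.md"
    readme.write_text("v1: original feature\n")
    git("add", "README.md", cwd=repo)
    git("commit", "-m", "first version", cwd=repo)

    readme.write_text("v2: introduces a bug\n")
    git("commit", "-am", "second version", cwd=repo)

    # Roll the file back to the previous commit.
    git("checkout", "HEAD~1", "--", "README.md", cwd=repo)
    print(readme.read_text())  # prints the v1 contents again
```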

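And as a toy illustration of the two-by-two from the third theme, a tiny classifier for Eghbal’s four community types. The growth metrics and the “high” threshold are arbitrary assumptions, not anything from the book.

```python
# Toy classifier for Eghbal's four community types, keyed on user growth and
# contributor growth. The "high"/"low" threshold of 0.5 is an arbitrary assumption.

def classify(user_growth: float, contributor_growth: float, high: float = 0.5) -> str:
    """Return the community type for a project, per the 2x2 described above."""
    many_users = user_growth >= high
    many_contributors = contributor_growth >= high
    if many_users and many_contributors:
        return "federation"   # many contributors, many users (e.g., Hadoop at its peak)
    if many_users:
        return "stadium"      # few core contributors, large user base
    if many_contributors:
        return "club"         # niche but participatory (e.g., Astropy)
    return "toy"              # small on both axes, still possibly useful

print(classify(user_growth=0.9, contributor_growth=0.8))  # federation
print(classify(user_growth=0.9, contributor_growth=0.1))  # stadium
print(classify(user_growth=0.2, contributor_growth=0.7))  # club
print(classify(user_growth=0.1, contributor_growth=0.1))  # toy
```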
Business Themes

The four types of open core business models, as categorized by OSS Capital’s Joseph Jacks
  1. Revenue Model Matching. One of the earliest code-hosting platforms was SourceForge, a company founded in 1999. The company pioneered the idea of code-hosting - letting developers publish their code for easy download. It became famous for letting open-source developers use the platform free of charge. SourceForge was created by VA Software, an internet bubble darling that saw its stock price decimated when the bubble finally burst. The challenge with scaling SourceForge was a revenue model mismatch - VA Software made money with paid advertising, which allowed it to offer its tools to developers for free but meant its revenue was highly variable. When the company went public, it was still a small and unproven business, posting $17M in revenue against $31M in costs. The revenue model mismatch is starting to rear its head again, with traditional software-as-a-service (SaaS) recurring subscription models catching some heat. Many cloud service and API companies are pricing by usage rather than a fixed, high-margin subscription fee. This is the classic electric utility model - you only pay for what you use (a small comparison sketch follows this list). Snowflake CEO Frank Slootman (who formerly ran SaaS pioneer ServiceNow) commented: “I also did not like SaaS that much as a business model, felt it not equitable for customers.” Snowflake instead charges based on credits, which pay for usage. The issue with usage-based billing has traditionally been price transparency, which can be obscured by customer credit systems and hard-to-calculate pricing, as with Amazon Web Services. This revenue model mismatch was just one problem for SourceForge. As git became the dominant version control system, SourceForge was reluctant to support it, opting for its traditional tools instead. Pricing norms change and new technology comes out every day; it’s imperative that businesses have a strong grasp of the value they provide and align their revenue model with their customers so that a fair trade-off is created.

  2. Open Core Model. There has been enormous growth in open source businesses in the past few years, and they typically operate on an open core model. In the open core model, the company offers a free, normally feature-limited version of its software alongside a proprietary enterprise version with additional features. Developers might adopt the free version but hit usage limits or feature constraints, prompting them to purchase the paid version. The open-source “core” is often just that - freely available for anyone to download and modify; the core’s actual source code is normally published on GitHub, and developers can fork the project or do whatever they wish with that open core. The commercial product is normally closed source and not available for modification, giving the business a product to sell. Joseph Jacks, who runs Open Source Software (OSS) Capital, an investment firm focused on open source, describes four types of open core business models (pictured above), which differ based on how much of the software is open source. GitHub, interestingly, employs the “thick” model of being mostly proprietary, with only 10% of its software truly open-sourced. It’s funny that the site that hosts and facilitates the most open source development is itself proprietary. Jacks nails the most important question in the open core model: “How much stays open vs. How much stays closed?” The consequences can be dire for a business - open-source too much, and suddenly other companies can quickly recreate your tool. Many DevOps tools have experienced the perils of open source, with some companies losing control of the very projects they were supposed to facilitate. On the flip side, keeping more of the software closed source goes against the open-source ethos and can be viewed as selling out. The continuous delivery pipeline project Jenkins has struggled to satisfy its growing user base, leading the CEO of CloudBees, the company behind Jenkins, to publish a blog post entitled “Shifting Gears”: “But at the same time, the incremental, autonomous nature of our community made us demonstrably unable to solve certain kinds of problems. And after 10+ years, these unsolved problems are getting more pronounced, and they are taking a toll — segments of users correctly feel that the community doesn’t get them, because we have shown an inability to address some of their greatest difficulties in using Jenkins. And I know some of those problems, such as service instability, matter to all of us.” Striking this balance is incredibly tough, especially in a world of competing projects and finite development time and money in a commercial setting. Furthermore, large companies like AWS are taking open core tools like Elastic and MongoDB and recreating them as proprietary services (Elasticsearch Service and DocumentDB), prompting those companies’ CEOs to lash out, understandably. Commercializing open source software is a never-ending battle against proprietary players and yourself.

  3. Compensation for Open Source. Eghbal characterizes two types of funders of open source - institutions (companies, governments, universities) and individuals (usually developers who are direct users). Companies like to fund improved code quality, influence, and access to core projects. The largest contributors to open source projects are corporations like Microsoft, Google, Red Hat, IBM, and Intel. These corporations are big enough and profitable enough to hire individuals and let them strike a comfortable balance between time spent on commercial software and time spent on open source. This also functions as a marketing expense for the big corporations; big companies like having influencer developers on the payroll to get the company’s name out into the ecosystem. Evan You, who authored the JavaScript framework Vue.js, described company-backed open-source projects: “The thing about company-backed open-source projects is that in a lot of cases… they want to make it sort of an open standard for a certain industry, or sometimes they simply open-source it to serve as some sort of publicity improvement to help with recruiting… If this project no longer serves that purpose, then most companies will probably just cut it, or (in other terms) just give it to the community and let the community drive it.” In contrast to company-funded projects, developer-funded projects are often donation-based. With the rise of online payment tools like Stripe and Patreon, more and more funding is being directed to individual open source developers. Unfortunately, though, it is still hard for many open source developers to support themselves on individual contributions, especially if they work on multiple projects at the same time. Open source developer Sindre Sorhus explains: “It’s a lot harder to attract company sponsors when you maintain a lot of projects of varying sizes instead of just one large popular project like Babel, even if many of those projects are the backbone of the Node.js ecosystem.” Whether working at a company or as an individual developer, building and maintaining open source software takes significant time and effort and rarely leads to significant monetary compensation.
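To make the revenue-model-matching point from the first theme concrete, here is a small sketch comparing a flat subscription with usage-based billing. All prices and usage figures are invented for illustration.

```python
# Subscription vs. usage-based billing, with made-up numbers.
# The point: a flat fee is predictable for the vendor but can feel inequitable
# to light users, while usage-based pricing matches cost to value delivered
# but makes both the vendor's revenue and the customer's bill variable.

flat_monthly_fee = 1_000.00   # hypothetical subscription price per customer
price_per_credit = 2.50       # hypothetical usage price (e.g., per compute credit)
monthly_usage = [120, 480, 150, 900, 60, 400]  # credits consumed by six customers

for i, credits in enumerate(monthly_usage, start=1):
    usage_bill = credits * price_per_credit
    print(f"Customer {i}: subscription ${flat_monthly_fee:,.0f} vs usage ${usage_bill:,.0f}")

subscription_revenue = flat_monthly_fee * len(monthly_usage)
usage_revenue = sum(monthly_usage) * price_per_credit
print(f"Vendor revenue: subscription ${subscription_revenue:,.0f} vs usage ${usage_revenue:,.0f}")
```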

Dig Deeper

  • List of Commercial Open Source Software Businesses by OSS Capital

  • How to Build an Open Source Business by Peter Levine (General Partner at Andreessen Horowitz)

  • The Mind Behind Linux (a talk by Linus Torvalds)

  • What is open source - a blog post by Red Hat

  • Why Open Source is Hard by PHP Developer Jose Diaz Gonzalez

  • The Complicated Economy of Open Source

tags: Github, Gitlab, Google, Twitch, Instagram, Elm, Javascript, Open Source, Git, Linus Torvalds, Linux, Microsoft, MapReduce, IBM, Fortran, Node, Vue, SourceForge, VA Software, Snowflake, Frank Slootman, ServiceNow, SaaS, AWS, DevOps, CloudBees, Jenkins, Intel, Red Hat, batch2
categories: Non-Fiction
 

July 2020 - Innovator's Dilemma by Clayton Christensen

This month we review the technology classic, the Innovator’s Dilemma, by Clayton Christensen. The book attempts to answer the age-old question: why do dominant companies eventually fail?

Tech Themes

  1. The Actual Definition of Disruptive Technology. Disruption is a term that is frequently thrown around in Silicon Valley circles. Every startup thinks its technology is disruptive, meaning it changes how the customer currently performs a task or service. The actual definition, discussed in detail throughout the book, is much more specific. Christensen re-emphasizes the distinction in a 2015 Harvard Business Review article: "Specifically, as incumbents focus on improving their products and services for their most demanding (and usually most profitable) customers, they exceed the needs of some segments and ignore the needs of others. Entrants that prove disruptive begin by successfully targeting those overlooked segments, gaining a foothold by delivering more-suitable functionality—frequently at a lower price. Incumbents, chasing higher profitability in more-demanding segments, tend not to respond vigorously. Entrants then move upmarket, delivering the performance that incumbents' mainstream customers require, while preserving the advantages that drove their early success. When mainstream customers start adopting the entrants' offerings in volume, disruption has occurred." The book posits that there are generally two types of innovation: sustaining and disruptive. While disruptive innovation focuses on low-end or new, small-market entry, sustaining innovation merely continues markets along their already determined axes. For example, Christensen discusses the disk drive industry, mapping out the jumps that packed more memory and power into each subsequent product release. For each disruptive jump, there is a slew of sustaining jumps that improve product performance for existing customers but don't necessarily turn non-customers into customers. It is only when new use cases emerge, like rugged, portable disk usage and the arrival of the PC, that disruption occurs. Understanding the specific definition can help companies and individuals cut through muddled tech messaging; Uber, for example, is shown to be a sustaining technology because its market already existed and the company didn't offer lower prices or a new business model. Understanding the intricacies of the definition can help incumbents spot disruptive competitors.

  2. Value Networks. Value networks are an underappreciated and somewhat confusing topic covered in The Innovator's Dilemma's early chapters. A value network is defined as "The context within which a firm identifies and responds to customers' needs, solves problems, procures input, reacts to competitors, and strives for profit." A value network seems all-encompassing on the surface. In reality, a value network serves to simplify the lens through which an organization must make complex decisions every day. Shown as a nested product architecture, a value network attempts to map where a company's product interacts with other products. By distilling the product down to its most atomic components (literally computer hardware), we can see all of the considerations that impact a business. Once we have this holistic view, we can consider the decisions and tradeoffs that face an organization every day. The takeaway is that organizations care about different levels of performance for different products. For example, when looking at cloud computing services from AWS, Azure, or GCP, we see Amazon EC2 instances, Azure VMs, and Google Cloud VMs with different operating systems, different purposes (general, compute, memory), and different sizes. General-purpose might be fine for basic enterprise applications, while gaming applications might need compute-optimized VMs, and real-time big data analytics may need memory-optimized ones. While it gets somewhat forgotten throughout the book, this point means that an organization focused on producing only compute-intensive machines may not be the best at producing memory-intensive ones, because its customers may not have a use for them. In the book's example, some customers of larger-capacity drive makers looked at smaller drives and saw no need for them; in reality, there was massive demand in the rugged, portable market for smaller disks. When approaching disruptive innovation, it's essential to recognize your organization's current value network so that you don't target new technologies at those who don't need them.

  3. Product Commoditization. Christensen spends a lot of time describing the dynamics of the disk drive industry, where companies continually supplied increasingly smaller drives with better performance. Christensen's description of commoditization is very interesting: "A product becomes a commodity within a specific market segment when the repeated changes in the basis of competition completely play themselves out, that is, when market needs on each attribute or dimension of performance have been fully satisfied by more than one available product." At this point, products begin competing primarily on price. In the disk drive industry, companies first competed on capacity, then on size, then on reliability, and finally on price. This price war is reminiscent of the current state of the Continuous Integration / Continuous Deployment (CI/CD) market, a subsegment of DevOps software. Companies in the space, including GitHub, CircleCI, GitLab, and others, are now competing primarily on price to win new business. Each of the cloud providers has similar technologies native to its public cloud offering (AWS CodePipeline and CloudFormation, GitHub Actions, Google Cloud Build), and they are giving it away for free because of their scale. The building block of CI/CD software is git, the open-source version control system created by Linux creator Linus Torvalds. With all the providers leveraging a massive open-source project, there is little room for true differentiation. Christensen even says: "It may, in fact, be the case that the product offerings of competitors in a market continue to be differentiated from each other. But differentiation loses its meaning when the features and functionality have exceeded what the market demands." Only time will tell whether these companies can pivot into burgeoning, highly differentiated technologies.

Business Themes

  1. Resources-Processes-Values (RPV) Framework. The RPV framework is a powerful lens for understanding the challenges that large businesses face. Companies have resources (people, assets, technology, product designs, brands, information, cash, relationships with customers, etc.) that can be transformed into greater-value products and services. The ways an organization goes about converting these resources are its processes. These processes can be formal (documented sales strategies, for example) or informal (culture and habitual routines). Processes are a big reason organizations struggle to deal with emerging technologies: because culture and habit are ingrained in the organization, the same processes used in a mature, slow-growing market may be applied to a fast-growing, dynamic sector. Christensen puts it best: "This means the very mechanisms through which organizations create value are intrinsically inimical to change." Lastly, companies have values, or "the standards by which employees make prioritization decisions." When there is a mismatch between an organization's resources, processes, and values and the product or market it is chasing, it is rare that the business can compete successfully in the disruptive market. To see this misalignment in action, Christensen describes a meeting with a CEO who had identified the disruptive change happening in the disk-drive market and had gotten a product to market to meet the growing demand. In response to a publication showing the fast growth of the market, the CEO lamented to Christensen: "I know that's what they think, but they're wrong. There isn't a market. We've had that drive in our catalog for 18 months. Everyone knows we've got it, but nobody wants it." The issue was not the product or market demand, but the organization's values. As Christensen continues, "But among the employees, there was nothing about an $80 million, low-end market that solved the growth and profit problems of a multi-billion dollar company – especially when capable competitors were doing all they could to steal away the customers providing those billions. And way at the other end of the company there was nothing about supplying prototype quantities of 1.8-inch drives to an automaker that solved the problem of meeting the 1994 quotas of salespeople whose contacts and expertise were based so solidly in the computer industry." The CEO cared about the product, but his team did not. The RPV framework helps evaluate large companies and the challenges they face in launching new products.

  2. How to Manage Through Technological Change. Christensen points out three primary ways of managing through disruptive technology change: 1. "Acquire a different organization whose processes and values are a close match with the new task." 2. "Try to change the processes and values of the current organization." 3. "Separate out an independent organization and develop within it the new processes and values that are required to solve the new problem." Acquisitions are a way to get out ahead of disruptive change. There are many examples, but two recent ones come to mind: Microsoft's acquisition of GitHub and Facebook's acquisition of Instagram. Microsoft paid a whopping $7.5B for GitHub in 2018, when GitHub was rumored to be at roughly $200M in revenue (a 37.5x revenue multiple!). GitHub was undoubtedly a mature business with a great product, but it didn't have a ton of enterprise adoption. Diane Greene at Google Cloud tried to get Sundar Pichai to pay more, but he said no. GitHub has changed Azure's position within the market and continued Microsoft's anti-Amazon strategy of pushing open-source technology. In contrast to the GitHub acquisition, Instagram had only 13 employees when it was acquired for $1B. Zuckerberg saw the threat the social network represented to Facebook, and today the acquisition is regularly touted as one of the best ever. Instagram was developing a social network based solely on photographs, right at the time every person suddenly had an excellent smartphone camera in their pocket. The acquisition occurred right when the market was ballooning, and Facebook capitalized on that growth. The second way of managing technological change is by changing cultural norms. This is rarely successful, because you are fighting against all of the processes and values deeply embedded in the organization. Indra Nooyi cited a desire to move faster on culture as one of her biggest regrets as a young executive: "I’d say I was a little too respectful of the heritage and culture [of PepsiCo]. You’ve got to make a break with the past. I was more patient than I should’ve been. When you know you have to make a change, at some point you have to say enough is enough. The people who have been in the company for 20-30 years pull you down. If I had to do it all over again, I might have hastened the pace of change even more." Lastly, Christensen prescribes creating an independent organization matched to the resources, processes, and values that the new market requires. Three great examples with different flavors of this come to mind. First, Cisco developed a spin-in practice whereby it would take members of its organization and fund them to start a new company to develop a new product. The spin-ins worked for a time but caused major cultural issues. Second, as we've discussed, one of the key reasons AWS was born was that Chris Pinkham was in South Africa, thousands of miles away from Amazon corporate in Seattle; this distance and that team's focus allowed it to come up with a major advance in computing. Lastly, Mastercard started Mastercard Labs a few years ago. CEO Ajay Banga told his team: "I need two commercial products in three years." He doesn't tell his CFO the Labs budget, and he is the only person from his executive team who interacts with the business. This separation of resources, processes, and values allows those smaller organizations to be more nimble in finding emerging technology products and markets.

  3. Discovering Emerging Markets. The resources-processes-values framework can also show us why established firms fail to address emerging markets. Established companies rely on formal budgeting and forecasting processes whereby resources are allocated based on market estimates and revenue forecasts. Christensen highlights several important factors for tackling emerging markets, including a focus on ideas, failure, and learning. Underpinning all of these ideas is the impossibility of predicting the scale and growth rate of disruptive technologies: "Experts' forecasts will always be wrong. It is simply impossible to predict with any useful degree of precision how disruptive products will be used or how large their markets will be." Because of this challenge, relying too heavily on these estimates to underpin financial projections can cause businesses to view initial market development as a failure or as not worthy of the company's time. When HP launched a new 1.3-inch disk drive, which could be embedded in PDAs, the company mandated that its revenues had to scale up to $150M within three years, in line with market estimates. That market never materialized, and the initiative was abandoned as a failed investment. Christensen argues that when facing disruptive technologies, planning has to come after action, and thus strategic and financial planning must be discovery-based rather than execution-based. Companies should focus on learning their customers' needs and the right business model to attack the problem, rather than planning to execute their initial vision. As he puts it: "Research has shown, in fact, that the vast majority of successful new business ventures abandoned their original business strategies when they began implementing their initial plans and learned what would and would not work." One big fan of Christensen's work is Jeff Bezos, and it's easy to see why, given Amazon's focus on releasing new products in this discovery-driven manner. The pace of product releases is simply staggering (almost one per day). Bezos even talked about this exact issue in his 2016 shareholder letter: "The senior team at Amazon is determined to keep our decision-making velocity high. Speed matters in business – plus a high-velocity decision making environment is more fun too. We don't know all the answers, but here are some thoughts. First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you're wrong? I wrote about this in more detail in last year's letter. Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you're probably being slow." Amazon is one of the first large organizations to truly embrace this decision-making style, and clearly, the results speak for themselves.

Dig Deeper

  • What Jeff Bezos Tells His Executives To Read

  • Github Cuts Subscription Price by More Than Half

  • Ajay Banga Opening Address at MasterCard Innovation Forum 2014

  • Clayton Christensen Describing Disruptive Innovation

  • Why Cisco’s Spin-Ins Never Caught On

tags: Amazon, Google Cloud, Microsoft, Azure, Github, Gitlab, CircleCI, Pepsi, Jeff Bezos, Indra Nooyi, Mastercard, Ajay Banga, HP, Uber, RPV, Facebook, Instagram, Cisco, batch2
categories: Non-Fiction
 
