• Tech Book of the Month
  • Archive
  • Recommend a Book
  • Choose The Next Book
  • Sign Up
  • About
  • Search
Tech Book of the Month

April 2021 - Innovator's Solution by Clayton Christensen and Michael Raynor

This month we take another look at disruptive innovation in the companion to Clayton Christensen’s Innovator’s Dilemma, our July 2020 book. The book crystallizes the types of disruptive innovation and provides frameworks for how incumbents can introduce or combat these innovations. It was a pleasure to read and will serve as a great reference for the future.

Tech Themes

  1. Integration and Outsourcing. Today, technology companies rely on a variety of software tools and open-source components to build their products. When you stitch all of these components together, you get the full product architecture. A great example is GitLab, an SMB DevOps provider: it uses Postgres for its relational database, Redis for caching, NGINX for request routing, Sentry for monitoring and error tracking, and so on. These subsystems interact with one another to form the powerful GitLab product, and the points where they meet are called interfaces. The key product development question for companies is: “Which things do I build internally and which do I outsource?” A simple answer offered by many MBA students is “Outsource everything that is not part of your core competence.” As Clayton Christensen points out, “The problem with core-competence/not-your-core-competence categorization is that what might seem to be a non-core activity today might become an absolutely critical competence to have mastered in a proprietary way in the future, and vice versa.” A great example that we’ve discussed before is IBM’s decision to go with Microsoft DOS for its operating system and Intel for its microprocessor. At the time, IBM thought it was making a strategic decision to outsource things that were not within its core competence, but it inadvertently handed almost all of the personal computing industry’s profits to Intel and Microsoft. Competitors copied IBM’s modular approach, and the whole industry slugged it out on price. Whether to outsource really depends on what might be important in the future. That is difficult to predict, so the question of integration vs. outsourcing comes down to the state of the product and market itself: is this product “not good enough” yet? If the answer is yes, then a proprietary, integrated architecture is likely needed just to make the product work for customers.
Over time, as competitors enter the market and the fully integrated platform becomes more commoditized, the individual subsystems become increasingly important competitive drivers. So the decision to outsource or build internally must be based on the state of the product and the market it’s attacking.
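To make the idea of interfaces concrete, here is a toy Python sketch (the names `Cache`, `InMemoryCache`, and `cached_lookup` are illustrative, not taken from GitLab's codebase). The rest of the product codes against the interface, so the build-or-outsource decision - an in-house dictionary today, Redis tomorrow - can change without touching any calling code:

```python
from abc import ABC, abstractmethod


class Cache(ABC):
    """The interface: callers depend only on this contract, not on any vendor."""

    @abstractmethod
    def get(self, key: str):
        ...

    @abstractmethod
    def set(self, key: str, value: str) -> None:
        ...


class InMemoryCache(Cache):
    """Built in-house: a plain dict is fine while the product is young."""

    def __init__(self):
        self._data = {}

    def get(self, key: str):
        return self._data.get(key)

    def set(self, key: str, value: str) -> None:
        self._data[key] = value


# A RedisCache(Cache) wrapping a Redis client could be dropped in later
# without changing any caller -- that swap is the outsourcing decision.

def cached_lookup(cache: Cache, key: str, compute) -> str:
    """Return the cached value for `key`, computing and storing it on a miss."""
    hit = cache.get(key)
    if hit is not None:
        return hit
    value = compute()
    cache.set(key, value)
    return value
```

The second call for the same key never re-runs the expensive computation, regardless of which `Cache` implementation is plugged in.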

  2. Commoditization within Stacks. The above point leads to a counter-intuitive idea: how companies fall into the commoditization trap. This happens through overshooting, where companies create products that are too good (who would have thought that doing your job really well could cause customers to leave!). Christensen describes this through the lens of a salesperson: “‘Why can’t they see that our product is better than the competition? They’re treating it like a commodity!’ This is evidence of overshooting…there is a performance surplus. Customers are happy to accept improved products, but unwilling to pay a premium price to get them.” At this point, the things customers demand flip - they become willing to pay premium prices for innovations along a new trajectory of performance, most likely speed, convenience, and customization. “The pressure of competing along this new trajectory of improvement forces a gradual evolution in product architectures, away from the interdependent, proprietary architectures that had the advantage in the not-good-enough era toward modular designs in the era of performance surplus. In a modular world, you can prosper by outsourcing or by supplying just one element.” This cycle from integration to modularization and back is fascinating. As an example of modularization, take Confluent, the company behind the open-source streaming project Apache Kafka. Confluent offers a real-time communications service that lets companies stream data (as events) rather than batching large data transfers. Its product is often a sub-system underpinning real-time applications, like providing data to traders at Citigroup. Clearly, the basis of competition in trading has pivoted over the years as more and more banks offer the service.
Companies are prioritizing a new axis, speed, to differentiate among competing services, and when speed is the basis of competition, you use Confluent and Kafka to beat out the competition. Now fast-forward five years and assume all banks use Kafka and Confluent for their traders; the modular sub-system is thus commoditized. What happens? I’d posit that the axis would shift again, perhaps toward convenience or customization, where traders want specific information displayed on a mobile phone or tablet. The fundamental idea is that “Disruption and commoditization can be seen as two sides of the same coin. That’s because the process of commoditization initiates a reciprocal process of de-commoditization [somewhere else in the stack].”
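As a toy illustration of the streaming-versus-batching difference (plain Python, no Kafka client; the function names are made up for this sketch), compare a batch pipeline, which delivers data only once a full batch accumulates, with a stream pipeline, which delivers each event the moment it occurs:

```python
from typing import Callable, Dict, List

Delivery = Callable[[List[Dict]], None]


def batch_pipeline(events: List[Dict], batch_size: int, deliver: Delivery) -> None:
    """Batching: downstream consumers only see data when a full batch lands."""
    buffer: List[Dict] = []
    for event in events:
        buffer.append(event)
        if len(buffer) == batch_size:
            deliver(buffer)
            buffer = []
    if buffer:  # flush the partial batch left over at the end of the run
        deliver(buffer)


def stream_pipeline(events: List[Dict], deliver: Delivery) -> None:
    """Streaming (the Kafka model): every event is delivered as it occurs."""
    for event in events:
        deliver([event])
```

With ten trades and a batch size of four, the batch pipeline hands the consumer data three times, while the stream pipeline delivers ten times - which is exactly why a trader watching prices wants the streaming model.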

  3. The Disruptor Becomes the Disrupted. Disruption is a relative term. As we’ve discussed previously, disruption is often mischaracterized as startups entering markets and challenging incumbents. Disruption is really a focused and contextual concept whereby products that are “not good enough” by market standards enter a market with a simpler, more convenient, or less expensive offering. These products and markets are often dismissed by incumbents, or even ceded by market leaders as those leaders move up-market to chase even bigger customers. It’s fascinating to watch the disruptive become the disrupted. A great example is department stores: initially, Macy’s offered a massive selection that couldn’t be found in any single store, and customers loved it. It did this by turning inventory three times per year at 40% gross margins, for a 120% return on capital invested in inventory. In the 1960s, Walmart and Kmart attacked the full-service department stores by offering a similar selection at much cheaper prices. They did this by setting up a value system whereby they made 23% gross margins but turned inventories five times per year, enabling them to earn roughly the industry’s golden 120% return on capital invested in inventory. Full-service department stores decided not to compete against these lower-gross-margin products and shifted more floor space to beauty and cosmetics, which offered even higher gross margins (55%) than the 40% they were used to. This meant they could increase their return on capital invested in inventory and their profits while avoiding a competitive threat. The process continued, with discount stores eventually pushing Macy’s out of most categories until Macy’s had nowhere to go. All of a sudden, the initially disruptive department stores had become disrupted. We see this in technology markets as well. I’m not 100% sure this qualifies, but think about Salesforce and Oracle.
Marc Benioff spent a number of years at Oracle and left to start Salesforce, which pioneered selling subscription cloud software on a per-seat revenue model - a much cheaper option compared to traditional Oracle/Siebel CRM software. Salesforce was initially adopted by smaller customers that didn’t need the feature-rich platform offered by Oracle. Oracle dismissed Salesforce as competition even as Oracle CEO Larry Ellison seeded Salesforce and sat on its board. Today, Salesforce is a $200B company and briefly passed Oracle in market cap a few months ago. But now Salesforce has raised its prices and mostly targets large enterprise buyers to hit its ambitious growth targets. Down-market competitors like HubSpot have entered with cheaper solutions and more fully integrated marketing tools to help smaller businesses that aren’t ready for a fully featured Salesforce platform. Disruption is always contextual, and it never stops.
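The retail arithmetic above reduces to a one-line formula: return on capital invested in inventory is gross margin earned per turn times turns per year (a simplification used here purely for illustration):

```python
def return_on_inventory_capital(gross_margin: float, inventory_turns: float) -> float:
    """Annual return on the capital tied up in inventory: margin per turn x turns per year."""
    return gross_margin * inventory_turns


# Full-service department store: 40% margins x 3 turns -> 1.20, a 120% return
# Discounter model:              23% margins x 5 turns -> 1.15, close to the same target
```

The striking thing is that two structurally different businesses - high margin with slow turns versus low margin with fast turns - can reach roughly the same return on inventory capital, which is why the discounters' attack worked.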

Business Themes

  1. Low-End-Market vs. New-Market Disruption. There are two established types of disruption: low-end-market (down-market) and new-market. Low-end-market disruption offers performance that is “not good enough” along traditional lines and targets overserved customers in the low end of the mainstream market. It typically utilizes a new operating or financial approach with structurally different margins than up-market competitors. Amazon.com is a quintessential low-end-market disruptor compared to traditional bookstores, offering prices so low they angered book publishers while giving customers the unmatched convenience of purchasing books online. In contrast, Robinhood is a great example of new-market disruption. Traditional discount brokerages like Charles Schwab and Fidelity had been around for a while (themselves disruptors of full-service models like Morgan Stanley Wealth Management). But Robinhood targeted a group of people that weren’t consuming in the market, namely teens and millennials, and did it in an easy-to-use app with a much better user interface than Schwab’s or Fidelity’s. Robinhood also pioneered new pricing with zero-fee trading and made revenue via a new financial approach, payment for order flow (PFOF). Under PFOF, large market makers, like Citadel Securities, pay Robinhood to route customer orders to them, and in turn help optimize customers’ buying and selling prices. When approaching big markets, it’s important to ask: Is this targeted at a non-consumer today, or am I competing at a structurally lower margin with a new financial model and a “not quite good enough” product? The answer determines whether you are providing a low-end-market disruption or a new-market disruption.

  2. Jobs To Be Done. The Jobs to Be Done framework was one of the most important frameworks Clayton Christensen ever introduced. Marketers typically use advertising platforms like Facebook and Google to target specific demographics with their ads. These segments are narrowly defined: “Males over 55, living in New York City, with household income above $100,000.” The issue with this categorization method is that while these attributes may be correlated with a product purchase, customers do not line up neatly with marketers’ expectations and purchase the products their attributes predict. There may be a correlation, but simply targeting certain demographics does not yield a great result. Marketers need to understand why the customer is adopting the product, and this is where the Jobs to Be Done framework comes in. As Christensen describes it, “Customers - people and companies - have ‘jobs’ that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can ‘hire’ to get the job done. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently, and inexpensively as possible.” Christensen zeroes in on the contextual adoption of products; it is the circumstance, not the demographics, that matters most. He describes ways to view competition and feature development through the Jobs to Be Done lens using the BlackBerry (later disrupted by the iPhone) as an example. While the immature smartphone market was seeing feature competition from Microsoft, Motorola, and Nokia, BlackBerry maker RIM came out with a simple-to-use device that allowed for short productivity bursts whenever time was available.
This meant RIM leaned into features that competed not with other smartphone providers (like better cellular reception) but enabled those easy “productive” sessions: email, Wall Street Journal updates, and simple games. The BlackBerry was later disrupted by the iPhone, which offered more interesting applications in an easier-to-use package. Interestingly, the first iPhone shipped without an app store (as a proprietary, interdependent product) and was viewed as not good enough for work purposes, allowing the BlackBerry to co-exist; RIM’s management even dismissed the iPhone as a competitor initially. It wasn’t long until the iPhone caught up and eventually surpassed the BlackBerry as the world’s leading mobile phone.

  3. Brand Strategies. Companies may choose to serve customers in a number of different circumstances and address a number of Jobs to Be Done, so it’s important that a company establish specific ways of communicating the circumstance to the customer. Branding is powerful - something Warren Buffett, Terry Smith, and Clayton Christensen have all recognized as a durable growth driver. As Christensen puts it: “Brands are, at the beginning, hollow words into which marketers stuff meaning. If a brand’s meaning is positioned on a job to be done, then when the job arises in a customer’s life, he or she will remember the brand and hire the product. Customers pay significant premiums for brands that do a job well.” So what can a large corporation do when faced with a disruptive challenger to its branding turf? It’s simple: add a word to its leading brand, targeted at the circumstance in which a customer might find themselves. Think about Marriott, one of the leading hotel chains. It offers a number of hotel brands: Courtyard by Marriott for business travel, Residence Inn by Marriott for a home away from home, The Ritz-Carlton for high-end luxurious stays, and Marriott Vacation Club for resort destinations. Each brand is targeted at a different Job to Be Done, and customers intuitively understand what the brands stand for based on experience or advertising. A great technology example is Amazon Web Services (AWS), the cloud computing division of Amazon.com. Amazon pioneered the cloud, and rather than launch under the Amazon.com brand, which might have confused its normal e-commerce customers, it created a completely new brand targeted at a different set of buyers and problems while maintaining the quality and recognition Amazon had become known for. Another great retail example is the SNKRS app released by Nike.
Nike understands that some customers are sneakerheads who want to know the latest about every Nike shoe drop, so Nike created a distinct branded app, SNKRS, that delivers news and updates on the latest, trendiest sneakers. These buyers might not be interested in logging into the main Nike app and might grow frustrated sifting through all of the different types of apparel Nike offers just to find new shoes. The SNKRS app gives a new set of consumers an easy way to find what they are looking for (convenience), which benefits Nike’s core business. Branding is powerful, and understanding the Job to Be Done helps focus the right brand on the right job.

Dig Deeper

  • Clayton Christensen’s Overview on Disruptive Innovation

  • Jobs to Be Done: 4 Real-World Examples

  • A Peek Inside Marriott’s Marketing Strategy & Why It Works So Well

  • The Rise and Fall of Blackberry

  • Payment for Order Flow Overview

  • How Commoditization Happens

tags: Clayton Christensen, AWS, Nike, Amazon, Marriott, Warren Buffett, Terry Smith, Blackberry, RIM, Microsoft, Motorola, iPhone, Facebook, Google, Robinhood, Citadel, Schwab, Fidelity, Morgan Stanley, Oracle, Salesforce, Walmart, Macy's, Kmart, Confluent, Kafka, Citigroup, Intel, Gitlab, Redis
categories: Non-Fiction
 

October 2020 - Working in Public: The Making and Maintenance of Open Source Software by Nadia Eghbal

This month we covered Nadia Eghbal’s instant classic about open-source software. Open-source software has been around since the late seventies, but only recently has it gained significant public and business attention.

Tech Themes

The four types of open source communities described in Working in Public


  1. Misunderstood Communities. Open source is frequently viewed as an overwhelmingly positive force for good - taking software and making it free for everyone to use. Many think of open source as community-driven, where everyone participates and contributes to making the software better. The theory is that many eyeballs and contributors improve the software’s security and reliability and increase its distribution. In reality, open-source communities follow the “90-9-1” rule and act more like social media than you might think. According to Wikipedia, the “90–9–1” rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. To show how this applies to open source communities, Eghbal cites a study by North Carolina State researchers: “One study found that in more than 85% of open source projects the research examined on Github, less than 5% of developers were responsible for 95% of code and social interactions.” These creators, contributors, and maintainers are developer influencers: “Each of these developers commands a large audience of people who follow them personally; they have the attention of thousands of developers.” Unlike Instagram and Twitch influencers, who often actively try to build their audiences, open-source developer influencers sometimes find the attention off-putting - they simply published something to help others and suddenly found themselves with actual influence. The challenging truth of open source is that core contributors and maintainers give significant amounts of their time and attention to their communities - often spending hours at a time responding to pull requests (requests for changes or new features) on GitHub. Evan Czaplicki’s insightful talk, “The Hard Parts of Open Source,” speaks to this challenging dynamic.
Evan created the open-source project Elm, a functional programming language that compiles to JavaScript, because he wanted to make functional programming more accessible to developers. As one of its core maintainers, he has repeatedly been hit with “Why don’t you just…” requests from non-contributing developers angrily asking why a feature wasn’t included in the latest release. As fastlane creator Felix Krause put it, “The bigger your project becomes, the harder it is to keep the innovation you had in the beginning of your project. Suddenly you have to consider hundreds of different use cases…Once you pass a few thousand active users, you’ll notice that helping your users takes more time than actually working on your project. People submit all kinds of issues, most of them aren’t actually issues, but feature requests or questions.” When you use open-source software, remember who is contributing to and maintaining it - and the days and years poured into the project for the sole goal of increasing its utility for the masses.
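The "5% of developers produce 95% of the activity" finding is easy to check on any repository's commit data. Here is a small sketch with synthetic numbers (both the helper function and the data are hypothetical, not taken from the study):

```python
def top_contributor_share(commits_by_dev: dict, top_fraction: float = 0.05) -> float:
    """Fraction of all commits made by the most active `top_fraction` of developers."""
    counts = sorted(commits_by_dev.values(), reverse=True)
    top_n = max(1, int(len(counts) * top_fraction))
    return sum(counts[:top_n]) / sum(counts)


# Synthetic project: 5 core maintainers doing the heavy lifting,
# plus 95 drive-by contributors with one commit each.
devs = {**{f"core{i}": 380 for i in range(5)},
        **{f"visitor{i}": 1 for i in range(95)}}
```

With these made-up numbers, the top 5% of developers account for over 95% of commits - the shape the study describes.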

  2. Git it? Git was created by Linus Torvalds in 2005. We talked about Torvalds last month - he also created the most famous open-source operating system, Linux. Git was born out of a skirmish with Larry McVoy, the head of the proprietary tool BitKeeper, over the potential misuse of his product. Torvalds went on vacation for a week and hammered out the most dominant version control system today: git. Version control systems allow developers to work simultaneously on projects, committing changes to a centralized branch of code. They also allow changes to be rolled back to earlier versions, which can be enormously helpful when a bug is found in the main branch. Git ushered in a new wave of version control, but the open-source tool was somewhat difficult for untrained developers to use. Enter GitHub and GitLab - two companies built around the idea of making the git version control system easier for developers to use. GitHub came first, in 2007, offering a platform to host and share projects. The GitHub platform was free, but not open source - developers couldn’t build onto the hosting platform, only use it. GitLab started in 2014 to offer an alternative, fully open-source platform that allowed individuals to self-host a GitHub-like tracking program, providing improved security and control. Because of GitHub’s first-mover advantage, however, it has become the dominant platform upon which developers build: “Github is still by far the dominant market player: while it’s hard to find public numbers on GitLab’s adoption, its website claims more than 100,000 organizations use its product, whereas GitHub claims more than 2.9 million organizations.” Developers find GitHub incredibly easy to use, creating an enormous wave of open source projects and code-sharing. The company added 10 million new users in 2019 alone, bringing the total to over 40 million worldwide. This growth prompted Microsoft to buy GitHub in 2018 for $7.5B.
We are in the early stages of this development explosion, and it will be interesting to see how increased code accessibility changes the world over the next ten years.
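For readers who haven't used a version control system, a heavily simplified toy model (a sketch only - nothing like git's real object store) shows the two ideas in play: content-addressed snapshots committed to a branch, and rolling the branch back to an earlier snapshot when a bug lands:

```python
import hashlib
import json


class TinyVCS:
    """A toy model of what a version control system does: snapshots on a branch."""

    def __init__(self):
        self.commits = {}   # commit id -> (parent id, snapshot of files)
        self.head = None    # id of the latest commit on the branch

    def commit(self, files: dict) -> str:
        """Record a snapshot of `files`, chained to the previous commit."""
        payload = json.dumps([self.head, files], sort_keys=True).encode()
        cid = hashlib.sha1(payload).hexdigest()[:8]
        self.commits[cid] = (self.head, dict(files))
        self.head = cid
        return cid

    def rollback(self) -> dict:
        """Move the branch back to the parent commit and return its files."""
        parent, _ = self.commits[self.head]
        self.head = parent
        return self.commits[parent][1] if parent else {}
```

Commit a good version, commit a buggy one, and `rollback()` restores the earlier snapshot - the everyday workflow that made git indispensable.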

  3. Developing and Maintaining an Ecosystem Forever. Open source communities are unique and complex, with different user and contributor dynamics. Eghbal segments open source communities into four buckets - federations, clubs, stadiums, and toys - characterized in a two-by-two matrix based on contributor growth and user growth. Federations are the pinnacle of open source software development: many contributors and many users, creating a vibrant ecosystem of innovative development. Clubs represent more niche and focused communities, including vertical-specific tools like the astronomy package Astropy. Stadiums are highly centralized but large communities - typically only a few contributors but a significant user base. It is up to these core contributors to lead the ecosystem, as opposed to decentralized federations, which have so many contributors they can go in all directions. Lastly, there are toys, which have low user growth and low contributor growth but may still be very useful projects. Interestingly, projects can shift in and out of these community types as they become more or less relevant. For example, developers at Yahoo open-sourced their Hadoop project, based on Google’s File System and MapReduce papers. The initially small project slowly became huge, moving from a stadium to a federation, and spawned subprojects like Apache Spark. Projects mature and change, and code can remain in production for years after a project’s day in the spotlight has passed. According to Eghbal, “Some of the oldest code ever written is still running in production today. Fortran, which was first developed in 1957 at IBM, is still widely used in aerospace, weather forecasting, and other computational industries.” These ecosystems can exist forever, but their costs (creation, distribution, and maintenance) are often hidden, especially the maintenance.
The cost of creation and distribution has dropped significantly in the past ten years, with many of the world’s developers working in the same ecosystem on GitHub, but that has also increased the total cost of maintenance, which can be significant. Bootstrap co-creator Jacob Thornton likens maintenance costs to caring for an old dog: “I’ve created endlessly more and more projects that have now turned [from puppies] into dogs. Almost every project I release will get 2,000, 3,000 watchers, which is enough to have this guilt, which is essentially like ‘I need to maintain this, I need to take care of this dog.” Communities change from toys to clubs to stadiums to federations, but they may also change back as new tools are developed. Old projects still need to be maintained, and that maintenance comes down to committed developers.
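Eghbal's two-by-two can be written down directly. A small sketch (the function name and the "high"/"low" encoding are my own, chosen to mirror the matrix described above):

```python
def classify_community(user_growth: str, contributor_growth: str) -> str:
    """Eghbal's two-by-two, keyed on (user growth, contributor growth)."""
    matrix = {
        ("high", "high"): "federation",  # many users, many contributors (e.g., Hadoop at its peak)
        ("high", "low"):  "stadium",     # large audience, only a few core contributors
        ("low",  "high"): "club",        # niche tool whose users are also its contributors
        ("low",  "low"):  "toy",         # small on both axes, but possibly still useful
    }
    return matrix[(user_growth, contributor_growth)]
```

The Hadoop story above is then just a move through this table: its user base exploded first (stadium), and as contributors piled in it crossed into federation territory.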

Business Themes

  1. Revenue Model Matching. One of the earliest code-hosting platforms was SourceForge, founded in 1999. The company pioneered the idea of code-hosting - letting developers publish their code for easy download - and became famous for letting open-source developers use the platform free of charge. SourceForge was created by VA Software, an internet-bubble darling that saw its stock price decimated when the bubble finally burst. The challenge with scaling SourceForge was a revenue model mismatch: VA Software made money with paid advertising, which allowed it to offer its tools to developers for free but meant its revenue was highly variable. When the company went public, it was still a small and unproven business, posting $17M in revenue against $31M in costs. The revenue model mismatch is starting to rear its head again, with traditional software-as-a-service (SaaS) recurring subscription models catching some heat. Many cloud service and API companies are pricing by usage rather than a fixed, high-margin subscription fee. This is the classic electric utility model: you only pay for what you use. Snowflake CEO Frank Slootman (who formerly ran SaaS pioneer ServiceNow) commented: “I also did not like SaaS that much as a business model, felt it not equitable for customers.” Snowflake instead charges based on credits, which pay for usage. The issue with usage-based billing has traditionally been price transparency, which can be obscured by customer credit systems and hard-to-calculate pricing, as with Amazon Web Services. The revenue model mismatch was just one problem for SourceForge: as git became the dominant version control system, SourceForge was reluctant to support it, opting for its traditional tools instead. Pricing norms change and new technology comes out every day; it’s imperative that businesses have a strong grasp of the value they provide to their customers and align their revenue model with it, so a fair trade-off is created.
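A quick sketch of the two pricing models makes the trade-off concrete (illustrative numbers and function names only - not Snowflake's actual rates):

```python
def subscription_cost(months: int, monthly_fee: float) -> float:
    """Classic SaaS: a fixed recurring fee regardless of consumption."""
    return months * monthly_fee


def usage_cost(credits_consumed: float, price_per_credit: float) -> float:
    """Utility-style usage billing: pay only for the credits you burn."""
    return credits_consumed * price_per_credit


def break_even_credits(monthly_fee: float, price_per_credit: float) -> float:
    """Monthly consumption above which the flat subscription becomes the better deal."""
    return monthly_fee / price_per_credit
```

A light user comes out ahead on usage pricing and a heavy user on the flat fee - which is exactly why Slootman calls usage billing more equitable for customers: nobody pays for capacity they never touch.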

  2. Open Core Model. There has been enormous growth in open source businesses in the past few years, which typically operate on an open core model. In the open core model, the company offers a free, usually feature-limited, version of its software alongside a proprietary enterprise version with additional features. Developers might adopt the free version but hit usage limits or feature constraints, causing them to purchase the paid version. The open-source “core” is often just that - freely available for anyone to download and modify; the core’s actual source code is normally published on GitHub, and developers can fork the project or do whatever they wish with that open core. The commercial product is normally closed source and not available for modification, giving the business a product to sell. Joseph Jacks, who runs Open Source Software (OSS) Capital, an investment firm focused on open source, identifies four types of open core business models, which differ based on how much of the software is open source. GitHub, interestingly, employs the “thick” model of being mostly proprietary, with only 10% of its software truly open-sourced. It’s funny that the site that hosts and facilitates the most open source development is itself mostly proprietary. Jacks nails the most important question in the open core model: “How much stays open vs. How much stays closed?” The consequences can be dire for a business: open-source too much and suddenly other companies can quickly recreate your tool; many DevOps tool makers have experienced these perils, with some losing control of the very projects they were supposed to facilitate. On the flip side, keeping more of the software closed source goes against the open-source ethos and can be viewed as selling out.
The continuous delivery pipeline project Jenkins has struggled to satiate its growing user base, leading the CEO of CloudBees, the company behind Jenkins, to publish a blog post entitled “Shifting Gears”: “But at the same time, the incremental, autonomous nature of our community made us demonstrably unable to solve certain kinds of problems. And after 10+ years, these unsolved problems are getting more pronounced, and they are taking a toll — segments of users correctly feel that the community doesn’t get them, because we have shown an inability to address some of their greatest difficulties in using Jenkins. And I know some of those problems, such as service instability, matter to all of us.” Striking this balance is incredibly tough, especially in a world of competing projects and finite development time and money in a commercial setting. Furthermore, large companies like AWS are taking open core tools like Elastic and MongoDB and recreating them as proprietary services (Elasticsearch Service and DocumentDB), prompting those companies’ CEOs to lash out, appropriately. Commercializing open source software is a never-ending battle against proprietary players and yourself.

  3. Compensation for Open Source. Eghbal characterizes two types of funders of open source: institutions (companies, governments, universities) and individuals (usually developers who are direct users). Companies like to fund improved code quality, influence, and access to core projects. The largest groups of contributors to open source projects are corporations like Microsoft, Google, Red Hat, IBM, and Intel. These corporations are big and profitable enough to hire individuals and let them strike a comfortable balance between time spent on commercial software and time spent on open source. This also functions as a marketing expense: big companies like having influencer developers on the payroll to get the company’s name out into the ecosystem. Evan You, who authored the JavaScript framework Vue.js, described company-backed open-source projects: “The thing about company-backed open-source projects is that in a lot of cases… they want to make it sort of an open standard for a certain industry, or sometimes they simply open-source it to serve as some sort of publicity improvement to help with recruiting… If this project no longer serves that purpose, then most companies will probably just cut it, or (in other terms) just give it to the community and let the community drive it.” In contrast to company-funded projects, developer-funded projects are often donation-based. With the rise of online payment tools like Stripe and Patreon, more and more funding is being directed to individual open source developers. Unfortunately, it is still hard for many developers to sustain open source work on individual donations, especially if they maintain multiple projects at the same time.
Open source developer Sindre Sorhus explains: “It’s a lot harder to attract company sponsors when you maintain a lot of projects of varying sizes instead of just one large popular project like Babel, even if many of those projects are the backbone of the Node.js ecosystem.” Whether working in a company or as an individual developer, building and maintaining open source software takes significant time and effort and rarely leads to significant monetary compensation.

Dig Deeper

  • List of Commercial Open Source Software Businesses by OSS Capital

  • How to Build an Open Source Business by Peter Levine (General Partner at Andreessen Horowitz)

  • The Mind Behind Linux (a talk by Linus Torvalds)

  • What is open source - a blog post by Red Hat

  • Why Open Source is Hard by PHP Developer Jose Diaz Gonzalez

  • The Complicated Economy of Open Source

tags: Github, Gitlab, Google, Twitch, Instagram, Elm, Javascript, Open Source, Git, Linus Torvalds, Linux, Microsoft, MapReduce, IBM, Fortran, Node, Vue, SourceForge, VA Software, Snowflake, Frank Slootman, ServiceNow, SaaS, AWS, DevOps, CloudBees, Jenkins, Intel, Red Hat, batch2
categories: Non-Fiction
 

July 2020 - Innovator's Dilemma by Clayton Christensen

This month we review the technology classic, the Innovator’s Dilemma, by Clayton Christensen. The book attempts to answer the age-old question: why do dominant companies eventually fail?

Tech Themes

  1. The Actual Definition of Disruptive Technology. Disruption is a term that is frequently thrown around in Silicon Valley circles. Every startup thinks its technology is disruptive, meaning it changes how the customer currently performs a task or service. The actual definition, discussed in detail throughout the book, is relatively specific. Christensen re-emphasizes this distinction in a 2015 Harvard Business Review article: "Specifically, as incumbents focus on improving their products and services for their most demanding (and usually most profitable) customers, they exceed the needs of some segments and ignore the needs of others. Entrants that prove disruptive begin by successfully targeting those overlooked segments, gaining a foothold by delivering more-suitable functionality—frequently at a lower price. Incumbents, chasing higher profitability in more-demanding segments, tend not to respond vigorously. Entrants then move upmarket, delivering the performance that incumbents' mainstream customers require, while preserving the advantages that drove their early success. When mainstream customers start adopting the entrants' offerings in volume, disruption has occurred." The book posits that there are generally two types of innovation: sustaining and disruptive. While disruptive innovation focuses on low-end or new, small-market entry, sustaining innovation merely continues markets along their already determined axes. For example, in the book, Christensen discusses the disk drive industry, mapping out the jumps that packed more memory and power into each subsequent product release. For each disruptive jump, there is a slew of sustaining jumps that improve product performance for existing customers but don't necessarily turn non-customers into customers. Disruption occurs only when new use cases emerge, such as rugged portable usage and the arrival of PCs. 
Understanding the specific definition can help companies and individuals better navigate muddled tech messaging; Uber, for example, is shown to be a sustaining technology because its market already existed, and the company didn't offer lower prices or a new business model. Understanding the intricacies of the definition can help incumbents spot disruptive competitors.

  2. Value Networks. Value networks are an underappreciated and somewhat confusing topic covered in The Innovator's Dilemma's early chapters. A value network is defined as "The context within which a firm identifies and responds to customers' needs, solves problems, procures input, reacts to competitors, and strives for profit." A value network seems all-encompassing on the surface. In reality, a value network serves to simplify the lens through which an organization must make complex decisions every day. Shown as a nested product architecture, a value network attempts to show where a company interacts with other products. By distilling the product down to its most atomic components (literally computer hardware), we can see all of the considerations that impact a business. Once we have this holistic view, we can consider the decisions and tradeoffs that face an organization every day. The takeaway here is that organizations care about different levels of performance for different products. For example, when looking at cloud computing services at AWS, Azure, or GCP, we see Amazon EC2 instances, Azure VMs, and Google Cloud VMs with different operating systems, different purposes (general, compute, memory), and different sizes. General-purpose might be fine for basic enterprise applications, while gaming applications might need compute-optimized, and real-time big data analytics may need a memory-optimized VM. While it gets somewhat forgotten throughout the book, this point means that organizations focused on producing only compute-intensive machines may not be the best for memory-intensive, because the customers of the organization may not have a use for them. In the book's example, some customers (of bigger memory providers) looked at smaller memory applications and said there was no need. In reality, there was massive demand in the rugged, portable market for smaller memory disks. 
When approaching disruptive innovation, it's essential to recognize your organization's current value network so that you don't target new technologies at customers who don't need them.
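The VM-family tradeoff above can be sketched as a simple lookup. This is an illustrative simplification, not an official sizing guide: the EC2 family names are real, but the workload-to-family pairing is an assumption for the sake of the example.

```python
# Illustrative mapping of workload type to a suitable AWS EC2 instance family.
# The family names (m5, c5, r5) are real EC2 families; the pairing of
# workloads to families is a simplification for illustration only.
INSTANCE_FAMILY = {
    "basic enterprise app": "m5",  # general purpose: balanced CPU and memory
    "game server":          "c5",  # compute optimized: high CPU per GB of RAM
    "real-time analytics":  "r5",  # memory optimized: high GB of RAM per CPU
}

def pick_family(workload: str) -> str:
    """Return the instance family suited to a workload, defaulting to general purpose."""
    return INSTANCE_FAMILY.get(workload, "m5")
```

The point of the sketch mirrors the value-network argument: a vendor whose catalog only contains one column of this table has no answer for customers whose workloads sit in another column.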

  3. Product Commoditization. Christensen spends a lot of time describing the dynamics of the disk drive industry, where companies continually supplied ever-smaller drives with better performance. Christensen's description of commoditization is very interesting: "A product becomes a commodity within a specific market segment when the repeated changes in the basis of competition, completely play themselves out, that is, when market needs on each attribute or dimension of performance have been fully satisfied by more than one available product." At this point, products begin competing primarily on price. In the disk drive industry, companies first competed on capacity, then on size, then on reliability, and finally on price. This price war is reminiscent of the current state of the Continuous Integration / Continuous Deployment (CI/CD) market, a subsegment of DevOps software. Companies in the space, including Github, CircleCI, Gitlab, and others, are now competing primarily on price to win new business. Each of the cloud providers has similar technologies native to its public cloud offering (AWS CodePipeline and CloudFormation, GitHub Actions, Google Cloud Build), and their scale lets them give these tools away essentially for free. The building block of CI/CD software is git, the open-source version control system created by Linux creator Linus Torvalds. With all the providers leveraging a massive open-source project, there is little room for true differentiation. Christensen even says: "It may, in fact, be the case that the product offerings of competitors in a market continue to be differentiated from each other. But differentiation loses its meaning when the features and functionality have exceeded what the market demands." Only time will tell whether these companies can pivot into burgeoning, highly differentiated technologies.
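The degree of commoditization is visible in the pipeline definitions themselves. Here is a minimal, hypothetical GitHub Actions workflow (the `make test` command and file path are assumptions about a project, not anything from the book); near-identical pipelines can be expressed in CircleCI, GitLab CI, or Google Cloud Build configs, because each product is a thin layer over git events.

```yaml
# Hypothetical minimal workflow at .github/workflows/ci.yml.
name: ci
on: [push]                          # trigger on every git push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # a git clone of the pushed commit
      - run: make test              # assumed project test command
```

When the core abstraction (clone a commit, run a command) is this small and shared, price and free bundling become the main battleground.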

Business Themes

  1. Resources-Processes-Values (RPV) Framework. The RPV framework is a powerful lens for understanding the challenges that large businesses face. Companies have resources (people, assets, technology, product designs, brands, information, cash, relationships with customers, etc.) that can be transformed into greater-value products and services. The way organizations go about converting these resources is the organization's processes. These processes can be formal (documented sales strategies, for example) or informal (culture and habitual routines). Processes are the big reason organizations struggle to deal with emerging technologies. Because culture and habit are ingrained in the organization, the same process used to launch into a mature, slow-growing market may be applied to a fast-growing, dynamic sector. Christensen puts it best: "This means the very mechanisms through which organizations create value are intrinsically inimical to change." Lastly, companies have values, or "the standards by which employees make prioritization decisions." When there is a mismatch between the resources, processes, and values of an organization and the product or market that the organization is chasing, it's rare that the business can compete successfully in the disruptive market. To see this misalignment in action, Christensen describes a meeting with a CEO who had identified the disruptive change happening in the disk-drive market and had gotten a product to market to meet the growing demand. In response to a publication showing the fast growth of the market, the CEO lamented to Christensen: "I know that's what they think, but they're wrong. There isn't a market. We've had that drive in our catalog for 18 months. Everyone knows we've got it, but nobody wants it." The issue was not the product or market demand, but the organization's values. 
As Christensen continues, "But among the employees, there was nothing about an $80 million, low-end market that solved the growth and profit problems of a multi-billion dollar company – especially when capable competitors were doing all they could to steal away the customers providing those billions. And way at the other end of the company there was nothing about supplying prototype quantities of 1.8-inch drives to an automaker that solved the problem of meeting the 1994 quotas of salespeople whose contacts and expertise were based so solidly in the computer industry." The CEO cared about the product, but his team did not. The RPV framework helps evaluate large companies and the challenges they face in launching new products.

  2. How to Manage Through Technological Change. Christensen points out three primary ways of managing through disruptive technology change: 1. "Acquire a different organization whose processes and values are a close match with the new task." 2. "Try to change the processes and values of the current organization." 3. "Separate out an independent organization and develop within it the new processes and values that are required to solve the new problem." Acquisitions are a way to get out ahead of disruptive change. There are many examples, but two recent ones come to mind: Microsoft's acquisition of Github and Facebook's acquisition of Instagram. Microsoft paid a whopping $7.5B for Github in 2018, when Github was rumored to be at roughly $200M in revenue (a 37.5x revenue multiple!). Github was undoubtedly a mature business with a great product, but it didn't have a ton of enterprise adoption. Diane Greene at Google Cloud tried to get Sundar Pichai to pay more, but he said no. Github has changed Azure's position within the market and continued Microsoft's anti-Amazon strategy of pushing open-source technology. In contrast to the Github acquisition, Instagram had only 13 employees when it was acquired for $1B. Zuckerberg saw the threat the social network represented to Facebook, and today the acquisition is regularly touted as one of the best ever. Instagram was developing a social network solely based on photographs, right at the time every person suddenly had an excellent smartphone camera in their pocket. The acquisition occurred right as the market was ballooning, and Facebook capitalized on that growth. The second way of managing technological change is through changing cultural norms. This is rarely successful, because you are fighting against all of the processes and values deeply embedded in the organization. 
Indra Nooyi cited a desire to move faster on culture as one of her biggest regrets as a young executive: "I’d say I was a little too respectful of the heritage and culture [of PepsiCo]. You’ve got to make a break with the past. I was more patient than I should’ve been. When you know you have to make a change, at some point you have to say enough is enough. The people who have been in the company for 20-30 years pull you down. If I had to do it all over again, I might have hastened the pace of change even more." Lastly, Christensen prescribes creating an independent organization matched to the resources, processes, and values that the new market requires. Three spin-out/spin-in examples with different flavors of this come to mind. First, Cisco developed a spin-in practice whereby it would take members of its organization and fund them to start a new company that developed a new product. The spin-ins worked for a time but caused major cultural issues. Second, as we've discussed, one of the key reasons AWS was born was that Chris Pinkham was in South Africa, thousands of miles away from Amazon corporate in Seattle; this distance, and that team's focus, allowed it to come up with a major advance in computing. Lastly, Mastercard started Mastercard Labs a few years ago. CEO Ajay Banga told his team: "I need two commercial products in three years." He doesn't tell his CFO the Labs budget, and he is the only person from his executive team who interacts with the unit. This separation of resources, processes, and values allows these smaller organizations to be more nimble in finding emerging technology products and markets.

  3. Discovering Emerging Markets.

    The resources-processes-values framework can also show us why established firms fail to address emerging markets. Established companies rely on formal budgeting and forecasting processes whereby resources are allocated based on market estimates and revenue forecasts. Christensen highlights several important factors for tackling emerging markets, including focusing on ideas, failure, and learning. Underpinning all of these ideas is the impossibility of predicting the scale and growth rate of disruptive technologies: "Experts' forecasts will always be wrong. It is simply impossible to predict with any useful degree of precision how disruptive products will be used or how large their markets will be." Because of this challenge, relying too heavily on these estimates to underpin financial projections can cause businesses to view initial market development as a failure or as unworthy of the company's time. When HP launched a new 1.3-inch disk drive, which could be embedded in PDAs, the company mandated that its revenues scale up to $150M within three years, in line with market estimates. That market never materialized, and the initiative was abandoned as a failed investment. Christensen argues that because disruptive markets are unknowable in advance, planning has to come after action, and thus strategic and financial planning must be discovery-based rather than execution-based. Companies should focus on learning their customers' needs and the right business model to attack the problem, rather than planning to execute their initial vision. As he puts it: "Research has shown, in fact, that the vast majority of successful new business ventures abandoned their original business strategies when they began implementing their initial plans and learned what would and would not work." One big fan of Christensen's work is Jeff Bezos, and it's easy to see why, given Amazon's focus on releasing new products in this discovery-driven manner. 
The pace of product releases is simply staggering (almost one per day). Bezos even talked about this exact issue in his 2016 shareholder letter: "The senior team at Amazon is determined to keep our decision-making velocity high. Speed matters in business – plus a high-velocity decision making environment is more fun too. We don't know all the answers, but here are some thoughts. First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you're wrong? I wrote about this in more detail in last year's letter. Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you're probably being slow." Amazon is one of the first large organizations to truly embrace this decision-making style, and clearly, the results speak for themselves.

Dig Deeper

  • What Jeff Bezos Tells His Executives To Read

  • Github Cuts Subscription Price by More Than Half

  • Ajay Banga Opening Address at MasterCard Innovation Forum 2014

  • Clayton Christensen Describing Disruptive Innovation

  • Why Cisco’s Spin-Ins Never Caught On

tags: Amazon, Google Cloud, Microsoft, Azure, Github, Gitlab, CircleCI, Pepsi, Jeff Bezos, Indra Nooyi, Mastercard, Ajay Banga, HP, Uber, RPV, Facebook, Instagram, Cisco, batch2
categories: Non-Fiction
 
