Tech Book of the Month

May 2021 - Crossing the Chasm by Geoffrey Moore

This month we take a look at a classic high-tech growth marketing book. Originally published in 1991, Crossing the Chasm became a beloved book within the tech industry, although its glory has faded over the years. While the book is often overly prescriptive in its suggestions, it provides several useful frameworks for addressing growth challenges, primarily early in a company’s history.

Tech Themes

  1. Technology Adoption Life Cycle. The core framework of the book describes how new technology gets adopted, an interesting micro-view of the broader phenomenon described in Carlota Perez’s Technological Revolutions. In Moore’s chasm-crossing world, five personas dominate adoption: innovators, early adopters, early majority, late majority, and laggards. Innovators are technologists, happy to accept rougher user experiences to push the boundaries of their capabilities and knowledge. Early adopters are intuitive buyers who enjoy trying new technologies but want a slightly better experience. The early majority are “wait and see” folks who want others to battle-test the technology before trying it out, but don’t typically wait too long before buying. The late majority want significant reference material and proven usage before buying a product. Laggards simply don’t want anything to do with new technology. It is interesting to think about this adoption pattern alongside the big technology migrations of the past twenty years: mainframes to on-premise servers to cloud computing, home phones to cell phones to iPhone/Android, radio to CDs to downloadable music to Spotify, and cash to checks to credit/debit to mobile payments. Each of these massive migrations aligns well with the adoption model; everyone knows someone ready to apply the latest tech, and someone who doesn’t want anything to do with it (Warren Buffett!). A quick sketch of the segment shares implied by the bell curve appears after this list.

  2. Crossing the Chasm. If we accept the above as a general pattern for how products are adopted by society (obviously it’s much more of a mish-mash in reality), we can posit that the most important step is from the early adopters to the early majority - the spot where the bell curve (shown below) really opens up. This is what Geoffrey Moore calls crossing the chasm. The idea is highly reminiscent of Clay Christensen’s “not good enough” disruption pattern and Gartner’s technology hype cycle. The examples Moore uses (in 1991) are also striking: neural networking software and desktop video conferencing. Moore lamented: “With each of these exciting, functional technologies it has been possible to establish a working system and to get innovators to adopt it. But it has not as yet been possible to carry that success over to the early adopters.” Both technologies have since clearly crossed into the mainstream, with Google’s TensorFlow machine learning library and video conferencing tools like Zoom that make it easy to speak with anyone over video instantly. So what was the great unlock that made these technologies commercially viable and successfully adopted? Since 1990 there have been major changes in several underlying inputs: computer storage and data processing are almost limitless with cloud computing, network bandwidth has grown exponentially while costs have dropped, and software has made it far easier to build great user experiences. This is a version of not-good-enough technologies benefiting substantially from changes in underlying inputs; the systems you could deploy in 1990 were simply not comparable to what you can deploy today. The real question is: are there different adoption curves for different technologies, and do they really follow a normal distribution as Moore shows here?

  3. Making Markets & Product Alternatives. Moore positions the book as if you were a marketing executive at a high-tech company and offers several exercises to help you identify a target market, customer, and use case. Chapter six, “Define the Battle,” covers the best way to position a product within a target market. In early markets, competition comes from non-consumption, and the company has to offer a “whole product” that enables the user to actually derive benefit from it. Thus, Moore recommends targeting innovators and early adopters - technologist visionaries able to see the benefit of the product. This mirrors Clayton Christensen’s commoditization/de-commoditization framework, where new-market products must offer all of the core components of a system combined into one solution; over time the axis of commoditization shifts toward the underlying components as companies differentiate by using faster and better sub-components. Positioning in these early-market scenarios should focus on the contrast between your product and legacy ways of performing the task (use our software instead of pen and paper, for example). In mainstream markets, companies should position their products within the established buying criteria developed by pragmatist buyers. A market alternative serves as the incumbent, well-known provider, and a product alternative is an upstart competitor that you are clearly beating. What’s odd here is that you are constantly referring to your competitors as alternatives to your product, which seems counter-intuitive; but enterprise buyers obviously have alternatives they are considering, and you need to make the case that your solution is the best. Choosing a market alternative lets you tap a budget previously used for a similar solution, and the product alternative helps differentiate your technology relative to other upstarts. Moore’s simple positioning formula, templated after this list, has helped hundreds of companies establish their go-to-market message: “For (target customers—beachhead segment only) • Who are dissatisfied with (the current market alternative) • Our product is a (new product category) • That provides (key problem-solving capability). • Unlike (the product alternative), • We have assembled (key whole product features for your specific application).”
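
As referenced in the first theme, here is a quick sketch of the adopter-segment shares implied by a normal adoption curve. It assumes the textbook standard-deviation cutoffs from Rogers’ diffusion model (which Moore’s curve inherits); the percentages are computed from those cutoffs, not taken from the book.

```python
from scipy.stats import norm

# Rogers' classic segmentation of a normal adoption curve by
# standard deviations from the mean adoption time.
segments = {
    "innovators":     (float("-inf"), -2),
    "early adopters": (-2, -1),
    "early majority": (-1, 0),
    "late majority":  (0, 1),
    "laggards":       (1, float("inf")),
}

for name, (lo, hi) in segments.items():
    share = norm.cdf(hi) - norm.cdf(lo)
    print(f"{name:>14}: {share:.1%}")

# innovators: 2.3%, early adopters: 13.6%, early majority: 34.1%,
# late majority: 34.1%, laggards: 15.9% -- the chasm sits between
# the early-adopter and early-majority segments.
```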
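
And a toy rendering of Moore’s positioning formula as a fill-in-the-blanks template; the example product, segment, and alternatives are invented purely for illustration.

```python
def positioning_statement(target, market_alt, category,
                          benefit, product_alt, whole_product):
    """Assemble Moore's elevator-pitch positioning formula."""
    return (
        f"For {target} who are dissatisfied with {market_alt}, "
        f"our product is a {category} that provides {benefit}. "
        f"Unlike {product_alt}, we have assembled {whole_product}."
    )

# Hypothetical example values, beachhead segment only.
print(positioning_statement(
    target="mid-market finance teams",
    market_alt="spreadsheet-based close processes",
    category="automated reconciliation platform",
    benefit="a one-day monthly close",
    product_alt="generic workflow tools",
    whole_product="prebuilt ERP integrations and audit reporting",
))
```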

Business Themes

[Figures: the technology adoption life cycle bell curve; Philip Kotler’s five product levels (the whole product concept)]
  1. What happened to these examples? Moore offers a number of examples of crossing the chasm, but what actually happened to these companies after the book was written? Clarify Software was bought by Nortel in October 1999 for $2.1B (a 16x revenue multiple) and then divested by Nortel to Amdocs in October 2001 for $200M - an epic disaster of capital allocation. Documentum was acquired by EMC in 2003 for $1.7B in stock and was later sold to OpenText in 2017 for $1.6B. The 3Com Palm Pilot was a mess of acquisitions and divestitures: Palm was acquired by U.S. Robotics, which was acquired by 3Com in 1997; Palm was then spun out in a 2000 IPO that saw a 94% drop. Palm stopped making PDA devices in 2008, and in 2010 HP acquired Palm for $1.2B in cash. Smartcard maker Gemplus merged with competitor Axalto in a €1.8B deal in 2005, creating Gemalto, which was later acquired by Thales in 2019 for $8.4B. So my three questions are: Did these companies really cross the chasm, or were they just the readily available success stories of their time? Do you need to be the company that leads the chasm crossing, or can someone else do it to your benefit? And what is the next step in the journey after the chasm is crossed - why did so many of these companies fail after a time?

  2. Whole Products. Moore leans into an idea called the Whole Product Concept, which was popularized by Theodore Levitt’s 1983 book The Marketing Imagination and Bill Davidow’s (of early VC firm Mohr Davidow) 1986 book Marketing High Technology. Moore explains the idea: “The concept is very straightforward: There is a gap between the marketing promise made to the customer—the compelling value proposition—and the ability of the shipped product to fulfill that promise. For that gap to be overcome, the product must be augmented by a variety of services and ancillary products to become the whole product.” There are four different perceptions of the product: “1. Generic product: This is what is shipped in the box and what is covered by the purchasing contract. 2. Expected product: This is the product that the consumer thought she was buying when she bought the generic product. It is the minimum configuration of products and services necessary to have any chance of achieving the buying objective. For example, people who are buying personal computers for the first time expect to get a monitor with their purchase—how else could you use the computer?—but in fact, in most cases, it is not part of the generic product. 3. Augmented product: This is the product fleshed out to provide the maximum chance of achieving the buying objective. In the case of a personal computer, this would include a variety of products, such as software, a hard disk drive, and a printer, as well as a variety of services, such as a customer hotline, advanced training, and readily accessible service centers. 4. Potential product: This represents the product’s room for growth as more and more ancillary products come on the market and as customer-specific enhancements to the system are made.” These are the product features that may be expected later, or added over time, to drive adoption. Moore makes a subtle point that after a while, investments in the generic/out-of-the-box product functionality drive less and less purchase behavior, in tandem with broader market adoption. Customers want to be wooed by the latest technology, and as products become similar, they care less about what’s in the product today and more about what’s coming. Moore emphasizes Whole Product Planning, where you map out how to get those additional features into the product over time - but Moore was also operating in an era when product decisions and development processes ran on two-year-plus timelines, not in the DevOps era of today, where product updates are sometimes pushed daily. In the bottoms-up/DevOps era, it’s become clear that finding your niche users, driving strong adoption from them, and integrating their feature ideas as soon as possible can yield a big success.

  3. Distribution Channels. Moore walks through each of the potential ways a company can distribute its solutions: direct sales, two-tier retail, one-tier retail, internet retail, two-tier value-added reselling, national roll-ups, original equipment manufacturers (OEMs), and system integrators. As Moore puts it, “The number-one corporate objective, when crossing the chasm, is to secure a channel into the mainstream market with which the pragmatist customer will be comfortable.” These distribution types are clearly relics of technology distribution in the early 1990s. Great direct sales organizations produced some of the best and biggest technology companies of yesterday, including IBM, Oracle, CA Technologies, SAP, and HP. What’s fascinating about this framework is that you need just one channel to reach the pragmatist customer, and in the last 10 years that channel has become the internet for many technology products. Moore even recognized that direct sales had produced poor customer alignment: “First, wherever vendors have been able to achieve lock-in with customers through proprietary technology, there has been the temptation to exploit the relationship through unfairly expensive maintenance agreements [Oracle did this big time] topped by charging for some new releases as if they were new products. This was one of the main forces behind the open systems rebellion that undermined so many vendors’ account control—which, in turn, decreased predictability of revenues, putting the system further in jeopardy.” So what is the strategy behind the popular open-source, bottoms-up go-to-market motions at companies like GitHub, HashiCorp, Redis, Confluent, and others? It’s straightforward: the internet and simple APIs (often hosted on GitHub) provide the fastest channel to reach the developer end market while developers are coding. Open-source scaling can take years and years to cross the chasm because most early open-source adopters are technology innovators; eventually, though, solutions permeate massive enterprises and make the jump. With these new internet-driven go-to-market motions, we’ve seen large companies grow primarily through inbound marketing tactics rather than direct outbound sales. The companies named above, as well as Shopify, Twilio, Monday.com, and others, have done a great job growing to massive scale on the backs of their products (product-led growth) instead of a salesforce. What’s important to realize is that distribution is an abstract term, and no single motion or strategy is right for every company. The next distribution channel will surprise everyone!

Dig Deeper

  • How the sales team behind Monday is changing the way workplaces collaborate

  • An Overview of the Technology Adoption Lifecycle

  • A Brief History of the Cloud at NDC Conference

  • Frank Slootman (Snowflake) and Geoffrey Moore Discuss Disruptive Innovations and the Future of Tech

  • Growth, Sales, and a New Era of B2B by Martin Casado (GP at Andreessen Horowitz)

  • Strata 2014: Geoffrey Moore, "Crossing the Chasm: What's New, What's Not"

tags: Crossing the Chasm, GitHub, HashiCorp, Redis, Monday.com, Confluent, Open Source, Snowflake, Shopify, Twilio, Geoffrey Moore, Gartner, TensorFlow, Google, Clayton Christensen, Zoom, Nortel, Amdocs, OpenText, EMC, HP, CA, IBM, Oracle, SAP, Gemalto, DevOps
categories: Non-Fiction
 

April 2021 - Innovator's Solution by Clayton Christensen and Michael Raynor

This month we take another look at disruptive innovation in the companion piece to Clayton Christensen’s Innovator’s Dilemma, our July 2020 book. The book crystallizes the types of disruptive innovation and provides frameworks for how incumbents can introduce or combat these innovations. It was a pleasure to read and will serve as a great reference for the future.

Tech Themes

  1. Integration and Outsourcing. Today, technology companies rely on a variety of software tools and open-source components to build their products. When you stitch all of these components together, you get the full product architecture. A great example is GitLab, an SMB DevOps provider: Postgres for the relational database, Redis for caching, NGINX for request routing, Sentry for monitoring and error tracking, and so on. Each of these subsystems interacts with the others to form the powerful GitLab product, and those interaction points are called interfaces. The key product development question for companies is: “Which things do I build internally and which do I outsource?” A simple answer offered by many MBA students is “Outsource everything that is not part of your core competence.” As Clayton Christensen points out, “The problem with core-competence/not-your-core-competence categorization is that what might seem to be a non-core activity today might become an absolutely critical competence to have mastered in a proprietary way in the future, and vice versa.” A great example that we’ve discussed before is IBM’s decision to go with Microsoft DOS for its operating system and Intel for its microprocessor. At the time, IBM thought it was making a strategic decision to outsource things that were not within its core competence, but it inadvertently handed almost all of the industry profits from personal computing to Intel and Microsoft. Other competitors copied IBM’s modular approach, and the whole industry slugged it out on price. Whether to outsource really depends on what might be important in the future, but that is difficult to predict, so the question of integration vs. outsourcing comes down to the state of the product and market itself: is this product “not good enough” yet? If the answer is yes, then a proprietary, integrated architecture is likely needed just to make the product work for customers. Over time, as competitors enter the market and the fully integrated platform becomes commoditized, the individual subsystems become increasingly important competitive drivers. So the decision to outsource or build internally must be made based on the status of the product and the market it’s attacking.

  2. Commoditization within Stacks. The above point leads to the fascinating question of how companies fall into the commoditization trap. It happens through overshooting, where companies create products that are too good (which I find counter-intuitive - who thought that doing your job really well would cause customers to leave!). Christensen describes this through the lens of a salesperson: “‘Why can’t they see that our product is better than the competition? They’re treating it like a commodity!’ This is evidence of overshooting…there is a performance surplus. Customers are happy to accept improved products, but unwilling to pay a premium price to get them.” At this point, what customers demand flips - they become willing to pay premium prices for innovations along a new trajectory of performance, most likely speed, convenience, and customization. “The pressure of competing along this new trajectory of improvement forces a gradual evolution in product architectures, away from the interdependent, proprietary architectures that had the advantage in the not-good-enough era toward modular designs in the era of performance surplus. In a modular world, you can prosper by outsourcing or by supplying just one element.” This cycle from integration to modularization and back is super fascinating. As an example of modularization, take the streaming company Confluent, maker of the open-source software project Apache Kafka. Confluent offers a real-time communications service that lets companies stream data (as events) rather than batching large data transfers; a minimal sketch of this event-streaming model follows this list. Its product is often a sub-system underpinning real-time applications, like providing data to traders at Citigroup. Clearly, the basis of competition in trading has pivoted over the years as more and more banking companies offer the service. Companies are prioritizing a new axis, speed, to differentiate among competing services, and when speed is the basis of competition, you use Confluent and Kafka to beat the competition. Now fast forward five years and assume all banks use Kafka and Confluent for their traders; the modular sub-system is thus commoditized. What happens? I’d posit that the axis would shift again, maybe toward convenience or customization, where traders want specific info displayed on a mobile phone or tablet. The fundamental idea is that “Disruption and commoditization can be seen as two sides of the same coin. That’s because the process of commoditization initiates a reciprocal process of de-commoditization [somewhere else in the stack].”

  3. The Disruptive Becomes the Disrupted. Disruption is a relative term. As we’ve discussed previously, disruption is often mischaracterized whenever startups enter markets and challenge incumbents. Disruption is really a focused and contextual concept whereby products that are “not good enough” by market standards enter a market with a simpler, more convenient, or less expensive product. These products and markets are often dismissed by incumbents, or even ceded by market leaders as those leaders move up-market to chase bigger customers. It’s fascinating to watch the disruptive become the disrupted. A great example is department stores. Initially, Macy’s offered a massive selection that couldn’t be found in any single store, and customers loved it. It did this by turning inventory three times per year at 40% gross margins, for a 120% return on capital invested in inventory. In the 1960s, Walmart and Kmart attacked the full-service department stores by offering a similar selection at much cheaper prices. They did this by setting up a value system whereby they could make 23% gross margins but turn inventories 5 times per year, enabling them to earn the industry’s golden 120% return on capital invested in inventory. Full-service department stores decided not to compete against these lower-gross-margin products and shifted more floor space to beauty and cosmetics, which offered even higher gross margins (55%) than the 40% they were used to. This meant they could increase their return on capital invested in inventory and their profits while avoiding a competitive threat. The process continued, with discount stores eventually pushing Macy’s out of most categories until Macy’s had nowhere to go. All of a sudden, the initially disruptive department stores had become the disrupted. We see this in technology markets as well. I’m not 100% sure this qualifies, but think about Salesforce and Oracle. Marc Benioff spent a number of years at Oracle and left to start Salesforce, which pioneered selling subscription cloud software on a per-seat revenue model. This meant a much cheaper option compared to traditional Oracle/Siebel CRM software. Salesforce was initially adopted by smaller customers that didn’t need the feature-rich platform offered by Oracle. Oracle dismissed Salesforce as competition even as Oracle CEO Larry Ellison seeded Salesforce and sat on its board. Today, Salesforce is a $200B company and briefly passed Oracle in market cap a few months ago. But now Salesforce has raised its prices and mostly targets large enterprise buyers to hit its ambitious growth initiatives. Down-market competitors like HubSpot have entered with cheaper solutions and more fully integrated marketing tools to help smaller businesses that aren’t ready for a fully featured Salesforce platform. Disruption is always contextual, and it never stops.
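
To ground the Confluent example in the second theme, here is a minimal event-streaming sketch using the confluent-kafka Python client. The broker address, topic, and trade payload are placeholders; this illustrates the publish side of the model, not Confluent’s actual setup.

```python
import json
from confluent_kafka import Producer

# Connect to a Kafka broker (placeholder address).
producer = Producer({"bootstrap.servers": "localhost:9092"})

# Publish each trade as an event the moment it happens, rather than
# batching a large file transfer overnight.
trade = {"ticker": "XYZ", "price": 101.25, "qty": 500}
producer.produce("trades", value=json.dumps(trade).encode("utf-8"))

# Block until the broker confirms delivery of the event.
producer.flush()
```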

Business Themes

  1. Low-End-Market vs. New-Market Disruption. There are two established types of disruption: low-end-market (down-market) and new-market. Low-end-market disruption establishes performance that is “not good enough” along traditional lines and targets overserved customers in the low end of the mainstream market, typically using a new operating or financial approach with structurally different margins than up-market competitors. Amazon.com is a quintessential low-end-market disruptor relative to traditional bookstores, offering prices so low they angered book publishers while giving customers the unmatched convenience of buying books online. In contrast, Robinhood is a great example of new-market disruption. Traditional discount brokerages like Charles Schwab and Fidelity had been around for a while (themselves disruptors of full-service models like Morgan Stanley Wealth Management). But Robinhood targeted a group of people that weren’t consuming in the market, namely teens and millennials, and did it in an easy-to-use app with a much better user interface than Schwab and Fidelity. Robinhood also pioneered new pricing with zero-fee trading and made revenue via a new financial approach, payment for order flow (PFOF): market makers, like Citadel Securities, pay Robinhood to route its customers’ orders to them, using that flow to optimize customers’ buying and selling prices. When approaching big markets, it’s important to ask: Is this targeted at a non-consumer today, or am I competing at a structurally lower margin with a new financial model and a “not quite good enough” product? The answer determines whether you are pursuing a low-end-market disruption or a new-market disruption.

  2. Jobs To Be Done. The jobs-to-be-done framework is one of the most important frameworks Clayton Christensen ever introduced. Marketers typically use advertising platforms like Facebook and Google to target specific demographics with their ads, and these segments are narrowly defined: “Males over 55, living in New York City, with household income above $100,000.” The issue with this categorization method is that while these attributes may be correlated with a product purchase, customers do not line up exactly with how marketers expect them to behave and buy the products their attributes predict. There may be a correlation, but simply targeting certain demographics does not yield a great result; marketers need to understand why the customer is adopting the product. This is where the jobs-to-be-done framework comes in. As Christensen describes it, “Customers - people and companies - have ‘jobs’ that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can ‘hire’ to get the job done. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently, and inexpensively as possible.” Christensen zeroes in on the contextual adoption of products; it is the circumstance, not the demographics, that matters most. He describes ways to view competition and feature development through the jobs-to-be-done lens using BlackBerry as an example (later disrupted by the iPhone). While the immature smartphone market was seeing feature competition from Microsoft, Motorola, and Nokia, BlackBerry and its parent company RIM came out with a simple-to-use device that allowed for short productivity bursts whenever time was available. This meant leaning into features that competed not with other smartphone providers (like better cellular reception) but with other fillers of those easy “productive” sessions: email, Wall Street Journal updates, and simple games. The BlackBerry was later disrupted by the iPhone, which offered more interesting applications in an easier-to-use package. Interestingly, the first iPhone shipped without an app store (as a proprietary, interdependent product) and was viewed as not good enough for work purposes, allowing the BlackBerry to co-exist; RIM’s management even dismissed the iPhone as a competitor initially. It wasn’t long until the iPhone caught up and eventually surpassed the BlackBerry as the world’s leading mobile phone.

  3. Brand Strategies. Companies may choose to address customers in a number of different circumstances and address a number of jobs to be done. It’s important that the company establishes specific ways of communicating the circumstance to the customer. Branding is powerful - something Warren Buffett, Terry Smith, and Clayton Christensen have all recognized as a durable growth driver. As Christensen puts it: “Brands are, at the beginning, hollow words into which marketers stuff meaning. If a brand’s meaning is positioned on a job to be done, then when the job arises in a customer’s life, he or she will remember the brand and hire the product. Customers pay significant premiums for brands that do a job well.” So what can a large corporate company do when faced with a disruptive challenger to its branding turf? It’s simple: add a word to the leading brand, targeted at the circumstance in which a customer might find themselves. Think about Marriott, one of the leading hotel chains. It offers a number of hotel brands: Courtyard by Marriott for business travel, Residence Inn by Marriott for a home away from home, the Ritz-Carlton for high-end luxurious stays, and Marriott Vacation Club for resort destinations. Each brand is targeted at a different job to be done, and customers intuitively understand what the brands stand for based on experience or advertising. A great technology example is Amazon Web Services (AWS), the cloud computing division of Amazon.com. Amazon pioneered the public cloud, and rather than launch under the Amazon.com brand, which might have confused its normal e-commerce customers, it created a completely new brand targeted at a different set of buyers and problems that maintained the quality and recognition Amazon had become known for. Another great retail example is the SNKRS app released by Nike. Nike understands that some customers are sneakerheads who want to know the latest about all Nike shoe drops, so it created a distinct, branded app called SNKRS that delivers news and updates on the latest, trendiest sneakers. These buyers might not be interested in logging into the main Nike app and may grow frustrated sifting through all the different types of apparel Nike offers just to find new shoes. The SNKRS app gives this set of consumers an easy way to find what they are looking for (convenience), which benefits Nike’s core business. Branding is powerful, and understanding the job to be done helps focus the right brand on the right job.

Dig Deeper

  • Clayton Christensen’s Overview on Disruptive Innovation

  • Jobs to Be Done: 4 Real-World Examples

  • A Peek Inside Marriott’s Marketing Strategy & Why It Works So Well

  • The Rise and Fall of Blackberry

  • Payment for Order Flow Overview

  • How Commoditization Happens

tags: Clayton Christensen, AWS, Nike, Amazon, Marriott, Warren Buffett, Terry Smith, Blackberry, RIM, Microsoft, Motorola, iPhone, Facebook, Google, Robinhood, Citadel, Schwab, Fidelity, Morgan Stanley, Oracle, Salesforce, Walmart, Macy's, Kmart, Confluent, Kafka, Citigroup, Intel, Gitlab, Redis
categories: Non-Fiction
 

February 2021 - Rise of the Data Cloud by Frank Slootman and Steve Hamm

This month we read a new book by the CEO of Snowflake and author of our November 2020 book, Tape Sucks. The book covers Snowflake’s founding, products, strategy, industry-specific solutions, and partnerships. Although the content is somewhat interesting, it reads more like a marketing book than an actually useful guide to cloud data warehousing. Nonetheless, it’s a solid quick read on the state of the data infrastructure ecosystem.

Tech Themes

  1. The Data Warehouse. A data warehouse is a type of database that is optimized for analytics. The optimizations mainly revolve around complex query performance, handling multiple data types, integrating data from different applications, and running fast queries across large data sets. In contrast to a normal database (like Postgres), a data warehouse is purpose-built for efficient retrieval of large data sets, not the high-performance read/write transactions of a typical relational database (a toy analytical query follows this list). The industry began in the late 1970s and early 1980s, driven by the work of the “Father of Data Warehousing,” Bill Inmon, and early competitor Ralph Kimball, a former Xerox PARC designer. Kimball launched Red Brick Systems in 1986, and Inmon launched Prism Solutions in 1991, with its leading product, the Prism Warehouse Manager. Prism went public in 1995 and was acquired by Ardent Software in 1998 for $42M, while Red Brick was acquired by Informix for ~$35M in 1998. In the background, Teradata, formed in the late 1970s by researchers at Caltech and employees from Citibank, was going through its own journey to the data warehouse. Teradata IPO’d in 1987 and was acquired by NCR in 1991; NCR itself was acquired by AT&T that same year; NCR then spun out of AT&T in 1997, and Teradata spun out of NCR in 2007. What a whirlwind of corporate acquisitions! Around that time, other new data warehouses were popping up, including Netezza (launched in 1999) and Vertica (2005). Netezza, Vertica, and Teradata were great solutions, but they ran highly efficient data warehouses on physical hardware, on-premise. The issue was that as data grew, it became really difficult to add more hardware boxes and to manage queries optimally across the disparate hardware. Snowflake wanted to leverage the unlimited storage and computing power of the cloud to allow for infinitely scalable data warehouses. This was an absolute game-changer, as early customer Accordant Media described: “In the first five minutes, I was sold. Cloud-based. Storage separate from compute. Virtual warehouses that can go up and down. I said, ‘That’s what we want!’”

  2. Storage + Compute. Snowflake was launched in 2012 by Benoit Dageville (Oracle), Thierry Cruanes (Oracle), and Marcin Żukowski (Vectorwise). Mike Speiser and Sutter Hill Ventures provided the initial capital to fund the formation of the company. After numerous whiteboarding sessions, the technical founders decided to try something crazy: separating data storage from compute (processing power). This allowed Snowflake’s product to scale storage (i.e., add more boxes) and put tons of computing power behind very complex queries, independently of each other (a sketch of resizing a virtual warehouse follows this list). What would have been limited by Vertica hardware was now possible with Snowflake. At this point, the cloud had only been around for about five years, and unlike today, the main providers offered only a handful of services. The team took a huge risk to 1) bet on the long-term success of the public cloud providers and 2) try something that had never been successfully accomplished before. When they got it to work, it felt like magic. “One of the early customers was using a $20 million system to do behavioral analysis of online advertising results. Typically, one big analytics job would take about thirty days to complete. When they tried the same job on an early version of Snowflake’s data warehouse, it took just six minutes. After Mike learned about this, he said to himself: ‘Holy shit, we need to hire a lot of sales people. This product will sell itself.’” The idea was so crazy that not even Amazon (where Snowflake runs) thought to unbundle storage and compute when it built its cloud-native data warehouse, Redshift, in 2013. Funny enough, Amazon also sought to attract people away from Oracle, hence the name Redshift. It would take Amazon almost seven years to re-design its data warehouse to separate storage and compute with Redshift RA3, which launched in 2019. On top of these functional benefits, there is a massive gap between the cost of storage and the cost of compute, and separating the two made Snowflake a significantly more cost-competitive solution than traditional hardware systems.

  3. The Battle for Data Pipelines. A typical data pipeline (shown below) consists of pulling data from many sources, performing ETL/ELT (extract, transform, load - or extract, load, then transform), centralizing the data in a data warehouse or data lake, and connecting that data to visualization tools like Tableau or Looker (a toy end-to-end example follows this list). All parts of this data stack face intense competition. On the ETL/ELT side you have companies like Fivetran and Matillion, and on the data warehouse/data lake side you have Snowflake and Databricks. Fivetran focuses on the extract and load portions of ETL, providing a data integration tool that connects to all of your operational systems (Salesforce, Zendesk, Workday, etc.) and pulls them together in Snowflake for comprehensive analysis, leveraging dbt (data build tool) for transformations. Matillion is similar, except it connects to your systems, imports raw data into Snowflake, and then transforms it there (checking for NULLs, ensuring matching records, removing blanks) - so Matillion covers the load and transform steps of ETL. The data warehouse vs. data lake debate is a complex and highly technical discussion, but it mainly comes down to Databricks vs. Snowflake. Databricks is primarily a machine learning platform that allows you to run Apache Spark (an open-source analytics and ML engine) at scale. Databricks’s main product, Delta Lake, allows you to store all data types - structured and unstructured - for real-time and complex analytical processes. As Datagrom points out here, the platforms differ in three ways: data structure, data ownership, and use-case versatility. Snowflake requires structured or semi-structured data prior to running a query, while Databricks does not. Similarly, while Snowflake decouples data storage from compute, it does not decouple data ownership - Snowflake maintains all of your data - whereas you can run Databricks on top of any data source you have, whether on-premise or in the cloud. Lastly, Databricks acts more as a processing layer (able to work in code like Python as well as SQL), while Snowflake acts as a query and storage layer (mainly driven by SQL). Snowflake performs best for business intelligence querying, while Databricks performs best for data science and machine learning. Both platforms can be used by the same organization, and I expect both to be massive companies (Databricks recently raised at a $28B valuation!). All of these tools are blending together and competing with each other - Databricks just launched a new Lakehouse (data lake + data warehouse - I know, the name is hilarious), and Snowflake is leaning heavily into its data lake. We will see who wins!
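
To make the first theme’s transactional-versus-analytical distinction concrete, here is a toy warehouse-style query. It uses DuckDB, an in-process analytical database, purely as a stand-in for a warehouse; the table and columns are invented.

```python
import duckdb

# In-memory analytical database standing in for a warehouse.
con = duckdb.connect()

# Build a million-row fact table of fake orders.
con.execute("""
    CREATE TABLE orders AS
    SELECT (random() * 4)::INT AS region_id,
           random() * 100      AS amount
    FROM range(1000000)
""")

# A warehouse-style workload: scan everything, group, and aggregate -
# the access pattern warehouses optimize for, unlike OLTP row lookups.
print(con.execute("""
    SELECT region_id, COUNT(*) AS order_count, ROUND(SUM(amount)) AS revenue
    FROM orders
    GROUP BY region_id
    ORDER BY revenue DESC
""").fetchall())
```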
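
For the second theme, a minimal sketch of what the storage/compute split buys you in practice: compute (a “virtual warehouse”) can be created and resized on demand, independent of the data. This assumes the snowflake-connector-python package with placeholder credentials, and a hypothetical table name; the warehouse statements are standard Snowflake SQL.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from your account.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
)
cur = conn.cursor()

# Compute is provisioned independently of storage: spin up a small
# warehouse, resize it for a heavy query, then shrink it back down.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS analytics_wh
    WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
""")
cur.execute("ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'XLARGE'")
cur.execute("SELECT COUNT(*) FROM my_db.public.big_table")  # hypothetical table
cur.execute("ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'XSMALL'")
```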
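
And for the third theme, a toy end-to-end extract-load-transform flow. The API endpoint and table names are hypothetical, and an in-memory SQLite database stands in for the warehouse; real pipelines would use tools like Fivetran or Matillion against Snowflake.

```python
import json
import sqlite3
from urllib.request import urlopen

# --- Extract: pull raw records from an operational system (hypothetical API).
raw = json.load(urlopen("https://api.example.com/v1/tickets"))

# --- Load: land the raw data unmodified in the warehouse (SQLite stand-in).
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE raw_tickets (id INTEGER, status TEXT, opened_at TEXT)")
wh.executemany(
    "INSERT INTO raw_tickets VALUES (?, ?, ?)",
    [(r["id"], r.get("status"), r.get("opened_at")) for r in raw],
)

# --- Transform: clean inside the warehouse (the 'T' that tools like
# dbt or Matillion run as SQL), e.g. drop rows with missing fields.
wh.execute("""
    CREATE TABLE tickets AS
    SELECT id, status, DATE(opened_at) AS opened_on
    FROM raw_tickets
    WHERE status IS NOT NULL AND opened_at IS NOT NULL
""")
```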

An interesting data platform battle is brewing that will play out over the next 5-10 years: The Data Warehouse vs the Data Lakehouse, and the race to create the data cloud

Who's the biggest threat to @snowflake? I think it's @databricks, not AWS Redshift. https://t.co/R2b77XPXB7

— Jamin Ball (@jaminball) January 26, 2021

Business Themes

[Figures: lakehouse architecture; data platform architecture overview]
  1. Marketing Customers. This book, at its core, is a marketing document. Sure, it gives a nice story of how the company was built, the insights of its founding team, and some obstacles they overcame. But the majority of the book is just an “imagine what you could do with data” exploration across a variety of industries and use cases. That’s not good or bad, but it’s an interesting way of marketing, that’s for sure. It is annoying that they spent so little time on the technology and actual company building. Our May 2019 book, The Everything Store, about Jeff Bezos and Amazon, was perfect because it covered all of the decision-making and challenging moments of building a long-term company. This book just talks about customer and partner use cases over and over. Slootman’s section is only about 20 pages, and five of them cover case studies from Square, Walmart, Capital One, Fair, and Blackboard. I suspect the thinness may be due to the controversial ousting of long-time CEO Bob Muglia in favor of Frank Slootman, co-author of this book. As this Forbes article noted: “Just one problem: No one told Muglia until the day the company announced the coup. Speaking publicly about his departure for the first time, Muglia tells Forbes that it took him months to get over the shock.” One day we will hear the actual unfiltered story of Snowflake, and it will make for an interesting comparison to this book.

  2. Timing & Building. We often forget how important timing is in startups. Being the right investor or company at the right time can do a lot to drive unbelievable returns. Consider Don Valentine at Sequoia in the early 1970s. We know that venture capital fund performance persists, in part due to the incredible branding firms like Sequoia have built up over years and years (obviously reinforced by top-notch talent like Mike Moritz and Doug Leone). Don was a great investor and took significant risks on unproven individuals like Steve Jobs (Apple), Nolan Bushnell (Atari), and Trip Hawkins (EA). But he also had unfettered access to the birth of an entirely new ecosystem and knowledge of how that ecosystem would change business, built up from his years at Fairchild Semiconductor. Don was a unique person who capitalized on that incredible knowledge base, veritably creating the VC industry; Sequoia is a top firm because he was in the right place at the right time with the right knowledge. Now let’s cover some companies that weren’t: Cloudera, Hortonworks, and MapR. In 2005, Yahoo engineers Doug Cutting and Mike Cafarella, inspired by the Google File System paper, created Hadoop, a distributed file system for storing and accessing data like never before. Hadoop spawned companies like Cloudera, Hortonworks, and MapR, built to commercialize the open-source project. All of them came out of the gate fast with big funding: Cloudera raised $1B at a $4B valuation prior to its 2017 IPO, Hortonworks raised $260M at a $1B valuation prior to its 2014 IPO, and MapR raised $300M before it was acquired by HPE in 2019. The companies all had one problem, however: they were on-premise, built before the cloud gained traction, which meant running their software required significant internal expertise and resources. In 2018, Cloudera and Hortonworks merged (at a $5B valuation) because competitive pressure from the cloud was eroding both businesses, and MapR was quietly acquired for less than it raised. Today Cloudera trades at a $5B valuation, meaning no shareholder return since the merger, and the business has only recently become slightly profitable at its current low growth rate. This cautionary case study shows how important timing is and how difficult it is to build a lasting company in the data infrastructure world. As the new analytics stack is built with Fivetran, Matillion, dbt, Snowflake, and Databricks, it will be interesting to see which companies exist 10 years from now. It’s probable that some new technology will come along and hurt every company in the stack, but for now the coast is clear - the scariest time for any of these companies.

  3. Burn Baby Burn. Snowflake burns A LOT of money. In the nine months ended October 31, 2020, Snowflake burned $343M, including $169M in its third quarter alone. Why would Snowflake burn so much money? Because it is growing efficiently! What does efficient growth mean? As we discussed with the last Frank Slootman book, sales and marketing efficiency is a key hallmark for understanding the quality of a company’s growth. According to its filings, Snowflake added ~$230M of revenue and spent $325M on sales and marketing. That is actually not terribly efficient - it suggests a dollar invested in sales and marketing yielded about $0.70 of incremental revenue (the arithmetic is sketched below). While you would like this number to be closer to 1x (i.e., $1 in S&M yields $1 in revenue - hence a repeatable go-to-market motion), it is not terrible. ServiceNow (Slootman’s old company) actually operates less efficiently: for every dollar it invests in sales and marketing, it generates only $0.55 of subscription revenue. CrowdStrike, on the other hand, runs a partner-driven go-to-market, which lets it generate more while spending less - $0.90 for every dollar invested in sales and marketing over the last nine months. However, there is a key thing that distinguishes the data warehouse from these other products, and Ben Thompson at Stratechery nails it here: “Think about this in the context of Snowflake’s business: the entire concept of a data warehouse is that it contains nearly all of a company’s data, which (1) it has to be sold to the highest levels of the company, because you will only get the full benefit if everyone in the company is contributing their data and (2) once the data is in the data warehouse it will be exceptionally difficult and expensive to move it somewhere else. Both of these suggest that Snowflake should spend more on sales and marketing, not less. Selling to the executive suite is inherently more expensive than a bottoms-up approach. Data warehouses have inherently large lifetime values given the fact that the data, once imported, isn’t going anywhere.” I hope Snowflake burns more money in the future and builds a sustainable long-term business.
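
The sales-efficiency arithmetic above reduces to one ratio; a minimal sketch using the figures quoted in this theme:

```python
def sm_efficiency(incremental_revenue_m: float, sm_spend_m: float) -> float:
    """Incremental revenue generated per dollar of sales & marketing spend."""
    return incremental_revenue_m / sm_spend_m

# Figures quoted above (nine months ended October 31, 2020).
print(f"Snowflake:   ${sm_efficiency(230, 325):.2f} per S&M dollar")  # ~$0.71
print(f"ServiceNow:  $0.55 per S&M dollar (as cited)")
print(f"CrowdStrike: $0.90 per S&M dollar (as cited)")
```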

Dig Deeper

  • Early YouTube Videos Describing Snowflake’s Architecture and Re-inventing the Data Warehouse

  • NCR’s spinoff of Teradata in 2007

  • Fraser Harris of Fivetran and Tristan Handy of dbt speak at the Modern Data Stack Conference

  • Don Valentine, Sequoia Capital: "Target Big Markets" - A discussion at Stanford

  • The Mike Speiser Incubation Playbook (an essay by Kevin Kwok)

tags: Snowflake, Data Warehouse, Oracle, Vertica, Netezza, IBM, Databricks, Apache Spark, Open Source, Fivetran, Matillion, dbt, Data Lake, Sequoia, ServiceNow, Crowdstrike, Cloudera, Hortonworks, MapR, BigQuery, Frank Slootman, Teradata, Xerox, Informix, NCR, AT&T, Benoit Dageville, Mike Speiser, Sutter Hill Ventures, Redshift, Amazon, ETL, Hadoop, SQL
categories: Non-Fiction
 

October 2019 - The Design of Everyday Things by Don Norman

Psychologist Don Norman takes us on an exploratory journey through the basics of functional design. As the consumerization of software grows, this book’s key principles will become increasingly important.

Tech Themes

  1. Discoverability and Understanding. Discoverability and understanding are two of the most important principles in design. Discoverability answers the question: “Is it possible to figure out what actions are possible and where and how to perform them?” Discoverability is absolutely crucial for first-time application users because poor discovery of actions leads to a low likelihood of repeat use. On discoverability, Scott Berkun notes that designers should prioritize what can be discovered easily: “Things that most people do, most often, should be prioritized first. Things that some people do, somewhat often, should come second. Things that few people do, infrequently, should come last.” Understanding answers the questions: “What does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?” We have all seen and used applications where features and complications dominate the settings and layout of the app. Understanding is simply about allowing the user to make sense of what is going on in the application. Together, discoverability and understanding lay the groundwork for successful task completion before a user is familiar with an application.

  2. Affordances, Signifiers and Mappings. Affordances are the set of actions that are possible with an object; signifiers communicate the correct action that should take place. If we think about a door, depending on the design, possible affordances could be push, slide, pull, or twist the knob. A signifier points to the correct action, or the action the designer would like you to perform; in the context of a door, a signifier might be a metal plate that makes it obvious the door must be pushed. Mappings provide straightforward correspondence between two sets of objects. For example, when setting the brightness on an iPhone, swiping up increases brightness and swiping down decreases it, as a new user would expect. Design issues occur when there is a mismatch between affordances, signifiers, and mappings. Doors again provide a great example of poor coordination: everyone has encountered a door with a handle and a sign that says “push” over it. This is normally followed by an uncomfortable pushing and pulling motion to discover the actions possible with the door. Why is there a handle if I am supposed to push? Good design and alignment between affordances, signifiers, and mappings makes life easier for everyone.

  3. The Seven Stages of Action. Norman lays out the psychology underpinning user decisions in seven stages: goal, plan, specify, perform, perceive, interpret, compare. The first three (goal, plan, specify) represent the clarification of an action to be taken on the world. Once the action is performed, the final three steps (perceive, interpret, compare) are about making sense of the new state of the world. The seven stages of action generalize the typical user’s interactions with the world. With these stages in mind, designers can understand potential breakdowns in discoverability, understanding, affordances, signifiers, and mappings. As users perform actions within applications, understanding each part of the customer journey allows designers to prioritize feature development and discoverability.

Business Themes

[Figure: Norman’s seven stages of action, redrawn from Norman (2001)]
  1. The best product does not always win, but... If the best product always won, large entrenched incumbents across the software ecosystem like IBM, Microsoft, Google, SAP, and Oracle would be much smaller companies. Why are there so many large behemoths that won’t fall? Each has made deliberate design decisions to reduce customer churn. While most of the large enterprise software providers suffer from feature creep, product and deployment complexity can itself be a deterrent to churn: enterprise CIOs do not want to spend budget to re-platform from AWS to Azure unless there has been a major incident or continued frustration with ease of use. Interestingly, as we’ve discussed, the transition from license-maintenance software to SaaS, as well as the consumerization of the enterprise, is changing the necessity of good design and user experience. Look at Oracle, for example: the business has made several acquisitions of applications built on Oracle databases, but the poor user experience and complexity of those applications is starting to push Oracle out of businesses.

  2. Shipping products on time and on budget. “The day a product development process starts, it is behind schedule and above budget.” The product design process is often long and complex because a wide array of disciplines is involved. Each discipline thinks it is the most important part of the process and may have different reasons for including a particular feature, which may conflict with good design. To alleviate some of that complexity, Norman suggests hiring design researchers who sit apart from the product development effort. These researchers study how users work in the field and generate additional use cases and designs all the time, so that when a development process kicks off, target features and functionality have already been suggested.

  3. Why should business leaders care about good design? We have already discussed how product design can act as a deterrent to churn. If processes and applications become integral to company function, there is a low chance of churn unless there is continued frustration with ease of use. Measuring product-market fit is difficult, but from a metrics perspective, companies can look at gross churn ($ or customer count lost during the period / beginning ARR or beginning customers - a quick sketch follows this list) or NPS to judge how well their product is being received. Good design is a direct contributor to improved NPS and better retention, and when you complement good design with several hooks into the customer, churn falls.
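
The gross churn definition above is simple division; a small sketch with invented example numbers:

```python
def gross_dollar_churn(churned_arr_m: float, beginning_arr_m: float) -> float:
    """Share of beginning ARR lost to churn over the period."""
    return churned_arr_m / beginning_arr_m

# Hypothetical example: $2M of ARR churned from a $50M base.
print(f"Gross dollar churn: {gross_dollar_churn(2, 50):.1%}")  # 4.0%
```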

Dig Deeper

  • UX Fundamentals from General Assembly

  • Why game design is crucial for preventing churn

  • Figma and InVision - the latest product development tools

  • Examples of bad user experience design

  • Introduction to Software Usage Analytics

tags: Internet, UX, UI, Design, Apple, App Store, AWS, Azure, Amazon, Microsoft, Oracle
categories: Non-Fiction
 

August 2019 - How Google Works by Eric Schmidt and Jonathan Rosenberg

While at times it reads as a piece of Google propaganda, this book offers insight into the management techniques that Larry, Sergey, and Eric employed to grow the company to massive scale. It’s hard to read this book and believe that all of these practices were actually implemented - it reads like a “how to build a utopian work culture” manual - but some of the principles are interesting, and more importantly, it gives us insight into what Google values in its products and operations.

Tech Themes

  1. Smart Creatives. Perhaps the most important emphasis in the book is placed on recruiting and hiring what Eric Schmidt and Jonathan Rosenberg have termed “smart creatives” - “people who combine technical & business knowledge, creativity and always-learning attitude.” While these sound like the desired platitudes of every Silicon Valley employee, they give a window into what Google finds important in its people. For example, unlike Amazon, which has both business product managers and technical product managers, Google prefers its PMs to be business-focused and highly technical at once. Smart creatives are mentioned hundreds of times in the book and continually underpin the success of new product launches. The book almost harps on the idea too much, to the point where it feels like Eric Schmidt was trying to convince all Googlers that they were truly unique.

  2. Meetings, Q&A, Data and Information Management. Google is one of the many Silicon Valley companies that host company-wide all-hands Q&A sessions on Fridays, where anyone can ask a question of Google’s leadership. Information transparency is critically important to Google, and the company tries to make data accessible throughout the organization at all times. This trickles into other aspects of Google’s management philosophy, including meetings and information management. At Google, meetings have a single owner, and while laptops largely remain closed, it’s the owner’s job to present the relevant data and derive the correct insights for the team. To that end, Google makes its information transparently available for all to access - a process designed to avoid information asymmetry at management levels. One key issue faced by poor management teams is that only the rosiest information reaches the top; Amazon counters this with incredibly blunt and aggressive communication, while Google maintains an intense focus on data and results to direct product strategy, so much so that it even studies its own teams’ productivity using internal data. Google’s laser focus on data makes sense given that its main advertising products harvest the world’s internet user data, so understanding how to leverage data is always a priority at Google.

  3. 80/20 Time. As part of Google’s product innovation strategy, employees can spend 20% of their work time on creative projects separate from their current role. While the idea sounds like an awesome way to keep employees interested and motivated, in practice it’s much more structured: ideas have to be approved by managers, and they are only allowed if they can directly impact Google’s business. Some great innovations were spawned by this policy, including Gmail and Google Maps, but Google employees have joked that it should be called “120% time” rather than 80%.

Business Themes

  1. Google’s Cloud Strategy. “You should spend 80% of your time on 80% of your revenue.” This quote speaks volumes about Google’s business strategy. Google is clearly the leader in search and search advertising. Not only is it the default search engine preferred by most users, it also owns the browser that directs searches to Google and the most-used mobile operating system. It has certainly created a dominant position in the market, and it has even done illegal things to maintain that advantage. Google also maintains and mines your data, and as Stratechery has pointed out, it is not hiding it anywhere. But what happens when the next wave of computing comes and you are so focused on your core business that you end up light years behind competition from Amazon (Web Services) and Microsoft (Azure)? That’s where Google finds itself today, and recent outages and issues haven’t helped. So what is Google’s “cloud strategy”? The answer is lower-priced, open-source alternatives. Google famously developed and open-sourced Kubernetes, the container orchestration platform, which has become an increasingly important technology as developers opt for lightweight alternatives to traditional virtual machines. It has followed this with a “we are going to open source everything” mentality that is also being employed, a bit more defensively, at Microsoft. Google seeks to be an open-source layer, either through Kubernetes (which runs in Azure and AWS) or through other open-source platforms (Anthos), and to capture just some of your company’s low-churn cloud spend. Its issues are scale and support. With its knowledge of data centers and parallel computing, cloud capabilities seemed like an obvious place for Google to win, but it fumbled on building a great product because it was so focused on protecting its core business. Google is in a catch-up position, and the new CEO of Google Cloud, Thomas Kurian (formerly of Oracle), isn’t afraid to make acquisitions to fill missing product capabilities, which is why Google bought Looker earlier this year. It makes sense that a company as focused on data as Google would want a cloud-focused data analysis tool. Now it is betting on M&A and a heavily open-sourced, multi-public-cloud future as the only way it can win.

  2. “Objective” Key Results. As mentioned previously, Google combats potential information asymmetries by empowering individuals throughout the organization with data. This extends to the favorite metric of famed venture capitalist John Doerr (who invested in both Google and Amazon) – OKRs, or Objectives and Key Results. Each Googler has a specific set of OKRs that they are responsible for maintaining on a quarterly basis. Every person’s OKRs are readily available for anyone in the company to see, i.e., full transparency. OKRs are public, measurable, and ambitious. This keeps engineers focused and accountable, as long as the OKRs are set correctly and actually measure outcomes. They fit perfectly with Google’s focus on mining and monitoring data at all times: its products and its employees need to be data driven.
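To make the Kubernetes reference in the first item above concrete, here is a minimal sketch of declaring a containerized workload with the official Kubernetes Python client. The deployment name, labels, and nginx image are illustrative placeholders, not anything from the book:

```python
# A minimal sketch: declaring a small containerized workload on Kubernetes
# via the official Python client (pip install kubernetes). All names and
# the image below are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads cluster credentials from ~/.kube/config

container = client.V1Container(
    name="web",
    image="nginx:1.21",
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running and restarts failures
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same declaration runs largely unchanged on GKE, AWS (EKS), or Azure (AKS), which is exactly the portability Google’s open source layer strategy leans on.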

Dig Deeper

  • Recent reports highlight numerous cultural issues at Google that are not addressed in the book

  • Google Cloud was plagued by internal clashes and missed acquisitions

  • PayPal mafia veteran Keith Rabois won’t fund Google PMs as founders

  • List of Google’s biggest product failures over time

  • Stadia: Google’s game streaming service

tags: Google, Cloud Computing, Scaling, Management, Internet, China, John Doerr, OKRs, Oracle, GCP, Google Cloud, Android, Amazon
categories: Non-Fiction
 

February 2019 - Cloud: Seven Clear Business Models by Timothy Chou

While this book is relatively old by internet standards, it illuminates the early transition to SaaS (Software as a Service) from traditional software license and maintenance models. Timothy Chou, currently Head of IoT at the Alchemist Accelerator, formerly Head of On Demand Applications at Oracle, and a lecturer at Stanford, details seven different business models for selling software and the pros and cons of each.

Tech Themes

  1. The rise of SaaS. Software-as-a-Service (SaaS) is an application that can be accessed through a web browser and is managed and hosted by a third party (likely a public cloud - Google, Microsoft, or AWS). Let’s flash back to the ’90s, a time when software was sold in shrink-wrapped boxes as perpetual licenses: you owned whatever version of the software you purchased, in perpetuity. Most of the time you would also pay a maintenance fee (normally 20% of the overall license value per year) to receive basic upkeep services and minor bug fixes. However, when the new version 2.0 came out, you would have to pay another big license fee, re-install the software, and go through the hassle of upgrading all existing systems. On the back of increasing internet adoption, SaaS allowed companies to deliver a standard product, over the internet, typically at a lower price point. This meant smaller companies like Salesforce (at the time) could compete with giants like Siebel Systems (acquired by Oracle for $5.85B in 2005), because buyers could now purchase the software in an on-demand, per-user fashion, without a trip to the store or a huge upfront commitment (a rough cost comparison appears after this list).

  2. How the cloud empowers SaaS. As an extension, standardization of the product means you can precisely define the necessary computing resources - thereby also standardizing your costs. At the same time that SaaS was gaining momentum, the three mega public cloud players emerged, starting with Amazon (in 2006), then Google, and eventually Microsoft. This allowed companies to host software in the cloud rather than on their own servers (infrastructure that was hard to manage internally). So instead of racking (pun intended) up costs with an internal infrastructure team managing complex hardware, you could offload your workloads to the cloud: Infrastructure as a Service (IaaS) was born. Because SaaS is delivered over the internet at lower prices, the cloud became an integral part of scaling SaaS businesses. As the number of users on your SaaS platform grew, you simply purchased more computing capacity in the cloud to handle those additional users. Instead of spending large amounts of money on complex infrastructure costs and decisions, a company could now focus entirely on its product and go-to-market strategy, enabling it to reach scale much more quickly.

  3. The titans of enterprise software. Software has absolutely changed in the last 20 years and will likely continue to evolve as more specialized products and services become available. That being said, the perennial software acquirers will continue to be perennial software acquirers. At the beginning of his book, Chou highlights fifteen companies that had gone public since 1999: Concur (IPO: 1999, acquired by SAP for $8.3B in 2014), Webex (IPO: 2002, acquired by Cisco for $3.2B in 2007), Kintera (IPO: 2003, acquired by Blackbaud for $46M in 2008), Salesforce.com (IPO: 2004), RightNow Technologies (IPO: 2004, acquired by Oracle for $1.5B in 2011), WebSideStory (IPO: 2004, acquired by Omniture in 2008 for $394M), Kenexa (IPO: 2005, acquired by IBM for $1.3B in 2012), Taleo (IPO: 2005, acquired by Oracle for $1.9B in 2012), DealerTrack (IPO: 2005, acquired by Cox Automotive in 2015 for $4.0B), Vocus (IPO: 2005, acquired by GTCR in 2014 for $446M), Omniture (IPO: 2006, acquired by Adobe for $1.8B in 2009), Constant Contact (IPO: 2007, acquired by Endurance International for $1B in 2015), SuccessFactors (IPO: 2007, acquired by SAP for $3.4B in 2011), NetSuite (IPO: 2007, acquired by Oracle for $9.3B in 2016) and OpenTable (IPO: 2009, acquired by Priceline for $2.6B in 2014). Oracle, IBM, Cisco and SAP have been some of the most active serial acquirers in tech history, and this trend is only continuing. Interestingly enough, Salesforce.com is now in a similar position. What this shows is that if you come to dominate a horizontal application - CRM (Salesforce), ERP (SAP/Oracle), or infrastructure (Google/Amazon/Microsoft) - you can build a massive moat that lets you become the serial acquirer in that space. You then have first and highest dibs on every target in your industry because you can underwrite an acquisition to the highest strategic multiple. Look for these acquirers to continue making big deals whenever doing so can further lock in their dominant market positions, especially when their core businesses slow.
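To put rough numbers on the license-versus-SaaS shift described in the first item above, here is a small back-of-the-envelope comparison; every figure is an illustrative assumption, not data from the book:

```python
# Back-of-the-envelope comparison of perpetual-license vs. SaaS spend.
# All dollar figures below are illustrative assumptions, not data from the book.

LICENSE_FEE = 500_000        # one-time perpetual license
MAINTENANCE_RATE = 0.20      # annual support, ~20% of the license value
SAAS_PRICE_PER_USER = 100    # subscription price per user, per month
USERS = 100

def license_cost(years: int) -> int:
    """Upfront license plus 20% maintenance paid every year."""
    return int(LICENSE_FEE * (1 + MAINTENANCE_RATE * years))

def saas_cost(years: int) -> int:
    """Pay-as-you-go subscription with no upfront fee."""
    return SAAS_PRICE_PER_USER * USERS * 12 * years

for years in (1, 3, 5):
    print(f"Year {years}: license ${license_cost(years):,} vs. SaaS ${saas_cost(years):,}")
# Year 1: license $600,000 vs. SaaS $120,000
# Year 3: license $800,000 vs. SaaS $360,000
# Year 5: license $1,000,000 vs. SaaS $600,000
```

The much lower entry price is what let a small Salesforce compete with a Siebel, even though the gap narrows over a long enough horizon.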

Business Themes

Here we see the “Cash Gap” in the subscription model - customer acquisition expenses are incurred upfront but are recouped over time.

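The chart itself doesn’t carry over here, so below is a minimal sketch of the same dynamic; the CAC, monthly margin, and resulting breakeven month are all invented for illustration:

```python
# Illustrative "Cash Gap": a customer costs money upfront (CAC), and that
# outlay is only recouped gradually through monthly subscription margin.
# All numbers are invented for illustration.

CAC = 6_000              # upfront sales & marketing cost to win the customer
MONTHLY_MARGIN = 500     # gross margin the customer contributes each month

cumulative = -CAC        # the "gap": cumulative cash flow starts deeply negative
for month in range(1, 25):
    cumulative += MONTHLY_MARGIN
    if cumulative >= 0:
        print(f"Cash-flow breakeven in month {month}")  # month 12 here
        break
```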

  1. The misaligned incentives of the traditional license/maintenance model. Software was traditionally sold as perpetual licenses, where a user could access that version of the software forever. Because users were paying to use something forever, the typical price point for any given enterprise software license was very high. This meant that large software upgrades were decided at the most senior levels of management and were large investments from a dollars and time perspective. On top of that initial license came the 20% support costs, paid annually to receive patch updates. At the software vendor, this structure created interesting incentives. First, product updates were usually focused on break-fix rather than new, “game-changing” upgrades, because supporting multiple separate versions of the software (especially pre-IaaS) was incredibly costly. This slowed the pace of innovation at those large software providers (turning them into serial acquirers). Second, the sales team became focused on selling customers new releases directly after they signed the initial deal. This happened because once you made that initial purchase, you owned that version forever; what better way to get more money out of you than to introduce a new feature and re-sell you the whole system. Salespeople were also incredibly focused on closing deals in a given quarter because any single deal could make or break not only their quarterly sales quota but also the company’s revenue targets. If one big deal slipped from Q4 to Q1 of the following year, a company might have to report lower growth numbers to the stock market, driving the stock price down. Third, once you made the initial purchase, the software vendor would direct all problems and product inquiries to customer support teams that were typically overburdened by requests. Additionally, the maintenance/support costs were built into the initial contract, so you might end up contractually obligated to pay for support on a product you don’t like and cannot change. The vendor’s view was: “You’ve already purchased the software, so why should I waste time ensuring you have a great experience with it - unless you are looking to buy the next version, I’m going to spend my time selling to new leads.” These incentives limited product changes and upgrades, focused salespeople completely on new leads, and hurt the customer experience, all to the benefit of the vendor over the user.

  2. What are CAC and LTV? CAC, or customer acquisition cost, is key to understand for any type of software business. As distinguished SaaS investor (and early HubSpot backer) David Skok notes, it is typically measured by taking “the entire cost of sales and marketing over a given period, including salaries and other headcount related expenses,” and dividing it by the number of customers acquired in that period. Once the software sales model shifted from license/maintenance to SaaS, companies started to receive monthly recurring payments instead of hard-to-predict, big new license sales. Enterprise software contracts are typically year-long, which means that once a customer signs, the company knows exactly how much revenue it should plan to receive over the coming year. Furthermore, with recurring subscriptions, as long as the customer is happy, the company can be reasonably assured that the customer will renew. This idea led to the concept of the lifetime value of a customer, or LTV: the total revenue a customer will pay the company before it churns (cancels the subscription). The logic follows that if you can acquire a customer for less (CAC) than the revenue that customer generates over its lifetime (LTV), then over time you will make money on that individual customer. Typically, investors view a 3:1 LTV-to-CAC ratio as the threshold for a healthy SaaS company; a minimal worked example follows.
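Here is that arithmetic as a short sketch. The simplified LTV formula (monthly revenue per customer times gross margin, divided by monthly churn) is a common convention rather than anything from the book, and every input is invented for illustration:

```python
# CAC: total sales & marketing spend in a period / customers acquired in it.
# LTV (simplified): monthly revenue per customer x gross margin / monthly churn,
# i.e., the margin you expect to collect before the average customer cancels.
# All inputs are invented for illustration.

sales_marketing_spend = 1_200_000    # spend over the period ($)
new_customers = 200                  # customers acquired in the same period
monthly_revenue_per_customer = 1_000
gross_margin = 0.80
monthly_churn = 0.02                 # 2% of customers cancel each month

cac = sales_marketing_spend / new_customers
ltv = monthly_revenue_per_customer * gross_margin / monthly_churn

print(f"CAC = ${cac:,.0f}")            # $6,000
print(f"LTV = ${ltv:,.0f}")            # $40,000
print(f"LTV:CAC = {ltv / cac:.1f}:1")  # ~6.7:1, comfortably above the 3:1 benchmark
```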

Dig Deeper

  • Bill Gates’ 1995 memo on the state of early internet competition: The Internet Tidal Wave

  • Andy Jassy on how Amazon Web Services got started

  • Why CAC can be a startup killer

  • How CAC is different for different types of software

  • Basic SaaS Economics by David Skok

tags: Cloud Computing, SaaS, License, Maintenance, Business Models, software, Salesforce, SAP, Oracle, Cisco, IaaS, batch2
categories: Non-Fiction
 
