Tech Book of the Month

May 2023 - Constellation Software Letters by Mark Leonard

We cover Canada’s biggest and quietest software company and its brilliant leader, Mark Leonard.

Tech Themes

  1. Critics and Critiques. For a long time, Constellation heard the same critiques: roll-ups never work, the businesses you are buying are old, the markets you are buying in are small, and the license/maintenance delivery method is phasing out. All of these are valid concerns. Constellation is a roll-up of many software businesses. Roll-ups - companies that grow primarily by acquiring other businesses - do have a tendency to blow up. The most frequent cause of a blowup is leverage. Companies finance acquisitions with debt; eventually a couple of poor acquisition decisions leave the debt load too big to service, and they go bankrupt. A recent example is Thrasio, a roll-up of third-party Amazon sellers. RetailTouchPoints lays out the simple strategy: “Back in 2021, firms like Thrasio were able to buy these Amazon-based businesses for around 4X to 6X EBITDA and then turn that into a 15X to 25X valuation on the combined business.” However, demand for many of these products waned in the post-pandemic era, and Thrasio had too much debt to service at the lower level of sales. Bankruptcy isn’t all bad - several companies have emerged from bankruptcy with restructured debt, in a better position than before. To avoid the leverage problem, Constellation has never taken on meaningful (>1-2x EBITDA) leverage. This may change in the coming years, but for now it remains accurate. Concerns around market size and delivery method (SaaS vs. license/maintenance) are also valid. Constellation has software businesses in very niche markets, like boat maintenance software, that are inherently limited in size. It will never have a $1B-revenue boat maintenance software business; the market just isn’t that big. However, the lack of enthusiasm for a small niche market tends to produce better business characteristics: fewer competitors, a better chance of becoming the de facto technology, and highly specialized software that is core to the customer’s business. Constellation’s insight to combine thousands of these niche markets was brilliant. Lastly, delivery methods have changed. Most customers now prefer to buy cloud software, where they can access technology through a browser on any device and benefit from continuous upgrades. Furthermore, SaaS businesses are subscriptions, whereas in license/maintenance businesses you pay a significant sum for the license up-front and then a correspondingly smaller sum for maintenance. SaaS subscriptions tend to cost more over the long term and produce smoother, less volatile revenue, but can be less profitable because of the need to continuously improve the product and provide the service 24/7 (a toy cost comparison is sketched below). Interestingly, Constellation continued to avoid SaaS even after it became the dominant method of buying software. From the 2014 letter: “The SaaS’y businesses also have higher organic growth rates in recurring revenues than do our traditional businesses. Unfortunately, our SaaS’y businesses have higher average attrition, lower profitability and require a far higher percentage of new name client acquisition per annum to maintain their revenues. We continue to buy and invest in SaaS businesses and products. We'll either learn to run them better, or they will prove to be less financially attractive than our traditional businesses - I expect the former, but suspect that the latter will also prove to be true.” While 2014 was certainly early in the cloud transformation, it’s not surprising that an organization built around the financial characteristics of license/maintenance software struggled to make this transition. They are finally embarking on that journey, led there by their customers, and it’s causing license revenue to decline - it has fallen in each of the last six quarters. The critiques are valid, but Constellation’s assiduousness allowed it to side-step, and even benefit from, them as it scaled.
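
Here is a toy comparison of the two models’ cumulative cost to the customer, using entirely hypothetical prices (a $100K perpetual license with 20% annual maintenance vs. a $40K/year subscription):

```python
# Hypothetical pricing for the same product under the two delivery models.
license_fee = 100_000        # perpetual license, paid up-front
maintenance_rate = 0.20      # annual maintenance as a fraction of the license
saas_subscription = 40_000   # SaaS subscription per year

for year in range(1, 11):
    license_total = license_fee + maintenance_rate * license_fee * year
    saas_total = saas_subscription * year
    print(f"Year {year:>2}: license/maintenance ${license_total:>8,.0f} | SaaS ${saas_total:>8,.0f}")

# Under these assumptions the SaaS customer catches up around year 5 and
# pays meaningfully more by year 10 - while the vendor's revenue arrives
# as a smooth subscription instead of a lumpy up-front license spike.
```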

  2. Initiatives, Investing for Organic Growth, and Measurement. Although Leonard believes that organic growth is an important measure of a software company’s success, he lays out in the Q1’07 letter the challenges of Constellation’s internal organic growth projects, dubbed Initiatives. “In 2003, we instituted a program to forecast and track many of the larger Initiatives that were embedded in our Core businesses (we define Initiatives as significant Research & Development and Sales and Marketing projects). Our Operating Groups responded by increasing the amount of investment that they categorized as Initiatives (e.g. a 3 fold increase in 2005, and almost another 50% increase during 2006). Initially, the associated Organic Revenue growth was strong. Several of the Initiatives became very successful. Others languished, and many of the worst Initiatives were terminated before they consumed significant amounts of capital.” The last sentence is the hardest one to stomach. Terminating initiatives before they have consumed lots of capital is the smart thing to do. It is the rational thing to do. However, I believe this is at the heart of why Constellation has struggled with organic growth over time. Now, I’ll be the first to admit that Constellation’s strategy has been incredible, and my criticism in no way takes that away from them. Frankly, they won’t care what I say. But, as a very astute colleague pointed out to me, this posture of measuring all internal R&D and S&M initiatives is almost self-fulfilling. At the time, Leonard wasn’t concerned about the potential for under-investment and weak organic growth. He even said as much: “I’m not yet worried about our declining investment in Initiatives because I believe that it will be self-correcting. As we make fewer investments in new Initiatives, I’m confident that our remaining Initiatives will be the pick of the litter, and that they are likely to generate better returns. That will, in turn, encourage the Operating Groups to increase their investment in Initiatives. This cycle will take a while to play out, so I do not expect to see increased new Initiative investment for several quarters or even years.” By 2013, he had changed his tune: “Organic growth is, to my mind, the toughest management challenge in a software company, but potentially the most rewarding. The feedback cycle is very long, so experience and wisdom accrete at painfully slow rates. We tracked their progress every quarter, and pretty much every quarter the forecast IRR's eroded. Even the best Initiatives took more time and more investment than anticipated. As the data came in, two things happened at the business unit level: we started doing a better job of managing Initiatives, and our RDSM spending decreased. Some of the adaptations made were obvious: we worked hard to keep the early burn-rate of Initiatives down until we had a proof of concept and market acceptance, sometimes even getting clients to pay for the early development; we triaged Initiatives earlier if our key assumptions proved wrong; and we created dedicated Initiative Champion positions so an Initiative was less likely to drag on with a low but perpetual burn rate under a part-time leader who didn’t feel ultimately responsible. But the most surprising adaptation was that the number of new Initiatives plummeted.
By the time we stopped centrally collecting Initiative IRR data in Q4 2010, our RDSM spending as a percent of Net Revenue had hit an all-time low.” So how could the most calculating, strategic software company of perhaps all time struggle to produce attractive organic growth prospects? I’d argue two things: 1) incentives and 2) rationality. First, on incentives: Operating Group managers are compensated on ROIC and net revenue growth. If you are a BU manager who could either invest in your own business or buy another company with declining organic growth that is priced appropriately (i.e. cheaply) and requires minimal capital outlay, you achieve both objectives by buying low growers or even decliners. It is similar to buying ads to fill the hole left by churned revenue: as long as you keep pressing the advertising button, you keep gathering customers, but when you stop, it is painful and growth stalls out. If I’m a BU manager buying mediocre software companies that achieve good ROIC, and I’m growing revenues because of my acquisitions, it just means I need to keep finding more acquisitions to hit my growth hurdles. Over time this becomes a challenge, but it may be multiple years before I have a bad acquisition-growth year. Clearly, the incentives are not aligned for organic growth. Connected to the first point, the “buy growth for low cash outlays” strategy is perfectly rational given the incentives. The key to its rationality is the known vs. the unknown. In buying a small, niche VMS business, far more is known about the range of outcomes. Compare this to an organic growth initiative and it is clear why, again, you choose the acquisition path. Organic growth investments are like venture capital: if sizeable, they can have an outsized impact on the business’s potential, but the returns are unknown. Simple probability shows that a 90% chance of a 20% ROIC plus a 10% chance of a 10% ROIC yields a 19% expected ROIC (sketched in code below). I’d argue, however, that large, complex organic initiatives have an almost inestimable return. Amazon Web Services, perhaps the greatest organic growth initiative ever produced, shows why. Here was a reasonably capital-intensive business outside the core of Amazon’s online retailing operations. Sure, you can claim that Amazon was already using AWS internally to run its operations, so the lift was not as large. But it is still far afield from bookselling. AWS as an investment could never happen inside of Constellation (besides it being horizontal software). What manager is going to tank their ROIC with a capital-intensive initiative for several years to realize an astronomical gain down the line? What manager is going to report back to Constellation HQ that they found a business with the potential for $85B in revenue and $20B in operating profit 15 years out? You may say, “Vertical markets are small; they can’t produce large outcomes.” Yet Constellation predates both Veeva, a $30B public company, and AppFolio, a $7.5B company. The crux of the problem is that it is impossible to measure in a spreadsheet the unknown and unknowable expected returns of the best organic growth initiatives. As Zeckhauser has discussed, the probabilities and associated gains/losses tend to be severely mispriced in these unknown and unknowable situations. Clayton Christensen identified this exact problem through his work on disruptive innovation.
He urged companies to focus on ideas, failure, and learning, noting that strategic and financial planning must be discovery-based rather than execution-based. Maybe there were great initiatives within Constellation that never got launched because incentives and rationality stopped them in their tracks. It’s not that you should burn the boats and put all your money into the hot new thing; it’s that product creation and organic growth are inherently risky ventures, and a certain amount of expected loss can be necessary to find the real money-makers.
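
A minimal sketch of the expected-value arithmetic above. The acquisition case uses the probabilities from the text; the organic-initiative distribution is invented to show how a similar expected value can hide a venture-style, skewed range of outcomes:

```python
def expected_roic(outcomes):
    """Expected ROIC given (probability, roic) pairs."""
    return sum(p * r for p, r in outcomes)

# The acquisition case from the text: a narrow, well-understood range.
acquisition = [(0.90, 0.20), (0.10, 0.10)]
print(f"Acquisition expected ROIC: {expected_roic(acquisition):.1%}")  # 19.0%

# A hypothetical organic initiative: mostly failures, one rare huge win.
# The point estimate looks pedestrian, but the realized outcomes are
# nothing alike - which is why spreadsheet expected values mislead here.
initiative = [(0.80, -0.10), (0.15, 0.20), (0.05, 3.00)]
print(f"Initiative expected ROIC:  {expected_roic(initiative):.1%}")   # 10.0%
```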

  3. Larger Deals. Leonard stopped writing annual letters, but broke the streak in 2021 with a short note outlining that the company would pursue larger deals at lower IRRs and look to develop a new circle of competence outside of VMS. I believe his words were chosen carefully to echo Warren Buffett’s discussion of the circle of competence and a quote from IBM founder Thomas Watson Sr.: “I’m no genius. I’m smart in spots - but I stay around those spots.” While I appreciate the idea behind it, I’m less inclined to stay within my circle of competence. I’m young, curious, and foolish, and I think it would be a waste to pigeonhole myself so early. After all, Warren had to learn about insurance, banking, beverages, etc., and he didn’t let not-knowing preclude him from studying. In justifying larger deals, Leonard cited Constellation’s scale and ability to invest more effectively than its current shareholders. He also laid out the company’s edge: “Most of our competitors maximise financial leverage and flip their acquisitions within 3-7 years. CSI appreciates the nuances of the VMS sector. We allow tremendous autonomy to our business unit managers. We are permanent and supportive stakeholders in the businesses that we control, even if their ultimate objective is to eventually be a publicly listed company. CSI’s unique philosophy will not appeal to all sellers and management teams, but we hope it will resonate with some.” Since then, Constellation has acquired Allscripts’ hospital business unit in March 2022 for $700M in cash; completed a spin-off of Lumine Group combined with the acquisition of WideOrbit, creating a publicly traded telecom and advertising software provider; and is rumored to be looking at purchasing a subsidiary of Black Knight, which may have to be divested for Black Knight’s own transaction with ICE. These larger deals no doubt come with more complexity, but one large benefit is that they sit within larger operating groups and are shielded during what may be difficult transition periods. That lets the businesses operate with a longer-term horizon and focus on providing value to end customers. As for deals outside of VMS, Mark Leonard commented during the 2022 earnings call: “I took a hard look at a thermal oil situation. I was looking at close to $1B investment, and it was tax advantaged. So it was a clever structure. It was a time when the sector could not get financing. And unfortunately, the oil prices ran away on me. So I was trying to be opportunistic in a sector that was incredibly beat up. So that is an example….So what are the characteristics there? Complexity. Where its a troubled situation with — circumstances and there’s a lot of complexity. I think we can compete better than the average investor, particularly when people are willing to take capital forever.” The remark on complexity reminded me of Baupost, the firm founded by legendary investor Seth Klarman, which famously bought claims on Lehman Brothers Europe following the 2008 bankruptcy. When you have hyper-rational individuals, complexity is their friend.

Business Themes

  1. Decentralized Operating Groups. It’s safe to say that Mark Leonard is a BIG believer in decentralized operating groups. Constellation believes in pushing as much decision-making authority as possible to the leaders of its various business units. The company operates six operating groups: Volaris, Harris, Topicus (now public), Jonas, Perseus, and Vela. Leonard mentioned the organizational structure in the context of organic growth: “When most of our current Operating Group Managers ran single BU’s, they had strong organic growth businesses. As those managers gave up their original BU management position to oversee a larger Group of BU’s (i.e. became Portfolio Managers), the organic growth of their original BU’s decreased and the profitability of those BU’s increased.” As an example of this structure, look at Vencora, a fintech portfolio within Volaris: it is run by a Portfolio Manager and is itself a collection of Business Units (BUs), each with its own leadership. Operating Group leaders and Portfolio Managers are incentivized on growth and ROIC. Furthermore, Constellation mandates that at least 25% of incentive compensation (75% for some executives) be used to purchase shares in the company on the open market, and these shares cannot be sold for three years. This incentive system accomplishes three goals: it keeps broad alignment toward the success of Constellation as a whole, it avoids stock dilution, and it creates a system in which employees continuously own more and more of the business. Acquisitions above $20M in revenue must be approved by the head office, which is constantly receiving cash from the subsidiaries and allocating it to the highest-value opportunities. At varying times, the company has instituted “keep your capital” programs, particularly for the Volaris and Vela operating groups. As Leonard points out in the 2015 letter: “One of the nice side effects of the “keep your capital” restriction, is that while it usually drives down ROIC, it generates higher growth, which is the other factor in the bonus formula. Acquisitions also tend to create an attractive increase in base salaries as the team ends up managing more people, capital, BUs, etc. Currently, a couple of our Operating Groups are generating very high returns without deploying much capital and we are getting to the point that we’ll ask them to keep their capital if they don’t close acceptable acquisitions or pursue acceptable Initiatives shortly.” Because bonuses are paid on ROIC, if an operating group manager sends a ton of cash back to corporate and doesn’t do many new acquisitions, its ROIC is very high and bonuses will be high. However, because Volaris and Vela are so large, it does not benefit Head Office to continuously receive these large dividend payments and then pay high bonuses; Head Office would sit on a mountain of cash without many easy opportunities to deploy it. Thus the “keep your capital” restriction tamps down bonuses (by tamping down ROIC) and forces the leaders of these businesses to search out productive ways to deploy capital, as the sketch below illustrates. As a result, when acquisitions are scarce, more internal growth initiatives are likely to be funded, increasing organic growth. It also pushes BUs and Portfolio Managers to seek out acquisitions to use up some of the capital. Overall, the organizational structure gives extreme authority to individuals and operates with strong incentives toward M&A and ROIC.
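
Here is the toy model referenced above (all figures hypothetical) of how retained, undeployed cash dilutes ROIC and therefore bonuses:

```python
def roic(net_income, invested_capital):
    """Return on invested capital, the bonus-formula input."""
    return net_income / invested_capital

ani = 30.0  # adjusted net income of an operating group, $M (hypothetical)

# Sweeping cash to Head Office keeps the denominator small...
print(f"ROIC, cash swept:   {roic(ani, invested_capital=100.0):.1%}")  # 30.0%

# ...while being forced to retain $50M of undeployed cash dilutes ROIC,
# pressuring managers to find acquisitions or Initiatives that earn a return.
print(f"ROIC, capital kept: {roic(ani, invested_capital=150.0):.1%}")  # 20.0%
```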

  2. Selling Constellation. We all know about the epic “what would have happened” deals. A few come to mind: Oracle buying TikTok US, Microsoft’s ~$45B bid for Yahoo, Yahoo acquiring Facebook, Facebook acquiring Snapchat, AT&T’s $39B bid for T-Mobile, JetBlue/Spirit, Ryanair/Aer Lingus. There are tons. Would you believe that Constellation was up for sale at one point? On April 4, 2011, the Constellation board announced that it was considering alternatives for the company. At the time, Constellation had $630M of revenue and $116M of Adj. EBITDA, and was growing revenue 44% year over year. Today, it has $8.4B of revenue and $1.16B of FCFA2S, growing revenue 27% year over year. At the time, Leonard lamented: “I’m proud of the company that our employees and shareholders have built, and will be more than a little sad if it is sold.” To me, this is a critically important non-event to investigate. It goes to show that any company can prematurely cap its compounding. Today, Constellation is perhaps the most revered software company, with the most beloved, mysterious genius leader. Imagine if Constellation had been bought by Oracle or another large software company. Where would Mark Leonard be today? Would we have the behemoth that exists now? After the process concluded with no sale, Leonard discussed the importance of managing one’s own stock price: “I used to maintain that if we concentrated on fundamentals, then our stock price would take care of itself. The events of the last year have forced me to re-think that contention. I'm coming around to the belief that if our stock price strays too far (either high or low) from intrinsic value, then the business may suffer: Too low, and we may end up with the barbarians at the gate; too high, and we may lose previously loyal shareholders and shareholder-employees to more attractive opportunities.” Many technology CEOs could learn from Leonard: preserve an optimistic tone when the company is struggling or the market is punishing it, and a pessimistic tone when the company is massively over-achieving, as during COVID.

  3. Metrics. Leonard loves thinking about and building custom metrics. As he stated in the Q4’2007 letter, “Our favorite single metric for measuring our corporate performance is the sum of ROIC and Organic Net Revenue Growth (“ROIC+OGr”).” However, he is constantly tinkering, looking for the best and most interesting measures. He generally focuses on three types of metrics: growth, profitability, and returns. For growth, his preferred measure is organic growth. He also believes net maintenance growth is correlated with the value of the business: “We believe that Net Maintenance Revenue is one of the best indicators of the intrinsic value of a software company and that the operating profitability of a low growth software business should correlate tightly to Net Maintenance Revenues.” I believe this correlation is driven by maintenance revenue’s high profitability and its association with high levels of EBITA (operating income + amortization of intangibles). For profitability, Leonard long preferred Adjusted Net Income (ANI) or EBITA: “One of the areas where generally accepted accounting principles (“GAAP”) do a poor job of reflecting economic reality, is with goodwill and intangibles accounting. As managers we are at least partly to blame in that we tend to ignore these “expenses”, focusing on EBITA or EBITDA or “Adjusted” Net Income (which excludes Amortisation). The implicit assumption when you ignore Amortisation, is that the economic life of the asset is perpetual. In many instances (for our business) that assumption is correct.” He floated the idea of using free cash flow per share, but it suffers from volatility driven by working capital movements and doesn’t adjust for minority interest payments. Adjusted Net Income handles both of those but doesn’t capture the actual cash coming into the business. In Q3’2019, Leonard adopted a new metric called Free Cash Flow Available to Shareholders (FCFA2S): “We calculate FCFA2S by taking net cash flow from operating activities per IFRS, subtracting the amounts that we spend on fixed assets and on servicing the capital we have sourced from other stakeholders (e.g. debt providers, lease providers, minority shareholders), and then adding interest and dividends earned on investments. The remaining FCFA2S is the uncommitted cashflow available to CSI's shareholders if we made no further acquisitions, nor repaid our other capital-providing stakeholders.” FCFA2S achieves a few happy mediums: 1) like ANI, it is net of the cost of servicing capital (interest, dividends, lease payments); 2) it captures changes in working capital, which ANI does not; 3) it reflects cash taxes rather than the current taxes deducted from pre-tax income (this gets at a much more confusing discussion of deferred tax assets and the difference between book and cash taxes); and 4) FCFA2S tends to track CFO more closely than ANI tracks reported net income. For returns, Leonard prefers ROIC (ANI / average invested capital). In the 2015 letter, he laid out the challenges of this metric. First, ROIC can approach infinity if a company grows while shrinking its working capital (negative working capital is common in software), effectively driving invested capital toward zero. Infinite ROIC is a problem because bonuses are paid on ROIC. He contrasts ROIC with IRR but notes IRR’s drawbacks: it indicates neither the holding period nor the size of the investment.
As is said at investing firms, “You can’t eat IRR.” In the 2017 letter, he discussed incremental return on incremental invested capital ((ANI1 - ANI0) / (IC1 - IC0)), but noted its volatility and its trouble handling share issuances and repurchases: issuances increase invested capital without an immediate increase in ANI. When discussing high performance conglomerates (HPCs), he uses EBITA Return (EBITA / average total capital), noting: “ROIC is the return on the shareholders’ investment and EBITA Return is the return on all capital. In the former, financial leverage plays a role. In the latter only the operating efficiency with which all net assets are used is reflected, irrespective of whether those assets are financed with debt or shareholders’ investment.” This mirrors P/E vs. EV/EBITDA multiples: P/E values the equity (i.e. the market capitalization), while EV/EBITDA values the entire business across both debt and equity holders. Mark Leonard is a man of metrics; we will keep watching to see what he comes up with next! In that spirit, I will try to offer a metric for fast-growing software companies, where ROIC is effectively meaningless because negative working capital dynamics can produce negative invested capital. Furthermore, faster-growing companies generally spend ahead of growth and lose money, so ANI, FCF, and EBITA all understate the business. If you believe the value of these businesses is closely related to revenue, you could use S&M efficiency: net new ARR / S&M spend. While a helpful measure, many companies don’t disclose ARR, and it ignores perhaps the most expensive investment, developing products, as well as gross margins, which can range from 50% to 90% for software companies. A better starting point is incremental gross margin / (incremental S&M + R&D costs). The challenge here is the years it takes to develop products and build go-to-market distribution. To get around this, you could use cumulative S&M and R&D costs and offset them against future gross margin dollars, similar to the magic number. So the metric becomes: incremental gross margin over the next three-plus years / cumulative S&M and R&D costs (sketched below). Not a great metric, but it can’t hurt to try!
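
A minimal sketch of this proposed metric with made-up figures; the three-year offset mirrors the magic-number convention of crediting today’s spend with tomorrow’s gross margin:

```python
# Hypothetical annual figures for a fast-growing software company, $M.
gross_margin = [40, 60, 90, 130, 185]   # gross margin dollars by year
sm_rd = [50, 60, 70, 80, 90]            # combined S&M + R&D spend by year

def growth_efficiency(gm, spend, base_year, horizon=3):
    """Gross margin added over the next `horizon` years, divided by the
    cumulative S&M and R&D spent through the base year."""
    incremental_gm = gm[base_year + horizon] - gm[base_year]
    cumulative_spend = sum(spend[: base_year + 1])
    return incremental_gm / cumulative_spend

print(f"{growth_efficiency(gross_margin, sm_rd, base_year=1):.2f}")
# (185 - 60) of incremental GM over (50 + 60) of cumulative spend -> ~1.14
```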

Dig Deeper

  • Mark Leonard on the Harris Computer Group Podcast (2020)

  • Constellation Software Inc. - Annual General Meeting 2023

  • Mark Leonard: The Best Capital Allocator You’ve Never Heard Of

  • The Moments That Made Mark Miller

  • Topicus: Constellation Software 2.0

tags: Mark Leonard, Constellation Software, CSI, CSU, Harris, Topicus, Lumine, AppFolio, Thrasio, ROIC, FCF, EBITA, Mark Miller, Harris Computer, Volaris, SaaS, AWS, Zeckhauser, Clayton Christensen, IBM, Black Knight, ICE, Seth Klarman, Lehman, Jonas, Perseus, Vela, Vencora, FCFA2S, AT&T, T-Mobile
categories: Non-Fiction
 

April 2022 - Ask Your Developer by Jeff Lawson

This month we check out Jeff Lawson’s new book about APIs. Jeff was a founder and the first CTO at StubHub, an early hire at AWS, and started Twilio in 2008. He has a very interesting perspective on the software ecosystem as it stands today and where it is heading!

Tech Themes

  1. Start with the Problem, Not the Solution. Lawson repeats a mantra throughout the book related to developers: "Start with the problem, not the solution." It is something Jeff learned as an early hire at AWS in 2004. Before AWS, Lawson had founded and sold a note-taking service to an internet flame-out, co-founded StubHub as its first CTO, and worked at an extreme sports retailer. His experience across four startups guided him to a maniacal focus on the customer, and he wants that focus to extend to developers. If you hand developers an exact specification with no context, they will fail to deliver great code. Beginning with the problem and the customer's specific description allows developers to use their creativity to solve the issue at hand. The key is to tell developers the business problem and how the issue works, let them talk to the customer, and help them understand it. That way, developers can use their imaginative, creative problem-solving abilities.

  2. Experiment to Innovate. Experimentation is at the root of invention, which drives business performance over the long term. Jeff calls on the story of the Wright Brothers to illustrate this point. The Wright Brothers were not the first to try to build a flying vehicle. When they achieved flight, they beat out a much better-funded competitor by simply doing something the other person wouldn't do – crash. The Wright Brothers would make incremental changes to their flying machine, see what worked, fly it, crash it, and update the design again. Their competitor, Samuel Pierpont Langley, spent heavily on his "aerodrome" machine (~$2M in today's dollars) and tried to build a flying machine to exact specifications, but didn't run these rapid (and somewhat calamitous) experiments. This process of continual experimentation and innovation is the hallmark of a great product organization. Lawson loves the lean startup and its idea of innovation accounting, in which teams document exact experiments, set expectations, hypotheses, and target goals, and then record what happens. Think of it as a lab notebook for product experimentation (a sketch of one such entry follows below). These experiments must have a business focus rather than a purely technical one; Jeff always asks, "What will this help our customers do?" when evaluating them. And as the Agile saying goes: features, deadlines, quality, certainty – choose three.
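
A minimal sketch of what one innovation-accounting notebook entry might capture; the fields and example values are my own invention, not Lawson's or the lean startup's canonical schema:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """One innovation-accounting entry: state the bet before you run it."""
    name: str
    hypothesis: str           # what we believe will happen, and why
    target_metric: str        # the business measure the experiment moves
    success_threshold: float  # goal committed to before launch
    observed: float | None = None   # filled in after the experiment runs
    learnings: list[str] = field(default_factory=list)

    def succeeded(self) -> bool:
        return self.observed is not None and self.observed >= self.success_threshold

exp = Experiment(
    name="Self-serve onboarding flow",
    hypothesis="Removing the sales call will raise trial-to-paid conversion",
    target_metric="trial_to_paid_conversion",
    success_threshold=0.08,
)
exp.observed = 0.11
exp.learnings.append("Docs quality mattered more than the signup form")
print(exp.succeeded())  # True
```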

  3. Big Ideas Start Small. In 1986, the famous computer scientist Fred Brooks published a paper called No Silver Bullet on the difficulties of managing software projects. Brooks contends that adding more developers and spending more money seldom gets a project to completion faster – normally, it does the opposite. Why? New people need time to ramp up and get familiar with the codebase, so their productivity is low at the start, and the developers already on the project spend a lot of time explaining the codebase to those joining late. Lawson uses the example of GE Digital to show the danger of over-investing at the start. Jeff Immelt became CEO of GE in 2001 and proclaimed in 2014 that GE would launch a new software/IoT division that would be a meaningful part of its future business. GE poured money into the venture and put experienced leaders on the project; years later, it had generated minimal profit. Despite acquisitions like ServiceMax (later divested), the company spent hundreds of millions with hardly any return. Lawson believes the correct approach would have been to fund 100 small product teams with $1M each, and then add more capital as the ideas grew. Planting seeds, seeing which ones flower, and then investing more is the right way to do it, if you can. Start small and slowly gather steam until it makes sense to step on the gas.

Business Themes

  1. Software Infrastructure is Cheap. Software infrastructure has improved dramatically over the last fifteen years. In 2007, if you wanted to start a business, you had to buy servers, configure them, and manage your own databases, networking equipment, security, compliance, and privacy. Today that is all handled by the cloud hyperscalers. Furthermore, as the cloud grew, new infrastructure providers sprouted that could offer even better, specialized performance. On top of core cloud services like storage and compute, companies like Datadog, Snowflake, Redis, and GitHub make it easy to stand up infrastructure for your software business. Creative tools are just as good. Lawson calls to mind the story of Lil Nas X, the now-famous rapper, who bought a beat online for $30, remixed it, and launched it. That beat became "Old Town Road," which went 15x platinum and is now rated 490th on Rolling Stone's list of the best songs of all time. The startup costs for a new musician, software company, or consumer brand are very low because the infrastructure is so good.

  2. Organization Setup. Amazon has heavily influenced Lawson and Twilio, including Bezos's idea of two-pizza teams. The origin story comes from a time at Amazon when teams were getting bigger and bigger, and people were becoming more removed from the customer; slowly, many people throughout the company had almost no insight into the customer and their issues. Bezos's answer was to cut the organization into two-pizza teams - teams small enough that two pizzas could reasonably feed them. Lawson has adopted this in spades, with Twilio housing over 150 two-pizza teams. Every team has a core customer, whether internal or external. If you are on the platform infrastructure team, your customer may be the internal developers who leverage the team's development pipelines. If you are on the Voice team, your customer may be actual end customers building applications with Twilio's API-based voice solution. When a team grows beyond two pizzas, there is a somewhat natural process of mitosis in which the team splits into two. To do this, the teams detangle their respective codebases and modularize their services so other teams within the company can access them. They then set up collaboration contracts with closely related teams, and internally everyone monitors how much they use each other's microservices across the company. This monitoring lets the company see where it may need to deploy more resources or create a new division.

  3. Hospitality. Many companies claim to be customer-focused, but few are. Amazon always leaves an empty chair in conference rooms to symbolize the customer in every meeting. Jeff Lawson and Twilio extended this idea – he asked customers for their shoes (per the old adage, "walk a mile in someone's shoes") and hung them throughout Twilio's office. Jeff is intensely focused on the customer and likens his approach to the one the famous restaurateur Danny Meyer takes in his restaurants. Danny focuses on the idea of hospitality. In Danny's mind, hospitality goes beyond just focusing on the customer; it makes the customer feel like the business is on their side. While that may be hard to picture, everyone knows the feeling when someone goes out of their way to make sure you have a positive experience. Meyer extends this to an idea about gatekeepers vs. agents. A gatekeeper makes it feel like they sit between you and the product; they remove you from what's happening and make you feel like you are being pushed to do things. In contrast, an agent is a proactive member of an organization who tries to build a team-like atmosphere between the company and the individual customer. Beyond the customer focus, Jeff extends this to developers – developers want autonomy, mastery, and purpose. They want a mission that resonates with them, the freedom to choose how they approach development, and the ability to learn from the best around them. The idea of hospitality extends to all stakeholders of a business but, most importantly, employees and customers.

Dig Deeper

  • Twilio's Jeff Lawson on Building Software with Superpowers

  • The Golden Rule of Hospitality | Tony Robbins Interviews Danny Meyer

  • #SIGNALConf 2021 Keynote

  • How the Wright Brothers Did the 'Impossible'

  • Webinar: How to Focus on the Problem, Not the Solution by Spotify PM, Cindy Chen

tags: Jeff Lawson, Twilio, AWS, Amazon, Jeff Bezos, Stubhub, Wright Brothers, Samuel Pierpont Langley, Innovation Accounting, No Silver Bullet, Fred Brooks, GE, Jeff Immelt, ServiceMax, Lil Nas X, Two Pizza Teams, APIs, Danny Meyer
categories: Non-Fiction
 

March 2022 - Invent and Wander by Jeff Bezos

This month we go back to tech giant Amazon and review all of Jeff Bezos’s letters to shareholders. This book describes Amazon’s journey from e-commerce to cloud to everything in a quick and fascinating read!

Tech Themes

  1. The Customer Focus. These shareholder letters clearly show that Amazon fell in love with its customer and then sought to hammer out traditional operational challenges like cycle times, fulfillment times, and distribution capacity. In the 2008 letter, Bezos calls out: "We have strong conviction that customers value low prices, vast selection, and fast, convenient delivery and that these needs will remain stable over time. It is difficult for us to imagine that ten years from now, customers will want higher prices, less selection, or slower delivery." When a business is so clearly focused on delivering the best customer experience, with completely obvious drivers, it's no wonder it succeeded. The entirety of the 2003 letter, entitled "What's good for customers is good for shareholders," is devoted to this idea. The customer is "divinely discontented" and will be loyal only until a slightly better service appears. If you continue to offer lower prices, more selection, and faster delivery, customers will continue to be happy. Those tenets are not static: you can continually lower prices, add more items, and build more (and faster) fulfillment centers to keep customers happy. The learning curve works in your favor - higher volumes mean cheaper buying, lower prices mean more customers, more items mean more new customers, and higher volumes and broader selection force the operations to scale shipping. The flywheel continues, all for the customer!

  2. Power of Invention. Throughout the shareholder letters, Bezos refers to the power of invention. From the 2018 letter: "We wanted to create a culture of builders - people who are curious, explorers. They like to invent. Even when they're experts, they are "fresh" with a beginner's mind. They see the way we do things as just the way we do things now. A builder's mentality helps us approach big, hard-to-solve opportunities with a humble conviction that success can come through iteration: invent, launch, reinvent, relaunch, start over, rinse, repeat, again and again." Bezos sees invention as the ruthless process of trying and failing repeatedly. The importance of invention was also highlighted in our January book, 7 Powers, with Hamilton Helmer calling the idea critical to building new and future S-curves. Invention is preceded by wandering and taking big bets - the hunch and the boldness. Bezos understands that the stakes for invention have to grow, too: "As a company grows, everything needs to scale, including the size of your failed experiments. If the size of your failures isn't growing, you're not going to be inventing at a size that can actually move the needle." Once you make these decisions, you have to be ready to watch the business scale, which sounds easy but requires constant attention to customer demand and value. Amazon's penchant for bold bets may inform Andy Jassy's recent decision to spend $10B building a competitor to Elon Musk and SpaceX's Starlink internet service. It is a big, bold bet on the future - we'll see in time if he is right.

  3. Long-Term Focus. Bezos always preached trading off the short-term gain for the long-term relationship. This mindset shows up everywhere at Amazon - selling an item below cost to drive more volume and give consumers better prices, allowing negative reviews on product pages even when it means Amazon may sell fewer products, and providing Prime with ever-faster free delivery. The list goes on and on - all aspects focused on building a long-term moat and relationship with the customer. However, it's important to note that not every decision pans out, and it's critical to recognize when things are going sideways; sometimes, you get an unmistakable punch in the mouth to figure that out. Bezos's 2000 shareholder letter started with, "Ouch. It's been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80 percent from when I wrote you last year." It then went on to highlight something that I didn't see in any other shareholder letter, a mistake: "In retrospect, we significantly underestimated how much time would be available to enter these categories and underestimated how difficult it would be for single-category e-commerce companies to achieve the scale necessary to succeed…With a long enough financing runway, pets.com and living.com may have been able to acquire enough customers to achieve the needed scale. But when the capital markets closed the door on financing internet companies, these companies simply had no choice but to close their doors. As painful as that was, the alternative - investing more of our own capital in these companies to keep them afloat - would have been an even bigger mistake." During the mid-to-late '90s, Amazon was on an M&A and investment tear, and it wasn't until the bubble crashed that they looked back and realized their mistake. Still, optimizing for the long term means admitting those mistakes and changing behavior to improve the business. When it thought long-term, the company continued to operate amazingly well.

Business Themes

  1. Free Cash Flow per Share. Despite the historical rhetoric that Bezos forwent profits in favor of growth, his annual shareholder letters continually reinforce the value of up-front cash flow to Amazon's business model. If Amazon could receive cash up front and manage its working capital cycle (days in inventory + days AR - days AP), it could scale its operations without requiring tons of cash. He valued the free cash flow per share metric so intensely that he spent an entire shareholder letter (2004) walking through an example of how earnings can differ from cash flow in businesses that invest in infrastructure; a similar toy example is sketched below. This maniacal focus on a financial metric is a good reminder that Bezos was a hedge fund portfolio manager before starting Amazon. These multiple personas - the hedge fund manager, the operator, the inventor, the engineer - make Bezos a different type of character and CEO. He clearly understood financials and modeling, something notoriously absent from many public technology CEOs today.
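
A minimal sketch of the working capital cycle mentioned above, with hypothetical figures; a negative cycle means customer cash arrives before supplier bills come due:

```python
# Hypothetical working-capital profile for a retailer, in days.
days_inventory = 30   # time goods sit before being sold
days_receivable = 2   # customers pay almost immediately (credit cards)
days_payable = 60     # suppliers are paid on 60-day terms

cash_conversion_cycle = days_inventory + days_receivable - days_payable
print(cash_conversion_cycle)  # -28: cash is collected ~4 weeks before it's owed

# A negative cycle means growth *generates* cash up front instead of
# consuming it - the dynamic Bezos's 2004 letter walks through, where
# reported earnings and free cash flow can diverge sharply.
```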

  2. A 1,000-Run Home Run. Odds and sports have always captivated Warren Buffett, who liked to use Ted Williams's approach to hitting as a metaphor for investing. Bezos elaborates on the idea in his 2014 letter (3 Big Ideas): "We all know that if you swing for the fences, you're going to strike out a lot, but you're also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score one thousand runs. This long-tailed distribution of returns is why it is important to be bold. Big winners pay for so many experiments." AWS is certainly a 1,000-run home run. Amazon incubated the business and first wrote about it in 2006, when it had 240,000 registered developers. By 2015, AWS had 1,000,000 customers, and it is now at a $74B+ run-rate. This also calls to mind Mohnish Pabrai's "spawners" idea: great companies can spawn entirely new, massive drivers of their business - Google with Waymo, Amazon with AWS, Apple with the iPhone. These new businesses require a lot of care and experimentation to get right, but they are 1,000-run home runs, and taking bold bets is important to realizing them.

  3. High Standards. How does Amazon achieve all that it does? While its culture has been called into question a few times, it's clear that Amazon has high expectations for its employees. The 2017 letter addresses this idea, diving into whether high standards are intrinsic or teachable, and universal or domain-specific. Bezos believes that standards are teachable and driven by the environment, while high standards tend to be domain-specific: high standards in one area do not mean you have high standards in another. This discussion also calls back to Amazon's 2012 letter, entitled "Internally Driven," where Bezos argues that he wants proactive employees. To build a high-standards culture, you need to recognize what high standards look like; then, you must have realistic expectations for how hard it should be or how long it will take. He illustrates this with a simple vignette on perfect handstands: "She decided to start her journey by taking a handstand workshop at her yoga studio. She then practiced for a while but wasn't getting the results she wanted. So, she hired a handstand coach. Yes, I know what you're thinking, but evidently this is an actual thing that exists. In the very first lesson, the coach gave her some wonderful advice. 'Most people,' he said, 'think that if they work hard, they should be able to master a handstand in about two weeks. The reality is that it takes about six months of daily practice. If you think you should be able to do it in two weeks, you're just going to end up quitting.' Unrealistic beliefs on scope – often hidden and undiscussed – kill high standards." Companies can develop high standards with clear scope and honest recognition of the challenge involved.

Dig Deeper

  • Jeff Bezos’s Regret Minimization Framework

  • Andy Jassy on Figuring Out What's Next for Amazon

  • Amazon’s Annual Reports and Shareholder Letters

  • Elements of Amazon’s Day 1 Culture

  • AWS re:Invent 2021 Keynote

tags: Jeff Bezos, Amazon, AWS, Invention, 7 Powers, Elon Musk, SpaceX, Andy Jassy, Hamilton Helmer, Prime, Working Capital, Warren Buffett, Ted Williams, Monish Pabrai, Spawners, High Standards
categories: Non-Fiction
 

February 2022 - Cable Cowboy by Mark Robichaux

This month we jump into the history of the US cable industry with Cable Cowboy. The book follows cable’s main character across more than 30 years: John Malone, the intense, deal-addicted CEO of Tele-Communications Inc. (TCI).

Tech Themes

  1. Repurposed Infrastructure. Repurposed infrastructure is one of the incredible drivers of technological change covered in Carlota Perez’s Technological Revolutions and Financial Capital. When a new technology wave comes along, it builds on the back of existing infrastructure to reach massive scale. Railroads laid the foundation for oil transport pipelines. Later, telecommunications companies used the miles and miles of cleared railroad land to hang wires and provide phone service throughout the US. Cable systems were initially used to pull down broadcast signals and bring them to remote places. Over time, more and more content providers like CNN, TBS, and BET started to produce shows with cable distribution in mind. Cable became a bigger and bigger presence, so when the internet began to gain steam in the early 1990s, cable was ready to play a role. It just so happened that cable was best positioned to provide internet service to individual homes because, unlike the phone companies’ copper wiring, cable systems had made extensive use of coaxial cable, which provided much faster speeds. In 1997, after an extended period of underperformance for the cable industry, Microsoft announced a $1B investment in Comcast. The size of the deal showed the importance of cable providers in the growth of the internet.

  2. Pipes + Content. One of the major issues surrounding TCI as it faced anti-trust scrutiny was its ownership of multiple TV channels. Malone realized that the content companies could make significant profits, especially when content was shown across multiple cable systems. TCI enjoyed the same Scale Economies Power as Netflix: once a cable channel produces content, any way to spread the content cost over more subscribers is a no-brainer. However, these content deals were worrisome given TCI’s massive cable presence (more than 8,000,000 subscribers). TCI would frequently demand that channels take an equity investment to access TCI’s cable system: “In exchange for getting on TCI systems, TCI drove a tough bargain. He demanded that cable networks either allow TCI to invest in them directly, or they had to give TCI discounts on price, since TCI bought in bulk. In return for most-favored-nation-status on price, TCI gave any programmer immediate access to nearly one-fifth of all US subscribers in a single stroke.” TCI would press its dominant position: we can either carry your channel and make an investment, or you can miss out on 8 million subscribers. Channels frequently chose the former. Malone tried to avoid anti-trust action by creating Liberty Media, a spinoff housing TCI’s investments in cable programmers, which offered a pseudo-separation from the telecom giant (although John Malone would completely control Liberty).

  3. Early, Not Wrong. Several times in history, companies or people have been early to an idea before it was feasible. Webvan pioneered the concept of an online grocery store that could deliver fresh groceries to your house; it raised $800M before flaming out in the public markets. Later, Instacart came along and is now worth over $30B. There are many examples: Napster/Spotify, MySpace/Facebook, Pets.com/Chewy, Go Corporation/iPad, and Loudcloud/AWS. The early idea in the telecom industry was the information superhighway. We’ve discussed this before, but the idea is that you would use your TV to access the outside world - ordering pizza, accessing bank info, video calling friends, watching shows and on-demand movies. The first instantiation of this idea was the QUBE, an expensive set-top box that gave users a plethora of additional interactive services. The QUBE was the launch project of a joint venture between American Express and Warner Communications to build a cable system in the late 1970s, but it cost far too much money to produce. With steep losses and mounting debt, Warner Amex Cable “abandoned the QUBE because it was financially infeasible.” In 1992, Malone delivered a now-famous speech on the future of the television industry, predicting that TVs would offer 500 channels to subscribers, with movies, communications, and shopping. Ten years after the QUBE’s failure, Time Warner tried to fulfill Malone’s promise by launching the Full Service Network (FSN) with the same idea - offering a ton of services to users through a specialized hardware + software approach. The box was still insanely expensive (>$1,000 per box) because the company had to develop all the hardware and software itself, and after significant losses the project was shut down. It wasn’t until recently that TVs evolved into what so many people thought they might become during the exciting internet boom years of the late 1990s. In this example and several above, sometimes the idea is correct, but the medium or user experience is wrong. It turned out that people used a computer and the internet - not the TV - to shop, order food, or chat with friends. In 2015, Domino’s announced that you could finally order pizza from your TV.

Business Themes

  1. Complicated Transactions. Perhaps the craziest deal in John Malone’s years of complex deal-making was his spinoff of Liberty Media. Liberty represented the content arm of TCI and held positions in famous channels like CNN and BET. Malone was intrigued by structuring a deal that would evade taxes and give himself the most potential upside. To create this “artificial” upside, Malone engineered a rights offering, whereby existing TCI shareholders could purchase the right to swap 16 shares of TCI for 1 share of Liberty. Malone set the exchange ratio at a ridiculously high value of TCI shares, valuing Liberty at roughly $300 per share (the arithmetic is worked below). “It seemed like such a lopsided offer: 16 shares of TCI for just 1 share of Liberty? That valued Liberty at $300 a share, for a total market value of more than $600M by Malone’s reckoning. How could that be, analysts asked, given that Liberty posted a loss on revenue of a mere $52M for the pro-forma nine months? No one on Wall Street expected the stock to trade up to $300 anytime soon.” The complexity of the rights offering plus spinoff made the transaction opaque enough that even seasoned investors were confused about how it all worked and declined to buy the rights. That meant Malone would have more control of the newly separate Liberty Media, and participation was so low that shares initially traded thinly. Once people realized the quality of the company’s assets, the stock price shot up, and Malone’s net worth with it. Even crazier, Malone took a loan from the new Liberty Media to buy his shares of the company, meaning he created a massive amount of value while putting up hardly any capital. For a man who loved complex deals, this was one of his most complex and most lucrative.
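
Using only the figures in the passage (the TCI share price is implied, not stated), the mechanics of the rights offering work out roughly as follows:

```python
# Figures from the passage; the TCI share price is implied by them.
swap_ratio = 16                  # TCI shares exchanged per Liberty share
liberty_value_per_share = 300.0  # $ value the ratio placed on Liberty
liberty_market_value = 600e6     # $600M+ total, by Malone's reckoning

implied_tci_price = liberty_value_per_share / swap_ratio
implied_liberty_shares = liberty_market_value / liberty_value_per_share

print(f"Implied TCI price:      ${implied_tci_price:.2f}")        # ~$18.75
print(f"Implied Liberty shares: {implied_liberty_shares:,.0f}")   # ~2,000,000

# The ratio looked absurdly rich for a company showing losses, so few
# shareholders participated - leaving Malone with outsized ownership of
# a thinly traded stock that later re-rated sharply upward.
```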

  2. Deal Maker Extraordinaire / Levered Roll-ups. John Malone and TCI loved deals and hated taxes. When TCI was building out its cable networks, it acquired a new cable system almost every two weeks. Malone popularized using EBITDA (earnings before interest, taxes, depreciation, and amortization) as a proxy for real cash flow, in contrast to net income, which is reduced by tax and interest payments (a toy comparison follows below). To Malone, debt could fund acquisitions, limit taxes, and build scale. Once banks got comfortable with EBITDA, Malone went on an acquisition tear: “From 1984 to 1987, Malone had spent nearly $3B for more than 150 cable companies, placing TCI wires into one out of nearly every five with cable in the country, a penetration that was twice that of its next largest rival.” Throughout his career, he rallied different cable leaders to find deals that worked for everyone. In 1986, when fellow industry titan Ted Turner ran into financial trouble, Malone reached out to Viacom leader Sumner Redstone to avoid letting Time Inc. (owner of HBO) buy Turner’s CNN. After a quick negotiation, 31 cable operators agreed to rescue Turner Broadcasting with a $550M investment, allowing Turner to maintain control and avoid a takeover. Later, in 1996, Malone led an industry consortium including TCI, Comcast, and Cox to create a high-speed internet service called At Home: “At Home was responsible for designing the high-speed network and providing services such as e-mail, and a home page featuring news, entertainment, sports, and chat groups. Cable operators were required to upgrade their local systems to accommodate two-way transmission, as well as handle marketing, billing, and customer complaints, for which they would get 65% of the revenue.” At Home ended up buying early internet search company Excite in a famous $7.5B deal that diluted the cable owners and eventually led the combined company into bankruptcy. Malone’s instinct was always to work with counterparties, because he genuinely believed a deal between two competitors could provide better outcomes for everyone.
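
A toy income statement (hypothetical figures) showing why Malone preferred EBITDA to net income for a levered, acquisition-heavy cable system:

```python
# Hypothetical annual P&L for an acquired cable system, $M.
revenue = 100.0
operating_costs = 55.0
depreciation_amortization = 30.0   # heavy, from plant and acquisitions
interest = 20.0                    # heavy, from acquisition debt

ebit = revenue - operating_costs - depreciation_amortization
pretax_income = ebit - interest
tax = max(pretax_income, 0) * 0.35  # a pretax loss means no cash taxes
net_income = pretax_income - tax

ebitda = revenue - operating_costs
print(f"Net income: ${net_income:>5.1f}M")  # -5.0: looks like a loser
print(f"EBITDA:     ${ebitda:>5.1f}M")      # 45.0: cash to service debt

# D&A and interest swamp reported earnings, while the system still throws
# off cash that can fund the next acquisition - Malone's core argument to
# the banks, and the reason leverage also minimized taxes.
```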

  3. Tracking Stocks. Malone popularized the use of tracking stocks - publicly traded shares that mirror the operating performance of a specific asset owned by a company. Malone loved tracking stocks because they could be used to issue equity to finance operations and give investors access to specific divisions of a conglomerate, all while the parent maintained full control. While tracking stocks have fallen out of favor (except at Liberty Media, LOL), they were once highly regarded and even featured in the original planning of AT&T’s $48B purchase of TCI in 1998. AT&T financed its TCI acquisition with debt and new AT&T stock, diluting existing shareholders. AT&T CEO Michael Armstrong had initially agreed to use tracking stocks to separate TCI’s business from the declining but cash-flowing telephone business, but changed his mind after AT&T’s stock rocketed on the deal announcement. Malone was angry with Armstrong’s reversal, and the book recounts his argument: “Here's why you should mess with it, Mike: You’ve just issued more than 400 million new shares of AT&T to buy a business that produces no earnings. It will be a huge money-loser for years, given how much you’ll spend on broadband. That’s going to sharply dilute your earnings per share, and your old shareholders like earnings. That will hurt your stock price, and then you can’t use stock to make more acquisitions, then you’re stuck. If you create a tracking stock to the performance of cable, you separate out the losses we produce and show better earnings for your main shareholders; and you can use the tracker to buy more cable interests in tax-free deals.” Tracking stocks all but faded from existence after the internet bubble and early 2000s because of their complexity and difficulty of implementation, which can confuse shareholders and cause the businesses to trade at a large discount. This all raises the question, though: which companies could use tracking stocks today? Imagine an AWS tracker, a YouTube tracker, an Instagram tracker, or an Xbox tracker - each could let its parent attract new shareholders, do more targeted tax-free mergers, and raise additional capital tied to a specific business unit.

Dig Deeper

  • John Malone’s Latest Interview with CNBC (Nov 2021)

  • John Malone on LionTree’s Kindred Cast

  • A History of AT&T

  • Colorado Experience: The Cable Revolution

  • An Overview on Spinoffs

tags: John Malone, TCI, CNN, TBS, BET, Cable, Comcast, Microsoft, Netflix, Liberty Media, Napster, Spotify, MySpace, Facebook, Pets.com, Chewy, Go Corporation, iPad, Loudcloud, AWS, American Express, Warner, Time Warner, Domino's, Viacom, Sumner Redstone, Ted Turner, Bill Gates, At Home, Excite, AT&T, Michael Armstrong, Bob Magness, Instagram, YouTube, Xbox
categories: Non-Fiction
 

April 2021 - Innovator's Solution by Clayton Christensen and Michael Raynor

This month we take another look at disruptive innovation in the companion piece to Clayton Christensen’s Innovator’s Dilemma, our July 2020 book. The book crystallizes the types of disruptive innovation and provides frameworks for how incumbents can introduce or combat these innovations. It was a pleasure to read and will serve as a great reference for the future.

Tech Themes

  1. Integration and Outsourcing. Today, technology companies rely on a variety of software tools and open-source components to build their products. When you stitch all of these components together, you get the full product architecture. A great example is Gitlab, an SMB DevOps provider: it uses Postgres as a relational database, Redis for caching, NGINX for request routing, Sentry for monitoring and error tracking, and so on. Each of these subsystems interacts with the others to form the powerful Gitlab project, and the interaction points are called interfaces. The key product development question for companies is: “Which things do I build internally and which do I outsource?” A simple answer offered by many MBA students is “Outsource everything that is not part of your core competence.” As Clayton Christensen points out, “The problem with core-competence/not-your-core-competence categorization is that what might seem to be a non-core activity today might become an absolutely critical competence to have mastered in a proprietary way in the future, and vice versa.” A great example that we’ve discussed before is IBM’s decision to go with Microsoft DOS for its operating system and Intel for its microprocessor. At the time, IBM thought it was strategically outsourcing things outside its core competence, but it inadvertently handed almost all of the industry profits from personal computing to Intel and Microsoft. Other competitors copied IBM’s modular approach, and the whole industry slugged it out on price. Whether to outsource depends on what might be important in the future, and because that is difficult to predict, the integration vs. outsourcing question comes down to the state of the product and market itself: is this product “not good enough” yet? If the answer is yes, then a proprietary, integrated architecture is likely needed just to make the product work for customers. Over time, as competitors enter the market and the fully integrated platform becomes more commoditized, the individual subsystems become increasingly important competitive drivers. So the decision to outsource or build internally must be made based on the state of the product and the market it’s attacking.

  2. Commoditization within Stacks. The above point leads to the counterintuitive idea of how companies fall into the commoditization trap. This happens from overshooting, where companies create products that are too good (which I find counterintuitive: who thought that doing your job really well would cause customers to leave!). Christensen describes this through the lens of a salesperson: “‘Why can’t they see that our product is better than the competition? They’re treating it like a commodity!’ This is evidence of overshooting…there is a performance surplus. Customers are happy to accept improved products, but unwilling to pay a premium price to get them.” At this point, the things demanded by customers flip - they become willing to pay premium prices for innovations along a new trajectory of performance, most likely speed, convenience, and customization. “The pressure of competing along this new trajectory of improvement forces a gradual evolution in product architectures, away from the interdependent, proprietary architectures that had the advantage in the not-good-enough era toward modular designs in the era of performance surplus. In a modular world, you can prosper by outsourcing or by supplying just one element.” This cycle from integration to modularization and back is super fascinating. As an example of modularization, take the streaming company Confluent, the business behind the open-source software project Apache Kafka. Confluent offers a real-time communications service that lets companies stream data as individual events rather than batching large data transfers (a minimal Kafka sketch follows after this list). Its product is often a subsystem underpinning real-time applications, like providing data to traders at Citigroup. Clearly, the basis of competition in trading has pivoted over the years as more and more banks offer the service. Companies are prioritizing a new axis, speed, to differentiate among competing services, and when speed is the basis of competition, you use Confluent and Kafka to beat out the competition. Now fast forward five years and assume all banks use Kafka and Confluent for their traders; the modular subsystem is thus commoditized. What happens? I’d posit that the axis would shift again, maybe toward convenience or customization, where traders want specific info displayed on a mobile phone or tablet. The fundamental idea is that “Disruption and commoditization can be seen as two sides of the same coin. That’s because the process of commoditization initiates a reciprocal process of de-commoditization [somewhere else in the stack].”

  3. The Disruptive Becomes the Disrupted. Disruption is a relative term. As we’ve discussed previously, disruption is often mischaracterized as startups entering markets and challenging incumbents. Disruption is really a focused and contextual concept whereby products that are “not good enough” by market standards enter a market with a simpler, more convenient, or less expensive offering. These products and markets are often dismissed by incumbents, or even ceded by market leaders as those leaders move up-market to chase even bigger customers. It’s fascinating to watch the disruptive become the disrupted. A great example is department stores: initially, Macy’s offered a massive selection that couldn’t be found in any single store, and customers loved it. Macy’s did this by turning inventory three times per year at 40% gross margins, for a 120% annual return on capital invested in inventory. In the 1960s, Walmart and Kmart attacked the full-service department stores by offering a similar selection at much cheaper prices. They did this by setting up a value system whereby they made 23% gross margins but turned inventories five times per year, enabling them to earn the same golden 120% return on capital invested in inventory. Full-service department stores decided not to compete against these lower-gross-margin products and shifted more floor space to beauty and cosmetics, which offered even higher gross margins (55%) than the 40% they were used to. This meant they could increase their return on capital invested in inventory and their profits while avoiding a competitive threat. The process continued, with discount stores eventually pushing Macy’s out of most categories until Macy’s had nowhere to go. All of a sudden, the initially disruptive department stores had been disrupted. We see this in technology markets as well. I’m not 100% sure this qualifies, but think about Salesforce and Oracle. Marc Benioff spent a number of years at Oracle and left to start Salesforce, which pioneered selling subscription cloud software on a per-seat revenue model. This meant a much cheaper option compared to traditional Oracle/Siebel CRM software. Salesforce was initially adopted by smaller customers that didn’t need the feature-rich platform offered by Oracle. Oracle dismissed Salesforce as competition even as Oracle CEO Larry Ellison seeded Salesforce and sat on its board. Today, Salesforce is a $200B company and briefly passed Oracle in market cap a few months ago. But now Salesforce has raised its prices and mostly targets large enterprise buyers to hit its ambitious growth targets. Down-market competitors like Hubspot have come into the market with cheaper solutions and more fully integrated marketing tools to help smaller businesses that aren’t ready for a fully featured Salesforce platform. Disruption is always contextual, and it never stops.
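
As promised in the Confluent example above, here is a minimal event-streaming sketch using the kafka-python client. The broker address and the “trades” topic are illustrative assumptions, not Confluent’s or Citigroup’s actual setup:

```python
# Minimal Kafka event-streaming sketch (pip install kafka-python).
# Broker address and topic name are assumptions for illustration.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each trade is published as an individual event the moment it happens,
# rather than being collected into a nightly batch transfer.
producer.send("trades", {"ticker": "C", "price": 61.25, "qty": 100})
producer.flush()

# A downstream dashboard consumes events in near real time.
consumer = KafkaConsumer(
    "trades",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for event in consumer:
    print(event.value)  # {'ticker': 'C', 'price': 61.25, 'qty': 100}
    break  # stop after one event for the sake of the demo
```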

Business Themes

  1. Low-end-Market vs. New-Market Disruption. There are two established types of disruption: low-end-market (down-market) and new-market. Low-end-market disruption establishes performance that is “not good enough” along traditional lines and targets overserved customers in the low end of the mainstream market. It typically utilizes a new operating or financial approach with structurally different margins than up-market competitors. Amazon.com is a quintessential low-end-market disruptor relative to traditional bookstores, offering prices so low they angered book publishers while giving customers the unmatched convenience of buying books online. In contrast, Robinhood is a great example of new-market disruption. Traditional discount brokerages like Charles Schwab and Fidelity had been around for a while (themselves disruptors of full-service models like Morgan Stanley Wealth Management). But Robinhood targeted a group of people that weren’t consuming in the market, namely teens and millennials, and it did so in an easy-to-use app with a much better user interface than Schwab’s or Fidelity’s. Robinhood also pioneered new pricing with zero-fee trading and made revenue via a new financial approach, payment for order flow (PFOF): market makers like Citadel pay Robinhood to route its customers’ orders to them for execution, which also helps optimize customers’ buying and selling prices. When approaching big markets, it’s important to ask: Is this targeted at a non-consumer today, or am I competing at a structurally lower margin with a new financial model and a “not quite good enough” product? The answer determines whether you are providing a low-end-market disruption or a new-market disruption.

  2. Jobs To Be Done. The Jobs to Be Done framework is one of the most important ideas Clayton Christensen ever introduced. Marketers typically use advertising platforms like Facebook and Google to target specific demographics with their ads. These segments are narrowly defined: “Males over 55, living in New York City, with household income above $100,000.” The issue with this categorization method is that while these attributes may be correlated with a product purchase, customers do not behave exactly as their attributes predict. There may be a correlation, but simply targeting certain demographics does not yield a great result; marketers need to understand why the customer is adopting the product. This is where the Jobs to Be Done framework comes in. As Christensen describes it, “Customers - people and companies - have ‘jobs’ that arise regularly and need to get done. When customers become aware of a job that they need to get done in their lives, they look around for a product or service that they can ‘hire’ to get the job done. Their thought processes originate with an awareness of needing to get something done, and then they set out to hire something or someone to do the job as effectively, conveniently, and inexpensively as possible.” Christensen zeroes in on the contextual adoption of products; it is the circumstance, not the demographics, that matters most. He describes ways to view competition and feature development through the Jobs to Be Done lens using the Blackberry as an example. While the immature smartphone market was seeing feature competition from Microsoft, Motorola, and Nokia, Blackberry maker RIM came out with a simple-to-use device built for short productivity bursts whenever time was available. This meant RIM leaned into features that competed not on other smartphone providers’ terms (like better cellular reception), but on things that enabled these easy “productive” sessions, like email, Wall Street Journal updates, and simple games. The Blackberry was later disrupted by the iPhone, which offered more interesting applications in an easier-to-use package. Interestingly, the first iPhone shipped without an app store (as a proprietary, interdependent product) and was viewed as not good enough for work purposes, allowing the Blackberry to coexist; RIM’s management even dismissed the iPhone as a competitor initially. It wasn’t long until the iPhone caught up and eventually surpassed the Blackberry as the world’s leading mobile phone.

  3. Brand Strategies. Companies may choose to address customers in a number of different circumstances and address a number of Jobs to Be Done. It’s important that the company establishes specific ways of communicating the circumstance to the customer. Branding is powerful, something that Warren Buffett, Terry Smith, and Clayton Christensen have all recognized as a durable growth driver. As Christensen puts it: “Brands are, at the beginning, hollow words into which marketers stuff meaning. If a brand’s meaning is positioned on a job to be done, then when the job arises in a customer’s life, he or she will remember the brand and hire the product. Customers pay significant premiums for brands that do a job well.” So what can a large corporate company do when faced with a disruptive challenger on its branding turf? It’s simple: add a word to the leading brand, targeted at the circumstance in which a customer might find themselves. Think about Marriott, one of the leading hotel chains. It offers a number of hotel brands: Courtyard by Marriott for business travel, Residence Inn by Marriott for a home away from home, the Ritz-Carlton for high-end luxurious stays, Marriott Vacation Club for resort destinations. Each brand is targeted at a different Job to Be Done, and customers intuitively understand what the brands stand for based on experience or advertising. A great technology example is Amazon Web Services (AWS), the cloud computing division of Amazon.com. Amazon pioneered the cloud, and rather than launch under the Amazon.com brand, which might have confused its normal e-commerce customers, it created a completely new brand targeted at a different set of buyers and problems that maintained the quality and recognition Amazon had become known for. Another great retail example is the SNKRS app released by Nike. Nike understands that some customers are sneakerheads who want to know about every Nike shoe drop, so Nike created a distinct, branded app called SNKRS that gives news and updates on the latest, trendiest sneakers. These buyers might not be interested in logging into the main Nike app and might grow frustrated sifting through all of the different types of apparel Nike offers just to find new shoes. The SNKRS app gives this set of consumers an easy way to find what they are looking for (convenience), which benefits Nike’s core business. Branding is powerful, and understanding the Job to Be Done helps focus the right brand on the right job.

Dig Deeper

  • Clayton Christensen’s Overview on Disruptive Innovation

  • Jobs to Be Done: 4 Real-World Examples

  • A Peek Inside Marriott’s Marketing Strategy & Why It Works So Well

  • The Rise and Fall of Blackberry

  • Payment for Order Flow Overview

  • How Commoditization Happens

tags: Clayton Christensen, AWS, Nike, Amazon, Marriott, Warren Buffett, Terry Smith, Blackberry, RIM, Microsoft, Motorola, iPhone, Facebook, Google, Robinhood, Citadel, Schwab, Fidelity, Morgan Stanley, Oracle, Salesforce, Walmart, Macy's, Kmart, Confluent, Kafka, Citigroup, Intel, Gitlab, Redis
categories: Non-Fiction
 

March 2021 - Payments Systems in the U.S. by Carol Coye Benson, Scott Loftesness, and Russ Jones

This month we dive into the fintech space for the first time! Glenbrook Partners is a famous payments consulting firm, and this classic book by three of its partners describes the history and current state of the many financial systems we use every day. While the book is a bit dated and reads like a textbook, it throws in some great real-world observations and provides a great foundation for any payments novice!

Tech Themes

  1. Mapping Open-Loop and Closed-Loop Networks. The major credit and debit card providers (Visa, Mastercard, American Express, China UnionPay, and Discover) all compete for the same spots in customer wallets but have unique and differing backgrounds and mechanics. The first credit card on the scene was the BankAmericard in the late 1950s. As it took off, Bank of America started licensing the technology all across the US and created National BankAmericard Inc. (NBI) to facilitate its card program. NBI merged with its international counterpart (IBANCO) to form Visa in the mid-1970s. Another group of California banks had created the Interbank Card Association (ICA) to compete with Visa, and in 1979 it renamed itself Mastercard. Both organizations remained owned by the banks until their IPOs in 2006 (Mastercard) and 2008 (Visa). Both companies are known as open-loop networks: they work with any bank and rely on banks to sign up customers and merchants. As the book points out, “This structure allows the two end parties to transact with each other without having direct relationships with each other’s banks.” This convenient feature of open-loop payment systems means they can scale incredibly quickly: any time a bank signs up a new customer or merchant, that party immediately has access to the network of all other banks on the Mastercard/Visa network. In contrast, American Express and Discover operate largely closed-loop systems, where they enroll each merchant and customer individually. Because of the onerous task of finding and signing up every single consumer and merchant, Amex and Discover cannot scale to nearly the size of Visa/Mastercard. However, there is no bank intermediation, and the networks get total access to all transaction data, making them a go-to solution for things like loyalty programs, where a merchant may want to leverage data to target specific brand benefits at a customer. Open-loop systems like Apple Pay (it’s tied to your bank account) and closed-loop systems like the Starbucks app (funds are pre-loaded and can only be redeemed at Starbucks) can be found everywhere. Even Snowflake, the data warehouse provider and subject of last month’s TBOTM, is a closed-loop payments network: customers buy Snowflake credits up front, which can only be redeemed for Snowflake compute services. In contrast, AWS and other clouds are beginning to offer more open-loop-style networks, where AWS credits can be redeemed against non-AWS software. Side note - these credit systems and odd pricing structures deliberately mislead customers and obfuscate actual costs, allowing the cloud companies to better control gross margins and revenue growth. It’s fascinating to view the world through this open-loop/closed-loop dynamic.

  2. New Kids on the Block - What are Stripe, Adyen, and Marqeta? Stripe recently raised at a minuscule valuation of $95B, making it the highest-valued private startup (ever?!). Marqeta, its API/card-issuing counterpart, is prepping a 2021 IPO that may value it at $10B. Adyen, a Dutch public company, is worth close to $60B (Visa is worth $440B for comparison). Stripe and Marqeta are API-based payment service providers, which let businesses easily accept online payments and issue debit and credit cards for a variety of use cases. Adyen is a merchant account provider, meaning it actually maintains the merchant account used to run a company’s business - this often comes with enormous scale benefits and reduced costs, which is why large customers like Nike have opted for Adyen. The merchant account clearing process can take quite a while, which is why Stripe focuses on SMBs - a business can sign up as a Stripe customer and almost immediately begin accepting online payments (a rough sketch of this checkout flow appears after this list). Stripe’s and Marqeta’s APIs allow seamless integration into payment checkout flows. On top of this basic but now highly simplified use case, Stripe and Marqeta (and Adyen) let companies issue debit and credit cards for all sorts of use cases. This is creating an absolute BOOM in fintech, as companies try new and innovative ways of issuing credit/debit cards - such as expense management, banking-as-a-service, and buy-now-pay-later. Why is this such a big thing now, when Stripe, Adyen, and Marqeta were all created before 2011? In 2016, Visa launched its first developer APIs, which allowed companies like Stripe, Adyen, and Marqeta to become licensed Visa card issuers - now any merchant could issue its own branded Visa card. That is why Andreessen Horowitz’s fintech partner Angela Strange proclaimed, “Every company will be a fintech company” (though this is also clearly some VC marketing!). Mastercard followed suit in 2019, launching its open API, the Mastercard Innovation Engine. The big networks decided to support innovation: Visa is an investor in Stripe and Marqeta, AmEx is an investor in Stripe, and Mastercard is an investor in Marqeta. Surprisingly, no network providers are investors in Adyen. Fintech history shows that upstarts can outgrow the incumbents they ride on (Visa and Mastercard are bigger than the banks, with much better business models) - will the same happen here?

  3. Building a High Availability System. Do Mastercard and Visa have the highest availability needs of any system? Obviously, people are angry when Slack or Google Cloud goes down, but think about how many people are affected when Visa or Mastercard goes down. In 2018, a UK hardware failure prompted a five-hour outage at Visa: “Disgruntled customers at supermarkets, petrol stations and abroad vented their frustrations on social media when there was little information from the financial services firm. Bank transactions were also hit.” High availability is a measure of system uptime: “Availability is often expressed as a percentage indicating how much uptime is expected from a particular system or component in a given period of time, where a value of 100% would indicate that the system never fails. For instance, a system that guarantees 99% of availability in a period of one year can have up to 3.65 days of downtime (1%).” According to Statista, Visa handles ~185B transactions per year (a cool ~6,000 per second), while UnionPay comes in second with 131B and Mastercard third with 108B. For the last twelve months ended June 30, 2020, Visa processed $8.7T in payments volume, which means the average transaction was ~$47. At 6,000 transactions per second, Visa loses roughly $282,000 in payment volume for every second it’s down (this back-of-the-envelope math is sketched after this list). Mastercard and Visa have historically been very cagey about disclosing data center operations (the only article I could find is from 2013), though they control their own operations much like other technology giants. “One of the keys to the [Visa] network's performance, Quinlan says, is capacity. And Visa has lots of it. Its two data centers--which are mirror images of each other and can operate interchangeably--are configured to process as many as 30,000 simultaneous transactions, or nearly three times as much as they've ever been asked to handle. Inside the pods, 376 servers, 277 switches, 85 routers, and 42 firewalls--all connected by 3,000 miles of cable--hum around the clock, enabling transactions around the globe in near real-time and keeping Visa's business running.” The data infrastructure challenges that payment systems face are massive, and yet they all seem to perform very well. I’d love to learn more about how they do it!
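
As referenced in the Stripe item above, here is a rough sketch of how little server-side code it takes to start accepting a payment with Stripe’s Python library. The test key and amount are placeholders, and a real integration also needs a client-side confirmation step, so treat this as an illustration rather than a complete checkout flow:

```python
# Sketch of creating a charge with Stripe's Python library (pip install stripe).
# The API key and amount are placeholders for illustration.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test-mode key

# Create a PaymentIntent for a $20.00 charge (amounts are in cents).
intent = stripe.PaymentIntent.create(
    amount=2000,
    currency="usd",
    automatic_payment_methods={"enabled": True},
)
print(intent.id, intent.status)  # e.g. "pi_..." "requires_payment_method"
```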
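
And here is the back-of-the-envelope availability arithmetic from the item above, reproduced in a few lines of Python (the gap versus the $282,000 figure comes from rounding ~5,866 transactions per second up to 6,000):

```python
# Back-of-the-envelope reproduction of the Visa figures cited above.
transactions_per_year = 185e9            # Visa, per Statista
seconds_per_year = 365 * 24 * 3600
tps = transactions_per_year / seconds_per_year
print(f"{tps:,.0f} transactions per second")           # ~5,866 ("a cool 6,000")

payments_volume = 8.7e12                 # LTM ended June 30, 2020
avg_transaction = payments_volume / transactions_per_year
print(f"${avg_transaction:,.2f} average transaction")  # ~$47

print(f"${tps * avg_transaction:,.0f} of volume lost per second of downtime")

# Availability math: allowed downtime per year at a given uptime guarantee.
for availability in (0.99, 0.999, 0.9999):
    downtime_days = (1 - availability) * 365
    print(f"{availability:.2%} uptime -> {downtime_days:.2f} days down per year")
```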

Business Themes

  1. What is interchange and why does it exist? BigCommerce has a great, simple definition of interchange: “Interchange fees are transaction fees that the merchant's bank account must pay whenever a customer uses a credit/debit card to make a purchase from their store. The fees are paid to the card-issuing bank to cover handling costs, fraud and bad debt costs and the risk involved in approving the payment.” What is crazy about interchange is that it is not the banks but the networks (Mastercard, Visa, China UnionPay) that set interchange rates. On top of that, the networks set the rates but receive no revenue from interchange itself. As the book points out: “Since the card network’s issuing customers are the recipients of interchange fees, the level of interchange that a network sets is an important element in the network’s competitive position. A higher level of interchange on one network’s card products naturally makes that network’s card products more attractive to card issuers.” The incentives here are wild: the card issuers (banks) want higher interchange because they receive the interchange from the merchant’s bank in a transaction, and the card networks want more card-issuing customers, so offering higher interchange rates better positions them in competitive battles. The merchant is left worse off by higher interchange rates, as the merchant’s bank almost always passes the fee on to the merchant itself ($100 received via credit card turns out to be only $97 when it gets to the merchant’s bank account because of fees). Visa and Mastercard have different interchange rates for every type of transaction and acceptance method, making it a complicated nightmare to actually understand their fees. The networks and their issuers may claim that increased interchange fees allow banks to invest more in fraud protection, risk management, and handling costs, but there is no way to verify this claim. This has caused a crazy war between merchants, the card networks, and the card issuers.

  2. Why is Jamie Dimon so pissed about fintechs? In a recent interview, Jamie Dimon, CEO of JPMorgan Chase, called fintechs “examples of unfair competition.” Dimon is angry about the famous (or infamous) Durbin Amendment, a last-minute addition to the landmark Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. The Durbin Amendment attempted to cap the interchange that banks could charge and to tier interchange rates based on the assets of the bank. In theory, capping the rates would mean that merchants paid less in fees and would pass these lower fees on to consumers through lower prices, spurring demand. The tiering meant banks with >$10B in assets would make less in interchange fees, leveling the playing field for smaller banks and credit unions. “The regulated [bank with >$10B in assets] debit fee is 0.05% + $0.21, while the unregulated is 1.60% + $0.05. Before the Durbin Amendment the fee was 1.190% + $0.10.” While this did lower debit card interchange, a few unintended consequences resulted. First, regulators expected that banks would make substantially less revenue, but failed to recognize that banks might increase other fees to offset the lost revenue stream: “Banks have cut back on offering rewards for their debit cards. Banks have also started charging more for their checking accounts or they require a larger monthly balance.” In addition, many smaller banks couldn’t recoup the lost revenue, leading to bankruptcies and consolidation. Second, because a flat fee was introduced regardless of transaction size, smaller merchants were charged more in interchange than under the prior system (which was pro-rated based on dollar amount). “One problem with the Durbin Amendment is that it didn’t take small transactions into account,” said Ellen Cunningham, processing expert at CardFellow.com. “On a small transaction, 22 cents is a bigger bite than on a larger transaction. Convenience stores, coffee shops and others with smaller sales benefited from the original system, with a lower per-transaction fee even if it came with a higher percentage.” These small retailers ended up raising prices in some instances to combat the additional fees, giving the law the opposite of its intended effect of lowering costs to consumers (the fee math is sketched after this list). Dimon is angry that this law has allowed fintech companies to earn more on debit card transactions: as shown above, smaller banks earn substantially more in interchange fees, and these smaller banks are moving quickly to partner with fintechs, which now power hundreds of millions of dollars in account balances; Dimon believes they are not paying enough attention to anti-money-laundering and fraud practices. In addition, fintechs are making money in suspect ways: Chime makes 21% of its revenue through high out-of-network ATM fees, and cash advance companies like Dave, Branch, and Earnin’ are offering what amount to payday loans to customers.

  3. Mastercard and Visa: A history of regulation. Visa and Mastercard have been the subject of many regulatory battles over the years. The US Justice Department announced in March that it would investigate Visa over online debit-card practices. In 1996, Visa and Mastercard were sued by merchants and settled for $3B. In 1998, the Department of Justice won a case against Visa and Mastercard for not allowing issuing banks to work with other card networks like AmEx and Discover. In 2009, Mastercard and Visa were sued by the European Union and forced to reduce debit card swipe fees by 0.2%. In 2012, Mastercard and Visa were sued for price-fixing fees and paid $6.25B in a settlement. The networks have been sued by the US, Europe, Australia, New Zealand, ATM operators, Intuit, Starbucks, Amazon, Walmart, and many more. Each time, they have been forced to modify fees and practices to ensure competition. However, this has also reinforced their dominance as the biggest payment networks, which is why no new competitors have been established since the creation of the networks in the 1970s. Also, leave it to the banks to establish a revenue source so good that it is almost entirely undefeatable by legislation. When, if ever, will Visa and Mastercard not be the dominant payments companies?
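
To make the small-ticket problem concrete, here is a toy fee calculator using the debit rates quoted above. The basket sizes are made up, and real interchange schedules vary by card type and acceptance method:

```python
# Toy debit interchange calculator using the rates quoted above:
# regulated: 0.05% + $0.21, unregulated: 1.60% + $0.05, pre-Durbin: 1.19% + $0.10.

def fee(amount: float, pct: float, flat: float) -> float:
    """Interchange fee on a transaction: percentage take plus flat fee."""
    return amount * pct + flat

for amount in (2.00, 47.00, 500.00):  # coffee, average ticket, big basket
    regulated = fee(amount, 0.0005, 0.21)
    unregulated = fee(amount, 0.0160, 0.05)
    pre_durbin = fee(amount, 0.0119, 0.10)
    print(f"${amount:>7.2f}: regulated ${regulated:.2f} | "
          f"unregulated ${unregulated:.2f} | pre-Durbin ${pre_durbin:.2f}")

# On a $2 coffee, the regulated fee (~$0.21) takes ~10.5% of the sale, versus
# ~$0.12 under the old pro-rated system - exactly the small-transaction
# problem Ellen Cunningham describes above.
```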

Dig Deeper

  • American Banker: Big banks, Big Tech face-off over swipe fees

  • Stripe Sessions 2019 | The future of payments

  • China's growth cements UnionPay as world's largest card scheme

  • THE DAY THE CREDIT CARD WAS BORN by Joe Nocera (Washington Post)

  • Mine Safety Disclosure’s 2019 Visa Investment Case

  • FineMeValue’s Payments Overview

tags: Visa, Mastercard, American Express, Discover, Bank of America, Stripe, Marqeta, Adyen, Apple, Open-loop, Closed-loop, Snowflake, AWS, Nike, BNPL, Andreessen Horowitz, Angela Strange, Slack, Google Cloud, UnionPay, BigCommerce, Jamie Dimon, Dodd-Frank, Durbin Amendment, JP Morgan Chase, Debit Cards, Credit Cards, Chime, Branch, Earnin', US Department of Justice, Intuit, Starbucks, Amazon, Walmart
categories: Non-Fiction
 

October 2020 - Working in Public: The Making and Maintenance of Open Source Software by Nadia Eghbal

This month we covered Nadia Eghbal’s instant classic about open-source software. Open-source software has been around since the late seventies, but only recently has it gained significant public and business attention.

Tech Themes

The four types of open source communities described in Working in Public

  1. Misunderstood Communities. Open source is frequently viewed as an overwhelmingly positive force for good - taking software and making it free for everyone to use. Many think of open source as community-driven, where everyone participates and contributes to making the software better. The theory is that so many eyeballs and contributors improve security and reliability and increase distribution. In reality, open-source communities follow the “90-9-1” rule and act more like social media than you might think. According to Wikipedia, the “90–9–1” rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. To show how this applies to open source, Eghbal cites a study by North Carolina State University researchers: “One study found that in more than 85% of open source projects the research examined on Github, less than 5% of developers were responsible for 95% of code and social interactions.” These creators, contributors, and maintainers are developer influencers: “Each of these developers commands a large audience of people who follow them personally; they have the attention of thousands of developers.” Unlike Instagram and Twitch influencers, who often actively try to build their audiences, open-source developer influencers sometimes find the attention off-putting - they simply published something to help others and suddenly found themselves with actual influence. The challenging truth of open source is that core contributors and maintainers give significant amounts of their time and attention to their communities - often spending hours at a time responding to pull requests (requests for changes / new features) on GitHub. Evan Czaplicki’s insightful talk, “The Hard Parts of Open Source,” speaks to this challenging dynamic. Evan created the open-source project Elm, a functional programming language that compiles to JavaScript, because he wanted to make functional programming more accessible to developers. As one of its core maintainers, he has repeatedly been hit with “Why don’t you just…” requests from non-contributing developers angrily asking why a feature wasn’t included in the latest release. As fastlane creator Felix Krause put it, “The bigger your project becomes, the harder it is to keep the innovation you had in the beginning of your project. Suddenly you have to consider hundreds of different use cases…Once you pass a few thousand active users, you’ll notice that helping your users takes more time than actually working on your project. People submit all kinds of issues, most of them aren’t actually issues, but feature requests or questions.” When you use open-source software, remember who contributes to and maintains it - and the days and years poured into the project for the sole goal of increasing its utility for the masses.

  2. Git it? Git was created by Linus Torvalds in 2005. We talked about Torvalds last month; he also created the most famous open-source operating system, Linux. Git was born in response to a skirmish with Larry McVoy, the head of the proprietary tool BitKeeper, over the potential misuse of his product. Torvalds went on vacation for a week and hammered out the most dominant version control system today: git. Version control systems allow developers to work simultaneously on projects, committing changes to a centralized branch of code. They also allow any change to be rolled back to an earlier version, which can be enormously helpful when a bug is found in the main branch. Git ushered in a new wave of version control, but the open-source tool was somewhat difficult for the untrained developer to use. Enter GitHub and GitLab, two companies built around making the git version control system easier for developers to use. GitHub came first, in 2007, offering a platform to host and share projects. The GitHub platform was free but not open source - developers could use its hosting platform but not build onto it. GitLab started in 2014 to offer an alternative, fully open-source platform that let individuals self-host a GitHub-like tracking program, providing improved security and control. Because of GitHub’s first-mover advantage, however, it has become the dominant platform upon which developers build: “Github is still by far the dominant market player: while it’s hard to find public numbers on GitLab’s adoption, its website claims more than 100,000 organizations use its product, whereas GitHub claims more than 2.9 million organizations.” Developers find GitHub incredibly easy to use, which has created an enormous wave of open-source projects and code-sharing. The company added 10 million new users in 2019 alone, bringing the total to over 40 million worldwide. This growth prompted Microsoft to buy GitHub in 2018 for $7.5B. We are in the early stages of this development explosion, and it will be interesting to see how increased code accessibility changes the world over the next ten years.

  3. Developing and Maintaining an Ecosystem Forever. Open source communities are unique and complex, with different user and contributor dynamics. Eghbal segments open source communities into four buckets - federations, clubs, stadiums, and toys - characterized in the two-by-two matrix pictured above, based on contributor growth and user growth. Federations are the pinnacle of open-source development: many contributors and many users, creating a vibrant ecosystem of innovative development. Clubs represent more niche and focused communities, including vertical-specific tools like the astronomy package Astropy. Stadiums are highly centralized but large communities - typically only a few contributors but a significant user base. It is up to these core contributors to lead the ecosystem, as opposed to decentralized federations, which have so many contributors they can go in all directions. Lastly, there are toys, which have low user growth and low contributor growth but may actually be very useful projects. Interestingly, projects can shift in and out of these community types as they become more or less relevant. For example, developers from Yahoo open-sourced their Hadoop project, based on Google’s File System and MapReduce papers. The project slowly became huge, moving from a stadium to a federation, and formed subprojects around it, like Apache Spark. What’s interesting is that projects mature and change, and code can remain in production for years after a project’s day in the spotlight has passed. According to Eghbal, “Some of the oldest code ever written is still running in production today. Fortran, which was first developed in 1957 at IBM, is still widely used in aerospace, weather forecasting, and other computational industries.” These ecosystems can exist forever, but their costs (creation, distribution, and maintenance) are often hidden, especially maintenance. The cost of creation and distribution has dropped significantly in the past ten years - with many of the world’s developers all working in the same ecosystem on GitHub - but that same growth has increased the total cost of maintenance, and the maintenance cost can be significant. Bootstrap co-creator Jacob Thornton likens maintenance to caring for an old dog: “I’ve created endlessly more and more projects that have now turned [from puppies] into dogs. Almost every project I release will get 2,000, 3,000 watchers, which is enough to have this guilt, which is essentially like ‘I need to maintain this, I need to take care of this dog.’” Communities change from toys to clubs to stadiums to federations, but they may also change back as new tools are developed. Old projects still need maintaining, and that code and maintenance come down to committed developers.

Business Themes

  1. Revenue Model Matching. One of the earliest code-hosting platforms was SourceForge, founded in 1999. The company pioneered the idea of code-hosting - letting developers publish their code for easy download - and became famous for letting open-source developers use the platform free of charge. SourceForge was created by VA Software, an internet bubble darling that saw its stock price decimated when the bubble finally burst. The challenge with scaling SourceForge was a revenue model mismatch: VA Software made money with paid advertising, which allowed it to offer its tools to developers for free but meant its revenue was highly variable. When the company went public, it was still a small and unproven business, posting $17M in revenue against $31M in costs. The revenue model mismatch is starting to rear its head again, with traditional software-as-a-service (SaaS) recurring subscription models catching some heat. Many cloud service and API companies now price by usage rather than a fixed, high-margin subscription fee. This is the classic electric utility model: you only pay for what you use (a toy comparison of the two models appears after this list). Snowflake CEO Frank Slootman (who formerly ran SaaS pioneer ServiceNow) commented: “I also did not like SaaS that much as a business model, felt it not equitable for customers.” Snowflake instead charges for credits that are drawn down by usage. The issue with usage-based billing has traditionally been price transparency, which can be obfuscated with customer credit systems and incalculable pricing, as at Amazon Web Services. The revenue model mismatch was just one problem for SourceForge: as git became the dominant version control system, SourceForge was reluctant to support it, opting for its traditional tools instead. Pricing norms change and new technology comes out every day; it’s imperative that businesses have a strong grasp of the value they provide to their customers and align their revenue model with it, so a fair trade-off is created.

  2. Open Core Model. There has been enormous growth in open-source businesses in the past few years, most of which operate on an open core model. Open core means the company offers a free, normally feature-limited, version of its software alongside a proprietary enterprise version with additional features. Developers might adopt the free version but hit usage limits or feature constraints, causing them to purchase the paid version. The open-source “core” is often just that - freely available for anyone to download and modify; the core’s actual source code is normally published on GitHub, and developers can fork the project or do whatever they wish with that open core. The commercial product is normally closed source, giving the business a product to sell. Joseph Jacks, who runs Open Source Software (OSS) Capital, an investment firm focused on open source, describes four types of open core business models, which differ based on how much of the software is open source. GitHub, interestingly, employs the “thick” model of being mostly proprietary, with only 10% of its software truly open-sourced. It’s funny that the site that hosts and facilitates the most open-source development is itself proprietary. Jacks nails the most important question in the open core model: “How much stays open vs. How much stays closed?” The consequences can be dire for a business - open source too much, and other companies can quickly recreate your tool; many DevOps tools have experienced these perils, with some companies losing control of the projects they were supposed to facilitate. On the flip side, keeping more of the software closed source goes against the open-source ethos and can be viewed as selling out. The continuous delivery pipeline project Jenkins has struggled to satiate its growing user base, leading to a blog post from the leadership of CloudBees, the company behind Jenkins, entitled “Shifting Gears”: “But at the same time, the incremental, autonomous nature of our community made us demonstrably unable to solve certain kinds of problems. And after 10+ years, these unsolved problems are getting more pronounced, and they are taking a toll — segments of users correctly feel that the community doesn’t get them, because we have shown an inability to address some of their greatest difficulties in using Jenkins. And I know some of those problems, such as service instability, matter to all of us.” Striking this balance is incredibly tough, especially in a world of competing projects and finite development time and money in a commercial setting. Furthermore, large companies like AWS are taking open core tools like Elastic and MongoDB and recreating them as proprietary services (Elasticsearch Service and DocumentDB), prompting those companies’ CEOs to appropriately lash out. Commercializing open-source software is a never-ending battle against proprietary players and yourself.

  3. Compensation for Open Source. Eghbal characterizes two types of open-source funders: institutions (companies, governments, universities) and individuals (usually developers who are direct users). Companies like to fund improved code quality, influence, and access to core projects. The largest contributors to open-source projects are mainly corporations like Microsoft, Google, Red Hat, IBM, and Intel. These corporations are big and profitable enough to hire individual developers and let them strike a comfortable balance between time spent on commercial software and time spent on open source. This also functions as a marketing expense: big companies like having influencer developers on payroll to get the company’s name out into the ecosystem. Evan You, who authored the JavaScript framework Vue.js, described company-backed open-source projects this way: “The thing about company-backed open-source projects is that in a lot of cases… they want to make it sort of an open standard for a certain industry, or sometimes they simply open-source it to serve as some sort of publicity improvement to help with recruiting… If this project no longer serves that purpose, then most companies will probably just cut it, or (in other terms) just give it to the community and let the community drive it.” In contrast to company-funded projects, developer-funded projects are often donation-based. With the rise of online tools for encouraging payments like Stripe and Patreon, more and more funding is being directed to individual open-source developers. Unfortunately, it is still hard for many developers to sustain full-time open-source work on individual contributions, especially if they work on multiple projects at the same time. Open-source developer Sindre Sorhus explains: “It’s a lot harder to attract company sponsors when you maintain a lot of projects of varying sizes instead of just one large popular project like Babel, even if many of those projects are the backbone of the Node.js ecosystem.” Whether working inside a company or as an individual developer, building and maintaining open-source software takes significant time and effort and rarely leads to significant monetary compensation.
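
As referenced in the revenue model item above, here is a toy comparison of flat subscription billing versus usage-based (credit) billing. All prices and usage figures are hypothetical, purely to illustrate the trade-off:

```python
# Toy comparison of flat SaaS subscription vs. usage-based (credit) billing.
# All prices and usage figures are hypothetical.

FLAT_MONTHLY_FEE = 2000.00   # hypothetical subscription price per month
PRICE_PER_CREDIT = 2.50      # hypothetical per-credit price, Snowflake-style

monthly_credits_used = [400, 950, 120, 700]  # a spiky, realistic usage pattern

for month, credits in enumerate(monthly_credits_used, start=1):
    usage_bill = credits * PRICE_PER_CREDIT
    print(f"month {month}: subscription ${FLAT_MONTHLY_FEE:,.0f} "
          f"vs. usage-based ${usage_bill:,.0f}")

# Subscription revenue is smooth and high-margin for the vendor; usage-based
# billing tracks the value actually delivered but makes both revenue and the
# customer's bill volatile - Slootman's "equitable for customers" trade-off.
```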

Dig Deeper

  • List of Commercial Open Source Software Businesses by OSS Capital

  • How to Build an Open Source Business by Peter Levine (General Partner at Andreessen Horowitz)

  • The Mind Behind Linux (a talk by Linus Torvalds)

  • What is open source - a blog post by Red Hat

  • Why Open Source is Hard by PHP Developer Jose Diaz Gonzalez

  • The Complicated Economy of Open Source

tags: Github, Gitlab, Google, Twitch, Instagram, Elm, Javascript, Open Source, Git, Linus Torvalds, Linux, Microsoft, MapReduce, IBM, Fortran, Node, Vue, SourceForge, VA Software, Snowflake, Frank Slootman, ServiceNow, SaaS, AWS, DevOps, CloudBees, Jenkins, Intel, Red Hat, batch2
categories: Non-Fiction
 

October 2019 - The Design of Everyday Things by Don Norman

Psychologist Don Norman takes us on an exploratory journey through the basics of functional design. As the consumerization of software grows, this book’s key principles will become increasingly important.

Tech Themes

  1. Discoverability and Understanding. Discoverability and understanding are two of the most important principles in design. Discoverability answers the question: “Is it possible to figure out what actions are possible and where and how to perform them?” Discoverability is absolutely crucial for first-time application users because poor discovery of actions leads to a low likelihood of repeat use. On discoverability, Scott Berkun notes that designers should prioritize what can be discovered easily: “Things that most people do, most often, should be prioritized first. Things that some people do, somewhat often, should come second. Things that few people do, infrequently, should come last.” Understanding answers the questions: “What does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?” We have all seen and used applications where features and complications dominate the settings and layout. Understanding is simply about allowing the user to make sense of what is going on in the application. Together, discoverability and understanding lay the groundwork for successful task completion before a user is familiar with an application.

  2. Affordances, Signifiers and Mappings. Affordances represent the set of actions that are possible; signifiers communicate the correct action to take. If we think about a door, depending on the design, possible affordances could be push, slide, pull, twist the knob, etc. Signifiers represent the correct action, or the action the designer would like you to perform. In the context of a door, a signifier might be a metal plate that makes it obvious the door must be pushed. Mappings provide a straightforward correspondence between two sets of objects. For example, when setting the brightness on an iPhone, swiping up increases brightness and swiping down decreases it, as a new user would expect. Design issues occur when affordances, signifiers, and mappings are mismatched. Doors provide another great example of poor coordination among the three - everyone has encountered a door with a handle and a “push” sign over it. This is normally followed by an uncomfortable pushing and pulling motion to discover which actions the door actually supports. Why is there a handle if I am supposed to push? Good design and alignment among affordances, signifiers, and mappings make life easier for everyone.

  3. The Seven Stages of Action. Norman lays out the psychology underpinning user decisions in seven stages: goal, plan, specify, perform, perceive, interpret, compare. The first three (goal, plan, specify) represent the formulation of an action to be taken on the world. Once the action is performed, the final three steps (perceive, interpret, compare) make sense of the new state of the world. The seven stages of action generalize the typical user’s interactions with the world. With these stages in mind, designers can understand potential breakdowns in discoverability, understanding, affordances, signifiers, and mappings. As users perform actions within applications, understanding each part of the customer journey allows designers to prioritize feature development and discoverability.

Business Themes

Norman’s seven stages of action (redrawn from Norman, 2001)
  1. The best product does not always win, but... If the best product always won out, large entrenched incumbents across the software ecosystem like IBM, Microsoft, Google, SAP, and Oracle would be much smaller companies. Why are there so many large behemoths that won’t fall? Each has made deliberate design decisions to reduce customer churn. While most of the large enterprise software providers suffer from feature creep, that same product and deployment complexity can itself deter churn. For example, enterprise CIOs do not want to spend budget to re-platform from AWS to Azure unless there has been a major incident or continued frustration with ease of use. Interestingly, as we’ve discussed, the transition from license/maintenance software to SaaS, as well as the consumerization of the enterprise, is changing the importance of good design and user experience. Look at Oracle, for example: the business has acquired several applications built on Oracle databases, but the poor user experience and complexity of those applications are starting to push Oracle out of businesses.

  2. Shipping products on time and on budget. “The day a product development process starts, it is behind schedule and above budget.” The product design process is long and complex because a wide array of disciplines is involved. Each discipline thinks it is the most important part of the process and may have its own reasons for including a particular feature, which can conflict with good design. To alleviate some of that complexity, Norman suggests hiring design researchers who sit outside the product development process. These researchers study how users work in the field and generate additional use cases and designs all the time, so that when a development process kicks off, target features and functionality have already been suggested.

  3. Why should business leaders care about good design? We have already discussed how product design can act as a deterrent to churn. If processes and applications become integral to company function, there is a low chance of churn unless there is continued frustration with ease of use. Measuring product-market fit is difficult, but from a metrics perspective, companies can look at gross churn ($ or customers lost in the period divided by beginning ARR or beginning customers) or NPS to judge how well their product is being received (a quick gross churn calculation is sketched below). Good design is a direct contributor to improved NPS and better retention. When you complement good design with several hooks into the customer, churn falls.
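
For concreteness, here is the gross churn calculation described above, with hypothetical figures:

```python
# Gross churn as described above (hypothetical figures).
beginning_arr = 10_000_000  # ARR at the start of the period
churned_arr = 800_000       # ARR lost from customers who left

gross_churn = churned_arr / beginning_arr
print(f"gross churn: {gross_churn:.1%}")  # 8.0%

# The same formula works on counts: customers lost / beginning customers.
```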

Dig Deeper

  • UX Fundamentals from General Assembly

  • Why game design is crucial for preventing churn

  • Figma and InVision - the latest product development tools

  • Examples of bad user experience design

  • Introduction to Software Usage Analytics

tags: Internet, UX, UI, Design, Apple, App Store, AWS, Azure, Amazon, Microsoft, Oracle, batch2
categories: Non-Fiction
 
