Our previous post introduced the concept of growth decay in recurring-revenue companies. We concluded, first, that there is a substantial persistence of revenue growth rates from one year to the next, and second, that growth rates decline relatively predictably as companies mature. We turn now to relating these two concepts to the valuation of a fast-growing, private SaaS company.
Investors usually take one of two broad approaches to valuing a company with negative current cash flows: either they apply a multiple to a point-in-time metric such as Price/Sales, or they build a full discounted cash flow model to determine the net present value of future cash flows. A Price/Sales multiple is problematic because it ignores expected future growth. A discounted cash flow model is tedious to build, and anchoring a valuation to cash flows that only turn positive years from now makes it acutely sensitive to model assumptions.
Using a growth rate decay trajectory is a compromise between the complexity of a time-based, cash flow approach and the simplicity of a point-in-time sales multiple. It uses two known inputs, current revenue and trailing revenue growth, and one assumption, namely the growth rate decay trajectory. It also fixes the time-frame of interest to a point five years in the future. We chose five years because that is a typical venture investor’s holding period from the initial investment.
For any given growth rate we postulate three possible growth paths forward: a high road, along which growth decays more slowly than usual; the norm, along which growth decays at the typical rate; and a low road, along which it decays faster.
While the difference in gradient between these three paths appears slight, the final column in the table shows that five years later the difference in revenue altitude can be dramatic. Take, for example, a company at a $2M annual revenue run rate that is currently growing 100% YoY (the final three rows in the table). If the company follows the high road of growth, it will end up five years later at a $33M run rate; but if growth follows the low road, revenue will have expanded to a more modest $12M run rate.
Sorting the table by the 5-year revenue growth is revealing. For any SaaS company to be more than 10x larger five years from now, it must be growing revenue at a CAGR higher than 80% today and the growth rate must decay no faster than the norm.
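The arithmetic is easy to sketch in code. Below is a minimal Python illustration of the projection; the decay factors are our assumptions (0.85 matches the typical year-over-year growth persistence from our earlier analysis, while 0.90 and 0.75 are illustrative high- and low-road values chosen to roughly reproduce the table's endpoints):

```python
def project_revenue(revenue, growth, decay, years=5):
    """Compound revenue forward while the growth rate decays each year."""
    for _ in range(years):
        growth *= decay        # next year's growth is a fraction of this year's
        revenue *= 1 + growth  # grow revenue at the decayed rate
    return revenue

# A $2M run-rate company growing 100% YoY, on each of the three paths:
for label, decay in [("high road", 0.90), ("norm", 0.85), ("low road", 0.75)]:
    print(f"{label}: ${project_revenue(2.0, 1.00, decay):.0f}M after 5 years")
# high road: $31M, norm: $23M, low road: $13M -- the same dramatic spread
# as the $33M vs. $12M example above.
```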
Once we have an expectation for the growth trajectory, the next question is how one should value the company. While any approach is bound to be as much art as science, we've often observed that venture investors have a tendency to apply one methodology when investing and a different one when planning for an exit. There are times when applying a 12x forward sales multiple to calculate a pre-money valuation makes sense; it is unlikely, though, that an acquirer will apply the same multiple five years later, when the growth rate has declined by 50%. It is important to use a consistent approach over time.
So what revenue multiple makes sense? Our approach is to simply use the expected 5-year revenue expansion and apply that as a sales multiple to the current revenue. We also apply a 25% discount at the time of investment to account for dilution from future capital requirements and the risk inherent in smaller companies.
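Continuing the sketch, the heuristic reduces to a few lines (this reuses project_revenue from the snippet above; the 0.85 decay factor and 25% discount are the assumptions already named):

```python
def entry_valuation(revenue, growth, decay=0.85, discount=0.25):
    """Expected 5-year revenue expansion, applied as a sales multiple today."""
    expansion = project_revenue(revenue, growth, decay) / revenue
    return revenue * expansion * (1 - discount)

# The same $2M company growing 100% YoY and decaying at the norm:
# expansion is ~11x, so the discounted entry valuation is roughly $17M.
print(f"${entry_valuation(2.0, 1.00):.0f}M")
```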
What is clear is that maintaining a high growth rate is key to getting a venture return. In order to achieve a 5-10x return for investors over 5 years, a company must both start with, and substantially maintain, a high growth rate. Even starting with a 100% growth rate is insufficient if the growth rate declines at the typical 15% each year. The final CAGR is a particularly strong determinant of the investment return for SaaS companies.
For both entrepreneurs and investors in recurring-revenue businesses the conclusions are stark: in order to build a meaningful business and secure venture returns, it is vital to maintain exceptionally high growth rates for an extended period of time. This is only achieved by brilliant execution as an early mover in a massive market.
The Child is Father of the Man
- The Rainbow, Wordsworth
As fall ends and winter approaches, management teams turn to the task of planning for the year ahead. In discussions with our portfolio CEOs, one question is asked frequently: “How fast can, or should, we grow?”
To answer the question, we looked at a wide variety of SaaS companies, both private and public, and found that the growth rate in any given year is highly predictive of the growth rate in the next.
Three points become clear from the scatter plot below:
The solid line on the chart represents the best fit to our data set. Its slope, roughly 0.85, relates the current year's growth rate to next year's likely growth: as a rule of thumb, next year's growth rate is likely to be 85% of this year's. As is apparent from the relatively tight cluster of points around the line, growth rates decay within a fairly predictable range as companies mature.
There are two corner cases worth exploring: large companies growing quickly and small companies growing slowly. Let's look at Salesforce.com as an example of the former. In the three years during which Salesforce grew from $100M to $500M (2005-2007), its annual growth rates were 84%, 76%, and 60%. A year later, it was still growing at a 50% CAGR. Even as Salesforce approached $1B in revenue, its growth decayed no faster than the norm: each year's growth rate was roughly 85% of the prior year's.
The second corner case is that of small companies growing slowly. All venture portfolios have a few companies in this category. While there is some satisfaction to be had from the annual increase in revenue, management teams are tempted to hope for (and often forecast) a sudden increase in growth rate. Our experience, though, is that low growth rates are frequently systemic to markets and are more suggestive of a constrained market size (or some other unfavorable dynamic) than a dysfunctional sales organization. In these cases, capital efficient growth is crucial to a good outcome.
In our next post on SaaS valuations we will look at how current growth rate and the expected ‘growth decay trajectory’ relate to future outcomes and, consequently, valuation multiples today. More importantly, we will discuss what small, fast growing companies can do to maintain high growth rates.
Ah, ’tis not what in youth we dreamed ‘twould be!
- Growing Old, Matthew Arnold
Hadoop was conceived in 2004 as the combination of a functional programming algorithm, MapReduce, and a file system that could be distributed across a cluster of servers, both based on papers published by Google engineers.
By 2007 Hadoop had been baptized into the Apache Software Foundation, and within a couple of years it was powering some of the world's largest websites. These pioneers in the big data market were information-centric web companies that had the necessary engineering talent to manage a cluster and, in most cases, derived utility from closing the loop with their production systems.
But is Hadoop really ready for the enterprise? Early adopters aside, can Hadoop grow up and reach the production-grade maturity needed to make it the de facto standard?
As mainstream enterprises reach the end of their proof-of-concept trials, our due diligence has frequently uncovered a growing realization that Hadoop is still immature. The promise of linearly scaling costs is often marred by the reality of a technology still lacking the characteristics necessary for enterprise adoption. Commonly cited obstacles encountered as enterprises have built out their Hadoop clusters include:
Furthermore, the batch-oriented nature of Hadoop has always been discordant with the progression towards real-time communication. The new world of business intelligence demands real-time metrics on everything from inventory to consumer demand.
There is a host of new technologies that marry big data and real-time analysis. Twitter recently open-sourced Storm, which can continuously query streams of data and update clients in real time. A Storm cluster is superficially similar to a Hadoop cluster but operates on incoming streams of data. Once enterprises have mined their historical data for insight, the next logical step will be to analyze the present.
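For illustration only (this is a toy Python sketch, not Storm's actual API), the shift in mindset looks like this: rather than batch-scanning a completed data set, a streaming computation updates a running result as each event arrives.

```python
from collections import Counter

def stream_word_counts(events):
    """Maintain a continuously updated word count over an incoming stream."""
    counts = Counter()
    for event in events:       # events arrive one at a time, unbounded
        counts.update(event.split())
        yield dict(counts)     # a fresh, queryable snapshot after each event

stream = iter(["error in checkout", "checkout ok", "error in search"])
for snapshot in stream_word_counts(stream):
    print(snapshot)  # real-time view; no waiting for a batch job to finish
```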
It's hard to see how Hadoop can simultaneously mature into a buttoned-down enterprise product and adapt to a real-time, agile world. But Hadoop's lumbering nature could lead to its extinction if it doesn't get into shape soon.
Recently we invested in Boundary, whose cloud-based solution monitors application performance in distributed application environments. Boundary is a company at the forefront of the next wave of disruption hitting the systems management market.
Our investment in Boundary is predicated on the thesis that a world in which virtually the entire application stack is outsourced requires a new monitoring application. As the underlying infrastructure (servers, switches, routers, and storage) used by modern web applications has become a commodity, the need for custom-built monitoring has disappeared. Companies should be spending their capital building a differentiated product, not writing Nagios scripts to diagnose problems in a continuously-changing environment.
As the stampede to the public cloud gathered momentum, it became clear to us that there was a need for a cloud-based network monitoring offering. As we spoke to companies using the public cloud, three themes emerged:
The era of the simple 3-tier web stack (MySQL – App Server – Apache) will soon be behind us. It was a model suited to websites with mostly static content accessed over a relatively slow wide-area network from a PC.
Modern web applications are now approaching the performance of desktop applications. They frequently combine Hadoop clusters, NoSQL databases, and a variety of web services making API calls all over the globe. These days it is difficult to pinpoint choke points and problem areas under different load conditions. When problems do occur, there is a mountain of diagnostic data that is often lost by the time the problem is identified. This problem is compounded when applications are distributed across different data centers and dispersed geographies.
As companies have migrated their infrastructure to public cloud hosting providers, an often-heard complaint from operations teams has been the lack of visibility into the network and unpredictable application performance. While there is a plethora of open-source network monitoring tools, with Nagios being the clear leader, there are few that combine both rich data and clear, graphical presentation. Even fewer perform well in an environment such as Amazon Web Services, where the network switches and routers connecting the servers are off-limits to the devops team.
A related problem is that the web is often a hostile environment. Bad bots, comment spam, and denial-of-service attacks are a source of frustration for all web properties. While there are methods of guarding against malicious traffic, it often goes undetected and is costly.
Agile engineering teams release new features at a tempo that keeps web applications in constant flux, destroying the static, predictable calm beloved by old-school operations teams. Boundary's ability to process staggering amounts of data in real time is essential to rapidly isolating problems.
When I first met Boundary, it was apparent that their solution resonated perfectly with the problems companies were experiencing as they architected dynamic web applications on public cloud infrastructure. Real-time, second-by-second visibility into every aspect of a modern app is no longer a luxury; it’s essential.
Last week saw overwhelming bipartisan support from the House of Representatives for the JOBS Act, a bill that would help small businesses raise capital. The bill passed 390-23 in the House and is now being debated in the Senate for a vote tomorrow. Unfortunately, the momentum coming out of the House is slowing as activists and union leaders rally support against the bill.
The JOBS Act includes two major elements:
Both of these reforms are critical to encouraging more capital to flow into young, innovative companies. Importantly, the JOBS Act accomplishes this within the context of the current regulatory structure, not outside it. The fundamental premise of the JOBS Act, as Harvard Business School professor Bill Sahlman has explained, is that the benefit of having more new companies formed and able to go public outweighs the potential costs of isolated bad behavior.
Please get involved and communicate your support for the JOBS Act today. The National Venture Capital Association has a page which allows you to sign a letter to the Senator of one of ten states where a senator’s vote could make a difference in the bill’s outcome.
Trends that grow exponentially for an extended period of time are always important. One trend that we have been following is the rapid increase in the number of APIs (application programming interfaces) over the past decade. It took eight years to reach the first 1,000 APIs; the web is now adding 1,000 APIs every four months! It is likely that the next 1,000 APIs will be added in under three months.
An API is a description of the interface that two software components use to talk to each other. As browser applications have become more dynamic and interactive, the need for more complex interactions has grown. In most cases, APIs define the connective tissue of the web and facilitate the exchange of data between the various components of an application.
Like many trends, the initial euphoria around ‘mashups’ (applications comprising a variety of web services) flared up in 2008 but then petered out. Enthusiasm was dampened, in part, by unresolved technology debates: whether to use REST or SOAP for integration points between websites, whether responses should be formatted in XML or JSON, and a host of other arcane questions.
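In practice, REST endpoints returning JSON have become the dominant pattern. A minimal sketch of such a call, with a hypothetical endpoint and field names:

```python
import json
from urllib.request import urlopen

# Hypothetical REST endpoint; the URL and fields are illustrative only.
url = "https://api.example.com/v1/photos?user=alice"
with urlopen(url) as response:
    photos = json.load(response)  # parse the JSON body into Python objects

for photo in photos:
    print(photo["id"], photo["caption"])
```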
Developers have been quick to realize the power of APIs. Instashirt provides a good example: by connecting Instagram's photo API to Zazzle's custom product design API, it lets users order T-shirts emblazoned with their favorite Instagram photos. By allowing unaffiliated developers access to their APIs, both Zazzle and Instagram have benefited.
Yesterday, Nike announced that it is opening an API for NikeFuel, its metric for tracking physical activity. The API will allow third-party music developers to add NikeFuel features to their apps. And you can now share your workouts with the mobile app Path via its API. Will all of these be successful? It's hard to tell, but allowing other developers to experiment with and add value to the FuelBand is bound to lead to successes the original designers never envisioned.
A well-designed, accessible API can be a tremendous point of leverage for startups. By harnessing the passion of developers, startups can rapidly add value for their customers. This has become even more important in a post-web world where services are increasingly accessed via a multitude of tablets and smartphones. Mobile applications have been the fuel on the API fire: looking at the Zappos metrics, it's clear that mobile devices have become a significant component (40%) of their API traffic.
The needs of mobile developers differ in some important ways, though. Mobile applications tend to make many more requests, but with smaller data payloads. As these needs collide with APIs designed for browser-based access, APIs will have to become more sophisticated, allowing developers to filter data more efficiently on the server, as sketched below.
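One common approach, sketched here with a hypothetical fields query parameter rather than any particular vendor's API, is to let the client name exactly the fields it wants:

```python
# GET /v1/products/42?fields=id,name,price   (hypothetical endpoint)

def filter_fields(record, fields_param):
    """Return only the fields the mobile client asked for."""
    wanted = set(fields_param.split(","))
    return {k: v for k, v in record.items() if k in wanted}

product = {"id": 42, "name": "FuelBand", "price": 149.0,
           "description": "A long blob the phone never displays..."}
print(filter_fields(product, "id,name,price"))
# {'id': 42, 'name': 'FuelBand', 'price': 149.0}
```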
API analytics will eventually give enterprises an important view of their data. While web analytics gave enterprises insight into user behavior, API analytics will allow them to intersect their users' behavior with the value of their own data. Organizations will gradually begin to distinguish between strategic and incidental data. Netflix, for instance, likely values its database of user movie ratings far more highly than its database of actors and movie titles. The former will remain strategic to Netflix, but the latter will be open-sourced.
Tom Mornini of Engine Yard described the significance of APIs as the final realization of the dream of reusable code. It may be that open-source data is an even bigger side effect.
Companies these days have a variety of hosting options as the industry continues a tectonic shift from proprietary data centers to cloud-friendly platforms. Maciej Ceglowski, always an entertaining writer, has an excellent post comparing the various web hosting options to houses. Below are the five stages he outlines and some quotes from his post:
You never interact with the computer directly, but upload your code to the platform with the proper incantations and it runs. The orders vary in strictness, with Google App Engine requiring that you purify yourself of all worldly design habits before writing your app, Azure insisting you renounce the demon Unix, and Heroku somewhat more welcoming to the fallen.
Sometimes you will REALLY notice the neighbors, and can’t do anything about it. I/O performance in particular can be awful. Your operating system will lie to you about performance because it lives in the Matrix and can’t see all the way down to the hardware. And if you test the boundaries, you’ll discover you can’t actually do whatever you want. Deviate too far from expected behavior (by churning through millions of files, for example) and the R.A. will come knocking.
…the hosting company takes care of general housekeeping and holds your hand if you get scared. Think of it as renting a basement apartment from your parents.
The hosting company provides electricity, cooling, physical security, and some minimal “remote hands” service if you need someone to press a button or look at your blinkenlights. But ask not for whom the pager beeps — for sysadmin, it beeps for thee.
Good: No need to take hosting advice from blog posts.
Bad: God help you.
Over the past year most of the abbots and priors have decided to move their monks into the upper floors of the dorm. That arrangement appears to be acceptable to all. The chancellor of Amazon University has decided that the ecclesiastical orders should continue to do what they do best, think about application management and code control; the students and faculty will concern themselves with the lower level infrastructure.
While the past few months have seen some question whether PaaS uptake is accelerating, our view is that most applications will eventually reside in either a Dorm Room or a Monastery. As the price of computing continues to decline, the convenience of the outsourced platforms will gradually dominate the hosting decision. Cleaning a stately manor is too much like hard work for most.
GitHub launched in 2008 and over the past three years it has become the people’s choice code repository. Over time, we believe that it will infiltrate the enterprise software development environment as developers insist on the combination of convenience and productivity that GitHub enables. As this chart shows, despite being less than half the age of the other popular forges, GitHub already has significantly more commits.
Looking at the graph below, the future is even brighter for GitHub. Beginning in early 2010 and gaining pace in 2011, the number of Git installations has surpassed that of all the other version control systems. The chart represents installations on Debian Linux only, but we believe it is representative of an across-the-board trend: developers are switching en masse from Subversion, CVS, and Mercurial to Git.
In retrospect, the phenomenon that is GitHub seems inevitable, and one might ask why it took a half century to reach the current angle of repose. We believe that GitHub sits at the intersection of four long-term trends that have now reached full fruition, with the result that software development has been completely changed.
GitHub has fully exploited all the advantages of a young code base. Its user interface regularly delights even the savvy developers it serves. By making use of the latest HTML5 APIs, GitHub greatly enhances activities like navigating through a code directory structure.
As the system of record for software, it is natural that the rest of the software development ecosystem (bug trackers, project management, continuous integration, and testing tools) is now scrambling to integrate with GitHub. This will continue to reinforce GitHub's leadership position and make life easier for developers.
Travis CI gives a glimpse of what the future holds. Checking code into GitHub automatically triggers a test and integration build on Travis CI, a framework running on Heroku. This relieves developers of the tedium of integration testing and will result in better quality software.
Prior to GitHub, the accepted way of contributing to an open-source project was to write a message to the mailing list for that project asking for feedback on a proposed feature. GitHub introduced the idea of “pull requests” which are the best idea since the advent of open-source software. Pull requests have completely changed developer workflow and unleashed the full power of crowdsourcing in the developer marketplace.
“Pull request” is a slight misnomer; the real concept is that new features and bug fixes can now be pushed into a repository through a much simplified approval process. By cleverly integrating all the necessary communication tools to build a community, GitHub has made collaboration effective.
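For readers unfamiliar with the mechanics, a typical pull-request flow looks something like this (the repository URL and branch name are illustrative):

```bash
# Fork the project on github.com, then work against your fork:
git clone https://github.com/you/project.git
cd project
git checkout -b fix-login-bug          # create a topic branch
# ...edit code...
git commit -am "Fix login redirect"    # commit the change
git push origin fix-login-bug          # publish the branch to your fork
# Finally, open a pull request on github.com and discuss the change
# inline with the maintainers until it is merged.
```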
As developers increasingly gravitated towards GitHub for their personal, open-source projects, it was inevitable that over time they would demand to use Git at work. Clearly, few corporations can afford to expose their source code to the world, and so the paid model made natural sense.
Freemium business models always work best when a well-designed product with a great UI can be frictionlessly adopted by individuals and then be upsold to business buyers willing to pay for enterprise features such as privacy and security.
Underpinning the freemium business model is the fact that as the cost of compute, storage, and (most importantly) bandwidth has fallen, GitHub has been able to offer free project hosting to millions of users. The marginal cost of each new project is likely less than $1 per year.
By completely dominating the landscape for open-source software, GitHub fast became the default code repository for most developers and their personal projects. Over time, GitHub will infiltrate enterprise software development with a sustainable business model built on the love of individual users.
Two of the internet success stories of recent years, Zynga and Netflix, have both come to rely on cloud technology to run their business. Each, though, has taken a unique approach dictated by their different business models. Netflix, realizing that a video streaming platform will eventually be a commodity asset, has built its entire architecture on Amazon’s public cloud. Zynga has adopted a hybrid approach, electing to build out its own high-performance private cloud optimized for games and use Amazon’s cloud only when it needs additional capacity to scale a popular game quickly.
In 2007, Netflix was locked in battle with Blockbuster. A small but growing number of customers were streaming movies to their PCs, but the goal for 2008 was to stream content directly to the 65″ TVs in consumer living rooms. With the launch of Roku, the Netflix team realized that they were facing a massive need for both bandwidth and compute capacity.
By 2008, after another round of costly data center upgrades, Netflix's operations group faced the decision of whether or not to become experts at building infrastructure on a global scale. A year later, streaming content overtook the DVD delivery business as Netflix began to deliver content to web-connected TVs; consumers clearly preferred the instant gratification of on-demand streaming.
While Netflix management believed wholeheartedly that streaming video was the future of their business, building a brand new data center was ruled out because the future was simply too murky and the risk of going down a dead-end technology path too real. So Netflix pivoted at a multi-million dollar run rate.
2010 saw the entire consumer-facing front end move to Amazon's public cloud, AWS; 2011 saw the rest of the back end follow. Today, all that remains in Netflix's own data center is corporate IT and the DVD business, both dinosaurs in their own right. With 10,000 servers on AWS, Netflix is one of Amazon's largest customers, yet it still represents only a single-digit percentage of AWS capacity. While Netflix has found that it can't stretch the elastic service too far, it routinely adds 1,000 servers in a single day.
For Zynga, the launch of FarmVille in 2009 was the inflection point that necessitated a public cloud strategy to support the rapid growth. FarmVille reached 1 million users in its first five days and eventually peaked at 80 million users. At that time, Zynga's architecture ran entirely on Amazon's public cloud.
In 2010, Zynga decided to convert opex to capex and replicated AWS in their own data center. Today, they use AWS for bursting when they need additional capacity. Over time, they have learned how to ‘own the base and rent the spike’. By adding solid-state drives, tons of memory, and dual 10G networking cards to their own servers, they can stay on the bleeding edge of performance, while still leveraging Amazon’s ability to rapidly scale.
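A back-of-the-envelope Python sketch of the 'own the base and rent the spike' arithmetic; every price below is invented for illustration, not an actual AWS or hardware cost:

```python
OWNED_RATE = 0.04  # assumed all-in hourly cost of an owned server (amortized capex + opex)
CLOUD_RATE = 0.10  # assumed hourly cost of an equivalent rented cloud instance

def monthly_cost(base, spike, spike_hours, base_rate, hours=730):
    """Base servers run all month; spike servers run only during the spike."""
    return base * hours * base_rate + spike * spike_hours * CLOUD_RATE

# 1,000 servers of steady load plus a 500-server spike lasting 100 hours:
hybrid = monthly_cost(1000, 500, 100, OWNED_RATE)     # own the base
all_cloud = monthly_cost(1000, 500, 100, CLOUD_RATE)  # rent everything
print(f"hybrid: ${hybrid:,.0f}/mo vs. all-cloud: ${all_cloud:,.0f}/mo")
# hybrid: $34,200/mo vs. all-cloud: $78,000/mo -- owning the steady base
# wins once utilization is high, while the cloud still absorbs the spike.
```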
Why do these two different approaches make sense?
The component of the cost structure represented by the underlying infrastructure is substantially different in each case. 50% of Netflix’s costs are content fees and a further 25% are for postage. As the cost of both compute and bandwidth continue to fall, it is likely that content costs as a percentage of revenue will rise. For Netflix, squeezing out the last cent of cost on their delivery infrastructure is simply not worth it and they are content to piggyback on Amazon’s staggering capital investments.
Zynga, on the other hand, needs to reach multiple geographies and devices with games that each put a unique stress on the infrastructure. While public clouds are a ‘great four-door sedan’, each game often needs something different. Owning their own data center means they can architect their private cloud to perform better for games.
Netflix and Zynga are two very different companies, and neither could easily exist without Amazon's public cloud offering. But each has used the public cloud in a fundamentally different way in order to remain competitive.
We have written previously about the outsourcing of the web stack. In this post, we will add more color on why the outsourcing of the entire web platform makes sense. While developers have gravitated en masse to offerings like Heroku, there is still a wider lack of appreciation for why platform-as-a-service (PaaS) is a major trend.
In this post we are going to set aside the wider question of the economics of running your application on a PaaS versus hosting and maintaining your own servers. Our aim is to describe what constitutes a PaaS and how it differs from IaaS (such as Amazon Web Services) and other SaaS offerings like Salesforce.com.
There are a few other characteristics of the new breed of PaaS services which we would regard as optional components of a platform but which greatly enhance its utility. By integrating other components into the web stack and constraining these to a few well-curated and proven bundles, a PaaS offering can not only consolidate services into a single bill but also, perhaps more importantly from a developer's point of view, ensure interoperability and maintain a best-of-breed library. Heroku has done a great job of facilitating easy deployment of application add-ons such as log file management, error tracking, and performance monitoring, and many of the newer vendors, like DotCloud and Engine Yard, appear to be tackling even deeper integration.
There is often confusion as to the difference between PaaS and SaaS: a PaaS offering is an outsourced application stack sold to developers. A SaaS offering is a business application typically sold to business users. A SaaS offering replaces purchased, application-specific software with a hosted, service model. A PaaS replaces an organization’s internal data center.
The difference between PaaS and IaaS is more subtle and over time the dividing line is likely to blur. Today, the PaaS platforms begin where the IaaS services leave off: IaaS effects the outsourcing of the hardware components of the web stack. PaaS platforms effect the outsourcing of the middleware components of the web stack. It is the abstraction of the repetitive middleware configuration that has caught the imagination of developers. PaaS saves time, expedites time to market, and facilitates continuous deployments.