Weekly Update: When Money is Getting Expensive, Good News is Bad News
A few highlights from the week of 10/3
For the week, the WCLD cloud index moved up 1.9% to $27.17, though it’s hard to call the week positive: the WCLD tumbled 5.9% on Friday in a bloody session that saw the NASDAQ fall 3.8%.
The “good news is bad news” moment was Friday’s pre-market US jobs report, which showed unemployment had dropped to 3.5%. Good news, right?
Well, maybe not:


The economy is still running hot. This means inflationary fires are likely to keep burning unless the Fed continues to hike aggressively:

This implies the Fed funds rate will be 4.5% by December. Keep in mind, the rate was 0.25% as recently as this past March.
If I can get a guaranteed 4.5% return from a 1-year US Treasury, why would I own riskier assets like equities or bonds? At least, that’s the sentiment that drove Friday’s sell-off in both equities and bonds.
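To make the opportunity-cost math concrete, here’s a rough sketch (my own illustration, not from the jobs report or the Fed) of how a higher risk-free rate shrinks the present value of a future cash flow. The cash flow amount and five-year horizon are hypothetical:

```python
# A rough sketch of the opportunity-cost logic: the same future cash flow is
# worth less today when discounted at a higher risk-free rate.
# The cash flow and horizon below are hypothetical.

def present_value(cash_flow: float, rate: float, years: int) -> float:
    """Discount a single future cash flow back to today."""
    return cash_flow / (1 + rate) ** years

future_cash_flow = 100.0  # e.g., $100 expected five years from now
for rate in (0.0025, 0.045):  # ~0.25% (this past March) vs. ~4.5% (expected by December)
    print(f"at {rate:.2%}: ${present_value(future_cash_flow, rate, 5):.2f}")
# at 0.25%: $98.76
# at 4.50%: $80.25
```

The further out the cash flows, the bigger the haircut, which is why long-duration assets like growth equities are hit hardest when rates rise.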
Why does this matter to SaaS startups or early-stage VCs?
Most tech startups rely on money from external sources to fund their growth
The price of money has rapidly escalated
Buyers & sellers of any asset are slow to digest a sudden market dislocation
Late stage venture capital has ground to a virtual halt as buyers & sellers of venture capital (i.e. startups & VCs) recalibrate their expectations
Oddly, the seed stage VC market has been vibrant since Labor Day. The skeptical interpretation of this is that seed stage VCs are acting as though they have secret knowledge that current market conditions will soon unwind. Of course, they don’t.
The charitable interpretation is that seed stage VCs have a long-term time horizon & realize that today’s market conditions have little bearing on market conditions in 6-10 years, when today’s successful seed startups will be selling or IPO'ing.1
In any case, there is an odd disconnect between the seed stage & later stages of VC. If VC were a highly efficient market, some seed capital would shift to later stages. But nobody would accuse the VC market of being highly efficient!
Podcast recommendation of the week:
Iconiq’s Doug Pepper gave an insightful overview of current market conditions in VC on the Full Ratchet Podcast.
For me, the most important takeaway is the heightened importance of capital efficiency in today’s startup market. It’s a rude awakening for tech startups that were routinely able to raise up-rounds while burning $5-10 for every $1 of new revenue. High-burners can still occasionally get funded, but they are quickly becoming the exception rather than the norm.
Reading recommendation of the week:
A great overview of capital efficiency comes from a post written in April 2020 by Craft’s David Sacks.2
The key insight is that startups should be targeting a burn multiple of no more than 2x. The lower, the better. Sacks gives a great explanation of why capital efficiency provides the clearest lens for understanding a startup’s quality:
The beauty of the Burn Multiple is that it’s a catch-all metric. Any serious problem will eventually impact the Burn Multiple by either increasing burn, decreasing net new ARR, or (most tricky) increasing both but at disproportionate rates.
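For anyone who wants to put numbers on it, here’s a minimal sketch of the burn multiple as Sacks defines it (net burn divided by net new ARR over the same period). The quarterly figures are hypothetical:

```python
# A minimal sketch of the burn multiple per Sacks's definition:
# net burn divided by net new ARR for the same period.
# The quarterly figures below are hypothetical.

def burn_multiple(net_burn: float, net_new_arr: float) -> float:
    """Dollars burned for every dollar of net new ARR added."""
    if net_new_arr <= 0:
        raise ValueError("undefined when net new ARR is zero or negative")
    return net_burn / net_new_arr

# Hypothetical quarter: burn $2M to add $1.25M of net new ARR
print(f"{burn_multiple(2_000_000, 1_250_000):.1f}x")  # 1.6x -- under the 2x target
```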
While there have been attempts to popularize other tests of product-market-fit,3 ultimately there is nothing that cuts through the fog of startup metrics like capital efficiency. Product-market-fit without capital efficiency is not product-market-fit.
1 The problem with this scenario is that it overlooks that today’s seed stage startups will need to raise a Series A in the next year or two. If today’s seed stage valuation is the same as next year’s likely Series A valuation, that’ll be a problem.
2 Ironically, that post reflected a moment of cautious VC behavior that quickly boomeranged into the Covid-Induced Bubble of 2021.
3 If you’re considering using the Superhuman test for PMF, please let me give you a few reasons why you shouldn’t:
If you have PMF, it should be slapping you in the face (no need to run a survey)
Fuzzy survey questions are great for hypothesis generation, not hypothesis testing
It invites making causal inferences about product quality from what may be random statistical noise (see the sketch after this list)
Survey responders are a biased sample, making the 40% threshold even more arbitrary
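To illustrate the statistical-noise point, here’s a toy simulation (my own, not from the Superhuman post) of how often a small survey clears the 40% “very disappointed” bar purely by chance. The true rate and responder count are assumptions:

```python
# A toy simulation of how sampling noise alone can push a small survey past
# the 40% "very disappointed" threshold. The true rate and responder count
# below are assumptions for illustration.
import random

random.seed(0)
TRUE_RATE = 0.30      # assume the real "very disappointed" share is below 40%
N_RESPONDERS = 40     # a typical small early-stage survey
TRIALS = 10_000

passes = 0
for _ in range(TRIALS):
    very_disappointed = sum(random.random() < TRUE_RATE for _ in range(N_RESPONDERS))
    if very_disappointed / N_RESPONDERS >= 0.40:
        passes += 1

print(f"{passes / TRIALS:.0%} of simulated surveys clear 40% by chance alone")
# Roughly one in ten surveys "passes" despite a true rate of only 30%
```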