Dominating the news today is Snowflake’s IPO, the largest software IPO in history, raising $3.4 billion. As I’m writing this, some analysts are even predicting that the stock may double in price as soon as retail traders are allowed to buy.
What does Snowflake do and what can we learn about its pricing strategy?
Let’s take a quick step back first. Big companies have lots of data. So much that it would shock you. But as of today it’s often locked up in strange places. Let’s take a marketing example: companies like Patagonia send you emails about the newest clothes on offer. Patagonia has your email address, click history, purchase history, and even your behavior on their website. It makes sense that this would all be in one place and easily accessible, right? Wrong.1
If Patagonia wants to create an app on their website that tracks whether you click the same sweater five times in five days and then sends you an email with a 10% discount, this can be a non-trivial problem. That’s because the data and the computing resources may live in different software systems.
Hence, Snowflake.
Snowflake – and they are not the only company in this space, by the way – is tearing down data silos by pulling data into a central repository. They do fancy stuff on top of this, but that’s the basic idea2. (“Data lake” is a common term for this in the tech space.) What Snowflake does differently is to sit, with this data and compute capacity, between a customer’s systems and the underlying infrastructure (e.g. Amazon Web Services – or AWS) to manage security, load balancing, and control.
What’s interesting about their pricing is how it sits on top of the hyperscaler infrastructure of AWS, Google Cloud Platform (GCP), and Microsoft Azure.
Pricing model
Snowflake prices on a consumption model (all the rage these days). The three levers it uses to determine that consumption are storage, compute, and cloud services.

To manage these consumption pieces, it uses a credit model. Credits are consumed according to specific rules that trigger when a particular action happens: if you run a compute workload, for example, one credit is consumed for each hour that workload runs in an XS virtual warehouse (see example below). The benefit of this model is how quickly you can scale your usage up or down. Most useful to developers is the ability to balance and control the business’s costs.
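To make the credit mechanics concrete, here is a minimal sketch of that metering logic. Only the XS rate (1 credit per hour) comes from the text above; the doubling of the rate at each larger warehouse size is an illustrative assumption, as are the function and table names.

```python
# Illustrative credit rates per hour of warehouse runtime.
# Only the XS rate is from the text; the doubling per size is assumed.
CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16}

def credits_consumed(warehouse_size: str, hours: float) -> float:
    """Credits billed for running a virtual warehouse of a given size for `hours`."""
    return CREDITS_PER_HOUR[warehouse_size] * hours

print(credits_consumed("XS", 1))   # 1 -> one hour in an XS warehouse costs one credit
print(credits_consumed("M", 2.5))  # 10.0
```

Because billing is driven by runtime hours rather than a fixed license, scaling down is as simple as suspending the warehouse – which is exactly the cost-control lever the model gives developers.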

Another interesting piece of the model is the “cloud services” pricing lever. Snowflake tells customers that typical utilization of cloud services is included for free – up to 10% of daily usage credits. This means most customers should not expect to see incremental charges for cloud services usage.
Example: a customer buys on average 1,000 credits per day. Snowflake allocates an additional 100 credits per day to cloud services usage (so a customer gets 1,100 credits per day).
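The arithmetic of that free allowance can be sketched as follows; the function name and parameters are hypothetical, but the logic mirrors the example above (10% of daily compute credits are granted free for cloud services, and only usage beyond that is billed).

```python
def billable_cloud_services(compute_credits: float, cloud_services_credits: float) -> float:
    """Cloud services usage up to 10% of daily compute credits is free;
    only the excess is billed."""
    free_allowance = 0.10 * compute_credits
    return max(0.0, cloud_services_credits - free_allowance)

# The article's example: 1,000 compute credits/day -> 100 free cloud services credits.
print(billable_cloud_services(1000, 80))   # 0.0  (within the free allowance)
print(billable_cloud_services(1000, 150))  # 50.0 (only the excess is billed)
```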
This is clearly a deliberate pricing and business strategy from Snowflake!
My guess is that 80%+ of customers do not overconsume cloud resources, meaning that Snowflake has designed this model to charge customers who add excess costs/load to Snowflake while giving the majority of customers a great deal.
When companies decide what to monetize – in this case storage and compute, rather than passing through AWS/GCP/Azure costs – they can simplify the purchasing process and build very valuable market caps.
1 This is a stylized example. I genuinely don’t know how Patagonia does things.
2 Snowflake also has piping technology that makes it easier to move data around, an analytics app to check problems or identify improvement opportunities, and a data marketplace where customers can securely buy/sell third-party data streams.