If you like this post, please forward it on and/or share on X :)
Compensation is one of the dearest things to my heart; it is the fundamental currency of work, enables human potential and achievement, sets incentives, and creates wealth. Every role I’ve had over my 15-year career has deeply touched people operations, org design, and compensation:
I spent years assisting multiple Fortune 500 companies with org structuring and compensation redesign
I reviewed 1,000+ offer letters from employees of elite unicorn companies, as COO of the largest platform for private company secondaries (Forge)
We sent out 10,000+ offer letters for 150+ early stage startups from AbstractOps, the company I previously founded
That’s put me in the lucky and unique position of having courtside seats to how companies of 10, 100, 1,000, or 10,000 people compensate.
Along the way, I’ve been tinkering with a universal leveling guide and compensation framework, and I’ve been pretty shocked by what I’ve discovered. I expected to find some patterns that might make it easier to compensate people, but...
It turns out that it is entirely possible to algorithmically predict compensation for any title, at any stage of company, anywhere in the world.
In this 3-part series published today, tomorrow, and the day after, we’ll cover:
I: why we need a better approach than our way today
II: how to estimate total rewards for any title & stage of company (and share a link to an easy calculator with the methodology)
III: how to set cash vs. equity compensation and apply location-based adjustments, if desired
A brief allegory about potato chips
Imagine you’re running a business making potato chips. Let’s call your company Tater potato chips (I know, that’s a singularly unimaginative name).
Your team is gathered in a conference room in December, trying to decide your bulk pricing quote for Target’s Northeast division, for 12oz bags.
How do you have this discussion?
What are Target’s alternatives for an apples-to-apples quote? Maybe Target let slip that Spuds, your biggest “healthy potato chip” competitor, quoted $1.20, so you want to be pretty close to that. But maybe you know Target likes your quality a lot more so you take that into account.
You might try to analyze Spuds’ sales contracts to figure out what they’re quoting Target, and Whole Foods, and Trader Joe’s. But this is probably impossible to get — it’s confidential information to Spuds. Plus, there’s no reason to believe the people at Spuds are perfect, either… they’re human, too.
You might look at your cost structure (it costs $1 to make a bag) to optimize sales or profit: at $1.25, Target might buy 100,000 bags, but at $1.40 they might buy 50,000. But maybe you make the same profit either way… so you’d weigh sales targets vs. profit targets and do what’s right for the business.
You might look at what you quoted Target last year. If you quoted $1.22, a $1.25 quote might work this year, but $1.40 might piss them off.
But really, best practice is to adopt an internal pricing strategy:
We should sell our bags for $1.20 - 1.30. We’ve done our focus groups and research and this is the fair market range to ensure that supermarkets can sell it for $2.00 after shipping, slippage, and their margin.
We should have breakpoints for volume (e.g., 50k bags = 5% discount and 150k bags = 10% discount).
We have some levers you can pull in a negotiation: long-term contracts, cross-sell discounts, and so on.
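The breakpoint logic above is simple enough to sketch in a few lines. This is a minimal illustration, not a real pricing engine — the function name and the $1.25 base price are made up, while the 50k/5% and 150k/10% breakpoints come from the example:

```python
def quote_price(base_price: float, quantity: int) -> float:
    """Apply illustrative volume breakpoints to a base unit price."""
    if quantity >= 150_000:
        discount = 0.10   # 150k+ bags -> 10% discount
    elif quantity >= 50_000:
        discount = 0.05   # 50k+ bags -> 5% discount
    else:
        discount = 0.0
    return round(base_price * (1 - discount), 4)

# A $1.25 base quote at different volumes:
print(quote_price(1.25, 40_000))   # 1.25
print(quote_price(1.25, 50_000))   # 1.1875
print(quote_price(1.25, 150_000))  # 1.125
```

The point of codifying it at all: once breakpoints are explicit, every quote is consistent and the only remaining conversation is about the levers (contract length, cross-sell), not the number itself.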
What you don’t do is to call an analyst at Gartner and have this conversation:
“Can you give me the average potato chip 12oz bag sale price for Target in the Northeast? Oh, you don’t have just 12oz, you group all single serving bags of 8-14 oz? That’s fine. Which stores is this for? Target and Walmart? Should be fine. You don’t have Northeast, just the Eastern seaboard? I suppose that works. Do you cut the data by healthy potato chips? No?”
So... why do we approach compensation this way?
So what?
It’s clear why a flawed approach to compensation is harmful: it’s literally people’s livelihoods; disparities can cause distraction, cultural problems, attrition, and more.
It is possible to solve compensation thoroughly — as we’ll show below — with an algorithmic approach to building a framework.
But in the absence of that, using benchmarks in a vacuum results in the following flaws in tackling compensation:
Wide ranges. It’s common to find a salary range of $61,000 - 97,000 for the same role at the same stage of company. The two ends of that range are very different outcomes.
Ambiguous definitions of “company stage”. It’s common for companies to pick a “Series B” company peer group, except that peer group is defined as “companies that have raised $20 - 100M.” If you’ve raised $25M, you have almost nothing in common with a company that’s raised $95M.
Lack of real-time data. Real-time data for compensation is a serious challenge, and there’s no easy solution to update data for hundreds of titles across dozens of company stages in hundreds of locations — in real time — except via an algorithmic approach.
Very bad coverage on equity data. Companies are rapidly converging on a “Total Rewards” approach to compensation, which combines base salary, target incentive compensation, and target equity value vested per year. This is the most intellectually honest way to frame a holistic compensation package for startups. However, 10,000 shares at company A is not the same as 10,000 shares at company B, even if they’re at the same stage of company — you have to account for the preferred share price, strike price, outstanding share count, and so on. Because of this complexity, both recruiting teams and candidates handwave or write off equity, which is a lose/lose proposition.
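The “10,000 shares isn’t 10,000 shares” point is easy to see with arithmetic. A minimal sketch, valuing an option grant at the spread between preferred price and strike — all of the company numbers below are hypothetical, and a real model would also handle dilution, taxes, and liquidity risk:

```python
def annual_equity_value(options: int, preferred_price: float,
                        strike_price: float, vesting_years: float = 4.0) -> float:
    """Rough target equity value vested per year for an option grant.

    Values the grant at the spread between the latest preferred share
    price and the strike price, spread over the vesting period.
    """
    spread = max(preferred_price - strike_price, 0.0)
    return options * spread / vesting_years

# 10,000 options at each of two hypothetical same-stage companies:
company_a = annual_equity_value(10_000, preferred_price=20.00, strike_price=4.00)
company_b = annual_equity_value(10_000, preferred_price=3.00, strike_price=0.60)
print(company_a)  # 40000.0 per year
print(company_b)  # 6000.0 per year
```

Identical share counts, nearly a 7x difference in annual value — which is exactly why handwaving equity is a lose/lose.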
Even worse coverage in less-common locations. There’s lots of data for San Francisco, New York City, and Los Angeles. Most other metro areas are spotty. Trying to find a Staff Engineer salary for a Series B company in Memphis? Nearly impossible. Trying to compensate someone in Munster, Indiana? Forget about it. Ankara, Turkey? Umm... good luck.
Cherry-picking benchmark data causes bias & disparities. When you look up benchmarks to make an offer to candidate A, who has a competing offer for $200K (or simply negotiates harder), vs. a comparable candidate B who doesn’t, guess what happens? You cherry-pick data to justify a $200K offer for candidate A (”we can go to the 90th percentile to win this candidate!”) vs. a $170K offer for candidate B (”as a default, we make offers at the 75th percentile”).
Candidates and employees don’t understand compensation. Without a systematic approach, it seems like offers and compensation are set without rhyme or reason. The most common reason is “this is market,” which usually ends the discussion but isn’t exactly reassuring or satisfying.
I could keep going. Even writing it out, this honestly sounds like an impossibly complex problem; but there’s light at the end of the tunnel, I promise.
Benchmarks and Frameworks as Yin & Yang
Compensation benchmarks are very valuable — even essential — as an input and sanity check in a well-run compensation process. In addition, they are a valuable resource for real-time trends in compensation, particularly when the market is shifting rapidly.
(Going back to potato chips: the Tater team should be analyzing potato prices, reviewing financials for Frito-Lay, etc. to stay informed on the market, and use that to inform the pricing strategy. But the pricing team shouldn’t use industry reports in a vacuum to generate quotes day-to-day. This also sidesteps the biggest weakness of benchmarks: there isn’t perfect data on every title, but there is absolutely great data on a handful of roles in each function, which can anchor sanity checks and trend analysis.)
In other words, benchmarks are useful, but are not a substitute for a compensation framework (or philosophy, as expert leaders in HR & Total Rewards might call it), which should include:
1. Adopt a leveling guide
2. Set total rewards for each level
3. Validate against your existing compensation (to mitigate anomalies)
4. Sanity check against most recent benchmarking data
5. Fine-tune based on the information gleaned in steps #3 and #4
6. Determine cash/equity split for each level
#2 above — filling out total rewards for each level & title — is the hardest part, and that’s where we’ll start. How would one design a compensation framework without starting from benchmarks?
With math, of course.
The Surprising Opportunity, Tucked Away in the Data
Let’s take the example of a Series B company, making an offer to a Staff Software Engineer in Memphis. Attempting to find benchmarks for this is likely to turn up an error of “not enough data!”, or worse, provide a skewed or outdated estimate based on a very small sample size.
But… there are plenty of data points for Staff Engineers at Series B companies in New York City — and you can adjust them with a cost-of-living / location “index” for Memphis vs. New York City (roughly a 20-25% discount, by the way).
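The adjustment itself is one multiplication. A minimal sketch — only the ~20-25% Memphis discount comes from the post; the exact 0.78 index and the $220K NYC benchmark below are made-up illustrations:

```python
# Hypothetical location indices relative to New York City (= 1.00).
LOCATION_INDEX = {
    "New York City": 1.00,
    "Memphis": 0.78,   # roughly a 20-25% cost-of-labor discount
}

def adjust_for_location(nyc_benchmark: float, city: str) -> int:
    """Scale a New York City benchmark salary by a location index,
    rounded to the nearest dollar."""
    return round(nyc_benchmark * LOCATION_INDEX[city])

# A (hypothetical) $220K NYC Staff Engineer benchmark, adjusted for Memphis:
print(adjust_for_location(220_000, "Memphis"))  # 171600
```

Maintaining one well-sourced table of indices is a far smaller problem than collecting a statistically significant sample for every title in every metro area.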
No one does this.
This is because — it turns out — data providers have not taken a data science approach to compensation.
Today, we’re bringing (data) science to the art of compensation.
It turns out that using a combination of mathematical tools — simple regression models, data science, and AI — it is possible to create what is essentially a massive “solver” function for compensation.
It turns out that when you isolate the impact of key variables, there are a set of simple functions and “golden ratios” that explain millions of combinations for compensation: for any title, any stage, anywhere.
Instead of handwaving it, we want people — ranging from HR & compensation leaders, to candidates & employees, to founders & executives — to review, provide feedback on, and collectively improve the methodology. So we’re open-sourcing the Fair Offer Algorithm under a Creative Commons Attribution Non-Commercial license.
How to think about Fair Offer
Fair Offer is not a benchmark, it is an algorithm. You can’t see the specific data points underlying the 2+ million combinations of compensation it generates, because each individual output isn’t backed by data; rather, the entire algorithm reflects a pattern of compensation that we see in the market — including by filling in the gaps of data. That said, it has certainly been informed by thousands of data points, and on an ongoing basis it will be influenced by benchmarks and trends. For example, we expect the inputs and weights to be adjusted over time for inflation, changes in cost of living / labor across cities, and shifts in demand for specific roles.
This is the first “public” version — 2024a — and will evolve & improve over time. While the algorithm has been iterated upon since 2017, the reason we’re open-sourcing it is so that it can keep improving. Please provide feedback! Expect a new version of the Fair Offer Standard published every 3-6 months.
It is meant to be a “template” that companies customize & implement as a framework. While it might be a useful estimate for a role in a pinch, the best way to run compensation is to implement a leveling guide. Think of it like a “benefits policy” — just like you might look at an industry survey for employee health benefits to set your policy, Fair Offer + benchmarks can be used to inform your leveling & compensation policy process.
It is heavily tailored towards startups & tech companies. This is because these companies allocate a significant portion of compensation to equity — which increases complexity, creating the greatest room for improvement. While there are probably parallel algorithms for investment banking, or electricians, or retail workers, the scope of the current Fair Offer version is to cater to companies that consider equity to be a significant component of compensation.
Coming Next
In the next post (tomorrow), we’ll provide the actual algorithm — the logic and the surprisingly simple formulas — and link to a calculator that anyone can use to estimate fair compensation.