HashiCorp strives to make data the first thing we turn to in order to evaluate our efforts, support our customers, and inform our decisions. Leading with data sets shared expectations for what success looks like, and enables team members to contribute towards that success in clear ways.
The push to emphasize quantitative analysis is especially timely for HashiCorp: The recent launch of our cloud-based products has given us the opportunity to gather and apply product usage data. Our original open source and commercial products were (and still are) predominantly downloadable software, so we had no visibility into how many people used our software, or what features they were and weren’t using. We are now able to incorporate insights from product usage to inform our ongoing strategy and operations, and we have a responsibility to our customers to do so.
As we start to gather more data, we want to approach leading with data thoughtfully. There are three steps for using data to empower teams. First, use data to gather context about what is and isn’t working well. Second, define goals based on that context to set expectations for success across a team or many teams. Finally, derive insights from data to honestly reflect on what can be optimized to reach our goals.
To figure out what is working well and what isn’t, start by asking questions. Broadly speaking, we define data as quantitative information used to answer the question, “How does your team measure improvement?” Putting time and thought into refining your initial inquiries is essential to understanding what data you need.
A lot of people don’t spend enough time asking the right question. And then they start gathering all this information and don’t understand how it adds up. Starting with the right question is important.
Asking the right question enables us to align on a shared truth, establish baselines, and measure improvements from there. It also allows us to prioritize resources toward specific initiatives for measurable impact and to tighten feedback loops for faster decision making.
Here are some questions to help you gather context:
What questions are frequently asked of your team? Can these be answered with data?
How is success measured for this team/department/project? What are the top 3 metrics to assess success?
What company North Stars or objectives do these metrics align to?
Who, besides your team, will care most about this metric?
What levels of accuracy and precision are required?
What frequency should the metrics be gathered and/or reviewed to be most useful?
We often have to make a trade-off between time and accuracy when assessing metrics, so that last question is meant to ascertain whether data now is more useful than data later. Some metrics require high accuracy, while others don't need as much specificity. For example, because a product's Monthly Active Users (MAUs) are often reported to the executive team and board, this metric should be extremely accurate: we should have a clear definition of what counts as an MAU and make sure that each user is unique. Other times, the data you collect may be incomplete or imprecise but still sufficient to provide direction. For example, if a manager wants to know whether their team members are participating in company learning programs, a precise accounting of every course taken, every course completed, and the time spent in each is not a good use of effort when a simple participation rate across all courses answers the question.
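As a minimal sketch of that trade-off, the participation question above can be answered with a simple rate rather than a per-course accounting. The names and data here are invented for illustration:

```python
# Hypothetical course-enrollment records: employee -> courses enrolled in.
# Names and numbers are illustrative, not real data.
enrollments = {
    "ana": ["terraform-101", "vault-101"],
    "ben": [],
    "caro": ["consul-101"],
    "dev": [],
}

# A precise accounting would track completions and hours per course;
# for the manager's question, a simple participation rate is enough.
participants = sum(1 for courses in enrollments.values() if courses)
participation_rate = participants / len(enrollments)
print(f"participation: {participation_rate:.0%}")  # participation: 50%
```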
When identifying what to measure, my team and I identify what question or problem we’re trying to answer with the data, then we develop a problem statement and figure out which area of the data is important for us.
Setting Goals and Tracking Progress
Managers should define specific, quantitative goals and expectations for their teams and then use data to regularly assess progress. These goals can be defined in a scorecard as part of our company operating cadence. If data-driven goals aren’t in place, we run the risk of managers dictating tasks rather than supporting their team’s efforts. Ingraining quantitative goals in the minds of team members and giving them the tools and information to assess their own progress supports HashiCorp’s decentralized decision-making model and increases employees’ autonomy.
Once a team’s broader goals are established, managers should identify which metrics need to trend upward to meet expectations. Generally, leading indicators, such as customers’ onboarding progress, are more actionable than lagging indicators, such as renewal rate. For example, if a team’s top-line goal is to increase renewal rates, then managers should identify the preliminary steps a customer takes before renewing, such as installing, onboarding, and adopting the software. In this instance, onboarding would be a leading indicator, because we know that if customers aren’t completing onboarding, they are unlikely to renew. And if, for example, customers are installing the software but not using it, that information also tells team members where to improve the product.
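A sketch of how a team might read that funnel, using invented stage counts that follow the install/onboard/adopt/renew example above:

```python
# Hypothetical customer-lifecycle funnel; stage names follow the example
# in the text, but the counts are made up.
funnel = [
    ("installed", 200),
    ("onboarded", 120),
    ("adopted", 90),
    ("renewed", 80),
]

# Step-to-step conversion points at the leading indicator to fix:
# a weak onboarding rate predicts renewal problems long before
# renewal numbers (the lagging indicator) come in.
for (prev_stage, prev_n), (stage, n) in zip(funnel, funnel[1:]):
    print(f"{prev_stage} -> {stage}: {n / prev_n:.0%}")
```

Here the installed-to-onboarded step (60%) is the weakest conversion, so onboarding is where effort would move the renewal number most.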
One of the progress indicators for our deliverables is variance from the plan of record. Monitoring and controlling these variances helps us manage risks associated with meeting our milestones and helps us identify necessary course corrections.
The Terraform team has a regular focus on activation and retention numbers. We use transparency so that everyone knows where to prioritize their effort.
Tracking progress (or lack thereof) toward a measurable goal is a clear way to assess team success. Managers and team members should use the data related to their leading indicators to evaluate progress on a weekly, monthly, or quarterly cadence. Use these reviews to keep the team on track and as an opportunity to reflect on potential improvements.
Data helps us ensure our processes are working. We often build operational reports and alerts that help us stay informed on potential issues or gaps we need to address.
Deriving insights from data informs our iterative processes. When deriving insights, we dig further into a metric or set of metrics to find the root cause of a particular trend—this leads us to take action and change behavior. We have some recommended practices when deriving insights:
Optimize for the answer, not the visual. A fancy graph doesn’t always give a clear answer. Tables do the trick most of the time. Start simple, then move toward information density with more complex visualizations.
Trends over absolutes. Goals aren’t achieved in a day; success takes consistent effort and compounding results. In general, we aim to accelerate positive trends and reverse negative ones: the focus is on the direction of travel, not on hitting an absolute number. Properly assessing progress requires looking at the bigger picture and identifying whether dips and spikes are part of a larger trend or merely aberrations. A data point might look positive for one day but sit within a negative trend, and you need that context to determine what is actionable and what is not.
Over Christmas, our MAUs dip. So, if we looked at our data at that point, we might panic. But MAUs tend to normalize after the holidays. Those are the sorts of ups and downs we want to make sure we don’t make rush decisions over.
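One common way to separate aberrations like that from real trends is a simple moving average. The sketch below uses made-up daily counts with a short holiday-style dip:

```python
# Illustrative daily active-user counts with a brief holiday dip
# (all numbers are invented).
daily = [100, 102, 101, 60, 58, 99, 103, 105]

# A trailing 3-day moving average smooths single-day readings so the
# underlying trend, not the dip itself, drives the decision.
window = 3
smoothed = [
    sum(daily[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(daily))
]
print([round(x, 1) for x in smoothed])
```

The raw series swings from roughly 100 down to 58 and back, while the smoothed series dips far less sharply and recovers, which is exactly the trend context needed to avoid a rushed decision.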
Segmentation, always. Looking only at averages can hide the true insight. For example, we segment renewal rate by product, because the overall renewal rate can mask the behavior of individual products: if product A’s renewal rate is 95% and product B’s is 85%, a blended rate of around 90% hides the fact that product B is underperforming. When deriving insights, always use segmentation to check whether the data set contains multiple groups with different patterns; an aggregate trend can even reverse once the data is segmented, a phenomenon known as Simpson’s Paradox.
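A quick numeric sketch of why we segment, using hypothetical renewal counts in which product B has far more customers than product A:

```python
# Hypothetical renewal counts per product (invented numbers).
products = {
    "product_a": {"renewed": 95, "total": 100},   # 95% renewal rate
    "product_b": {"renewed": 425, "total": 500},  # 85% renewal rate
}

# The blended rate is pulled toward the larger segment and hides the
# product-level difference entirely.
blended = sum(p["renewed"] for p in products.values()) / sum(
    p["total"] for p in products.values()
)
print(f"blended: {blended:.0%}")  # blended: 87%
for name, p in products.items():
    print(f"{name}: {p['renewed'] / p['total']:.0%}")
```

The blended 87% looks healthy on its own; only segmentation reveals that product B sits ten points below product A.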
We use market and account segmentation to understand trends and growth opportunities.
Segmentation is critical for the Go To Market org. It helps us understand how each of our theaters, products, segments, and channels are performing. Segmentation also allows our marketing teams to better target our messaging to the right audiences based on attributes such as company size, technographics, and job titles.
Review cadence is key. A metric that is defined but never reviewed is useless, and the more regularly data is reviewed, the more accurate and actionable it becomes. The right cadence, whether daily, weekly, monthly, or quarterly, depends on how rapidly the underlying data changes and how quickly the insights can be acted on.
Data is both a powerful tool and a distracting force. Used well, data keeps teams aligned and encourages consistent improvement. Used poorly, it can lead to incorrect conclusions and misdirection. To use data as a rallying force, start with gathering context and asking the right questions. Then define goals and derive insights to achieve those goals.