Agile Impact Measurement (There really can be such a thing!)

In her debut piece in this series about the challenges of Lean Impact, Ann Mei Chang identifies impact measurement as one of the top 10 challenges for social innovation. “Quantifying impact,” she says, “is much harder than counting e-commerce transactions.”

And she’s right.

No doubt there are added complications to impact measurement, but at the same time, does it have to feel like pulling teeth?

Perhaps the main problem with measuring impact is that we’ve allowed ourselves to believe that understanding social performance must be a complex, academic exercise separate from business operations. It needn’t be.

It’s true, the kind of data we need for assessing impact is distinct from commonplace customer feedback: things like product-market fit, customer loyalty, and sales funnels. Like any other business, successful social enterprises must capture information about these customer “wants.” Additionally, they should laser in on what customers “need” from a social perspective, e.g. what positive externalities might have been created, and whether more marginalized groups can access their product or service.

So, what’s stopping a fledgling social enterprise (or an established one for that matter) from getting and using the kind of data we need to understand impact?

At Lean Data℠ we’ve been working on how the tools of impact measurement can be pivoted for the social enterprise and impact investment sectors. If you’re starting with a new idea, project, or business to create social change, and want to start measuring from the outset, here are some practical considerations.

Start with listening, not metrics.

There’s a tendency in impact measurement to jump straight to defining a bunch of metrics for performance. One reason this is tempting is that we are taught to do this for financial and operational performance. But impact is different. At the beginning of your measurement journey, embrace qualitative approaches, asking open-ended questions about people’s lived experience.

Example in practice: A question we love to ask of customers at the beginning of any engagement is simply “Has your quality of life changed because of [product/service]?” with five answer options ranging from “got much worse” to “very much improved.” We follow this with an open-ended question asking how and why the customer has come to that view. So simple, yet so powerful. From this question you can start to get a sense of whether the impact you think you’re making is what customers themselves are experiencing. Keystone Accountability has compiled a list of good alternative feedback-style questions about impact.
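To make this concrete, here is a minimal sketch of how you might tally answers to that five-point question and pull out a simple headline share. The scale labels mirror the options described above; the response data and variable names are entirely made up for illustration.

```python
from collections import Counter

# Five-point scale from the quality-of-life question above.
SCALE = [
    "got much worse",
    "got worse",
    "no change",
    "improved",
    "very much improved",
]

# Hypothetical responses collected from six customers.
responses = [
    "improved", "no change", "very much improved",
    "improved", "got worse", "improved",
]

counts = Counter(responses)

# Share of customers reporting any improvement vs. any worsening.
improved = counts["improved"] + counts["very much improved"]
worsened = counts["got worse"] + counts["got much worse"]
print(f"improved: {improved / len(responses):.0%}")  # improved: 67%
print(f"worsened: {worsened / len(responses):.0%}")  # worsened: 17%
```

Pair each coded answer with the open-ended “how and why” follow-up so the numbers never travel without the stories behind them.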

A Theory of Change is your frenemy.

Theories of change are all the rage; it seems everyone has a nice flow chart which describes the end impact they will make based on the inputs they use. Used well, they’re great. Used poorly (as they frequently are), they give us false comfort that the theory holds, without the need for actual testing.

Example in practice: In your early days, rather than a full-blown theory, start with a range of simple, testable hypotheses, e.g. “if I use sales channel X, my penetration into marginalized group Z will be higher than with sales channel Y.” And when you’re ready for a theory, rather than spend ages coming up with your own template, why not borrow DIY Toolkit’s ready-made one? There’s a bunch of other useful stuff there too.

Keep your impact measurement light and inexpensive.

Perhaps most important of all is to design short and snappy surveys. There is an enduring temptation in research to add just one more question, and impact measurement is no exception. Resist it.

Example in practice: The Poverty Probability Index is a great example of a short, snappy questionnaire designed to be light on the respondent but able to unpack one of the most complex issues in development: income poverty. We use it all the time.

Worry about bias, but give yourself a break on statistical significance.

When you’re first collecting data, bias should be your biggest concern. Common sources of bias are leading questions and inadvertently surveying an unrepresentative sample. But you’re not writing a PhD, so you needn’t worry too much about achieving statistical significance at the 95% confidence level. Some data is typically better than no data.

Example in practice: In general, so long as there is no systematic bias in the group you’re surveying that makes them different from your average customer (e.g. they’re all your newest customers), you’ll probably only need a sample of 200-300 surveys to get some robust data. Sadly, we couldn’t possibly cover all the dos and don’ts of avoiding leading questions in this blog, and often it just takes experience. But this is a useful little blog with some excellent advice to start you off.
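For intuition on why 200-300 responses is often enough, here is a back-of-the-envelope margin-of-error calculation using the standard survey formula, with the conservative assumption that the true proportion is 0.5 (which maximizes the margin):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion of size n,
    at the most conservative assumption p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 300):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
# n=100: ±9.8%
# n=200: ±6.9%
# n=300: ±5.7%
```

In other words, a few hundred unbiased responses already pin most proportions down to within roughly six or seven percentage points, and the returns to larger samples shrink quickly; the real risk is a biased sample, which no amount of extra surveys will fix.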

Don’t tie yourself up in complex methodology, but do limit your claims of impact to what is sensible.

Impact measurement can often seem scary. People throw big words around like causality, attribution, and additionality. You don’t need an intimate understanding of these terms before you start to learn if you’re doing any good. Take causality, for example. Technically, causality can be understood in four ways, from low to high certainty (with respect to whether what you did resulted in the change you observe): assumption, logic, statistical methods, and controlled experiments. If you’re cautious about what you claim, you can start to measure impact absent a formal control group.

Example in practice: Suppose you’re selling a solar lantern and are keen to know about the impact you create. Absent a major change in the price of fuel, a switch away from harmful kerosene following purchase of your product can logically be taken as an indication of causal impact, without the need for a more involved methodology. But if you’re hoping to say something about educational performance, because you think your customers might increase their hours of study at night, you’re going to need much more involved research. If you fancy geeking out on all this (trust us, it’s worth it), please see our series on the impact of energy.

These five tips are intended to give you the confidence that you don’t need a PhD to start asking the people whose lives you might be lucky enough to impact what they think about your work. Anyone with the curiosity to ask can undertake some kind of agile impact measurement. In our work, we’ve seen time and time again that the organizations that ask open questions about impact, and are brave enough to hear both positive feedback and critiques, are those most successful at creating more of it. Which is, ultimately, what we’re all here for.

Thanks to Tom Adams for contributing this piece. He is Acumen’s Chief Impact Officer, and co-lead of its Lean Data Approach. You can take a short, free course on Lean Data at +Acumen, the World’s School for Social Change.