Whose perspective counts in the game of life?


In the ongoing battle for resources in our communities, across the nation, and worldwide, a broader question begs to be answered – in each of these decisions, whose perspective counts the most?

I’ve seen it play out time and time again across social media platforms, splashed across various news outlets, and at community engagement sessions. More often than not, it comes down to two variables: who you are and how loudly you yell. And even then, how much weight a perspective carries in a decision can skew on those same two variables. A stated future vision can come into play as well, but not always.

The question worth asking is this: should decisions be made based on who yells the loudest or organizes the best?

IMHO (in my humble opinion, for those of you who have yet to cross even the GenX internet-lingo chasm), this is a tough problem to solve. We want to hear all perspectives, but no one possesses the funding to go knocking on every door for every question at hand. So a deluge of surveys goes out via phone, text, and email, community engagement sessions get put on the calendar, and appointed surrogates go out into the community to try to gauge where people stand.

Most reasonable analysts will tell you that once you have a community dataset of responses (from a reasonably diverse demographic and firmographic pool), you’re looking at a magic number of 30, 50, or 400, depending on which analyst you ask. And depending on what’s being tested, a sample as small as five can be enough to determine whether or not a proposal is effective.

With more time and fiscal constraints than anyone cares to discuss, it’s increasingly common these days for surrogates to represent others who cannot make it out to city council meetings or community engagement events. More often, we are seeing letters with consolidated signatures from community members co-signing a pledge or statement. Our reality is that we are simply very busy humans. We have jobs, we have kids, we have a 35-item task list of very important things that need to be taken care of on any given day. These days, too many people do not have the mental space to think about policies being considered at the broader community, state, or national level because they are simply focused on trying to survive their jobs so they can afford eggs.

So in this context, how do you effectively gather real data that represents the real people who matter for the question at hand?

Better yet, who should be included in the dataset? Is it the people directly impacted by today’s decision, the people who will inherit that decision a decade from now, or the “experts” who get brought in?

Is it the world we live in today, or the world we must work to create, that matters most in prioritization?

Any person making decisions on public policy will tell you that these are not mutually exclusive choices. In every decision, both matter. And therein lies the challenge: how do you make everyone happy when the positions are polar opposites? You simply cannot.

What policymakers can do, however, is create a process where a material percentage of constituents are represented in the numbers. Perhaps not every door gets knocked on, but enough doors, from a demographic and firmographic standpoint, that the pool is statistically representative. Here we go again with statistics. Can 400 people correctly represent 10,000 or more people in a survey? The answer, historically and usually, depending on the method of data collection and the purpose, is yes. Four hundred survey respondents produce roughly a 5% margin of error at a 95% confidence level for a population of 10,000 or more. If you wanted to bring that down to a 3% margin of error, you’re looking at just over 1,000 respondents. For most use cases, however, the cost savings of fielding 400 instead of 1,000 outweigh that two-percentage-point improvement.
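If you want to check those numbers yourself, here’s a minimal sketch in Python using the standard Cochran sample-size formula, assuming a 95% confidence level (z ≈ 1.96), the worst-case response split (p = 0.5), and simple random sampling; the function name and structure are mine, not any policymaker’s toolkit:

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5, population=None):
    """Cochran's sample-size formula, with an optional finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        # The correction shrinks the required sample when the population is finite.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(required_sample_size(0.05))                     # 385: the "about 400" rule of thumb
print(required_sample_size(0.03))                     # 1068: just over 1,000 for a 3% margin
print(required_sample_size(0.05, population=10_000))  # 370: barely changes at N = 10,000
```

Notice that the required sample barely grows with population size, which is why roughly 400 respondents can stand in for 10,000 people about as well as for 10 million.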

We all know that we live in the real world, where cost constraints need to be balanced against ideals. And because of that, a 95% confidence level with a 5% margin of error seems pretty reasonable for most common use cases in broad community outreach. More often than not, 400 people or fewer will be driving the decision for everyone.

All this to say: those emails you get asking for your feedback? Answer them. The community events? Find out who is going and chat with them. Your opinion matters, 100% of the time, with zero margin of error.
