Determining How To Measure 

This section of the Toolkit focuses on how to measure and track the indicators selected in Step 2:

  • Determine the constructs to measure and potential sources of data;
  • Identify the methods and measurement tools that will be used to capture the data;
  • Determine how to organize, manage, and store the data in a designated performance measurement database; and
  • Identify the formats and processes that will be used to track data regularly once the performance measurement system is in place.

Data Sources

Data for indicators are typically drawn from:

  • Existing data sets (designed for program administration or other purposes), including: 
    • program participant intake, service utilization, exit and case note data;
    • organizational budget and planning data;
    • grant and contract management, expenditure, and other financial data.
  • New data collection (designed specifically to collect information for the selected indicators), including:
    • organization or program documents;
    • on-site observation;
    • individual interviews with stakeholders, service provider staff, and program participants;
    • focus groups with stakeholders, service provider staff, and program participants;
    • surveys of stakeholders, service provider staff, and program participants;
    • event (e.g., training) attendance logs and evaluations;
    • Web site traffic and usage analytics to track public awareness efforts, and Web messaging tools to track communications efforts.

If data are already being collected for the selected indicators, provisions should be made to extract, compile, edit, and transfer those data into the electronic database set up specifically for the organization’s performance measurement system (see next section). This step is also an opportunity to audit and streamline measurement practices for existing data, and to ensure that existing data meet the purpose and needs of the performance measurement (PM) system.
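
Where the existing system can export records (for example, to CSV), a short script can handle the extract-and-transfer step. The sketch below is illustrative only: the file name, column names, and table layout are assumptions, not part of the Toolkit.

    # Minimal extract/edit/transfer sketch: CSV export -> SQLite PM database.
    # All names here (pm_system.db, intake_export.csv, column headers) are
    # hypothetical examples.
    import csv
    import sqlite3

    conn = sqlite3.connect("pm_system.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS intake (
               participant_id TEXT PRIMARY KEY,
               intake_date    TEXT,
               program        TEXT,
               zip_code       TEXT
           )"""
    )

    with open("intake_export.csv", newline="") as f:  # export from the existing system
        for row in csv.DictReader(f):
            pid = row["participant_id"].strip()
            if not pid:  # light editing during transfer: skip blank IDs
                continue
            conn.execute(
                "INSERT OR REPLACE INTO intake VALUES (?, ?, ?, ?)",
                (pid, row["intake_date"], row["program"], row["zip_code"].strip()),
            )

    conn.commit()
    conn.close()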

Selecting and Preparing Measurement Tools 

Existing Data: If the data for an indicator are being drawn from an existing data system or from original input documents, develop a form and protocol for extracting and recording the data.

New Data: Surveys

Surveys can be used to collect data for a range of indicators. For example, they can be used to measure beneficiary or staff satisfaction, assess progress toward a particular short-term goal, or ask about targeted outcomes or results. In addition to traditional written/paper survey forms, a number of online survey tools make it simple to develop and distribute the survey instrument, and to collect and analyze the responses.

Guidelines for creating an effective survey:

  • Include members of the target audience in the process of survey development. The more active a role these stakeholders play in the process, the more informative the data that are generated.
  • Identify the information (constructs) that will underlie the items or questions included in the survey. What indicators will be tracked with this survey data? What questions about the organization, its programs, or stakeholders is the survey intended to answer?
  • Specify the target audience of the survey. Is the survey seeking information from agency staff, decision-makers, program participants, and/or other external stakeholders? 
  • Develop questions that clearly address the audience and elicit the information needed. Refer back to the indicators the survey is meant to track while drafting the questions. Keep question structure as simple as possible, and avoid double-barreled questions that combine alternatives or more than one construct. Consider including a few open-ended questions that can add depth and context to the quantitative results. Keep the total number of questions and the overall length of the survey as brief as possible; do not overwhelm respondents.
  • Determine the best way to reach the target audience. Many factors will go into determining whether to collect survey data online, by mail, by phone, or in person, including: volume/number of respondents, ease of access to respondents, reliability, cost, and staff resources/level of effort available.

Determine the frequency of survey administration. This will depend on factors such as: the purpose of the survey, the structure of the program, the way clients participate in services, the ease, accessibility, and reliability of contacting survey respondents, and other internal and external reporting requirements. For example, service organizations may choose to survey program participants at the end of their program cycles or at fixed intervals. Avoid overloading the target audience with frequent survey requests.

Test the survey. Pilot testing the survey first on a small sample group will help identify ambiguities or confusion about the meaning of items or response scales used, question structure, length, etc.

If an indicator proves difficult to measure, look for a proxy that can be tested and refined over time. For example, an agency wanted to know the number of low-income older adults enrolled in its classes. It did not want to require its participants to disclose household income information, so the agency found an alternative: gathering household zip codes from each person who registered for a class, as a proxy for identifying those who lived in neighborhoods that qualify as low-income. Using zip codes allowed the agency to estimate participant income from readily available census data. Since the agency sought to increase the number of low-income residents served by its programs, one of its indicators became “the number of program participants living in poverty-area zip codes.”
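
To make the proxy concrete, here is a minimal sketch of how such an indicator could be computed, assuming the agency keeps a census-derived list of poverty-area zip codes. All values shown are hypothetical.

    # Hypothetical inputs: a census-derived list of poverty-area zip codes and
    # the zip codes gathered at class registration.
    poverty_area_zips = {"02119", "02121", "02124"}        # illustrative values
    registrations = ["02119", "02445", "02124", "02119"]   # one entry per registrant

    # The indicator: number of participants living in poverty-area zip codes.
    count = sum(1 for z in registrations if z in poverty_area_zips)
    share = count / len(registrations)
    print(f"{count} of {len(registrations)} participants ({share:.0%}) "
          "live in poverty-area zip codes")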

New Data: Individual Interviews and Focus Groups

Keeping in mind the guidelines above for constructing survey questions, develop 3-6 open-ended questions for individual interviews or for small groups of 4-6 participants. Keep sessions brief (approximately 30 minutes for interviews and 60 minutes or less for focus groups). If appropriate, ask respondents to provide examples that illustrate their responses.

Determine Where to Store Performance Measurement Data

It is important to compile, organize and store indicator data in a centralized place that is easy to access and update.  Below are three main data-storage options, listed from least to most robust:

1.   Spreadsheet software

2.   Standardized or packaged database, customer relationship management, accounting, or other software

3.   Custom-built database or other software.

Many organizations use a commercially available electronic spreadsheet to track all of their indicators. Databases offer a more robust way of collecting and organizing data. Standardized databases, such as Microsoft Access, are powerful data-storage tools. Organizations with unique data needs might adopt specialized packaged software, such as ServicePoint for housing programs, or build a custom database.
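
As one illustration of the database option, the sketch below creates a simple indicator-tracking structure in SQLite (a free, embedded database). The two-table layout (one row per indicator definition, one row per periodic reading) is an assumption, not a prescribed design.

    # Minimal indicator-tracking schema sketch in SQLite; table and column
    # names are hypothetical.
    import sqlite3

    conn = sqlite3.connect("pm_system.db")
    conn.executescript(
        """
        CREATE TABLE IF NOT EXISTS indicators (
            indicator_id INTEGER PRIMARY KEY,
            name         TEXT NOT NULL,   -- e.g., "participants in poverty-area zips"
            area         TEXT             -- output, outcome, efficiency, ...
        );
        CREATE TABLE IF NOT EXISTS readings (
            indicator_id INTEGER REFERENCES indicators(indicator_id),
            period       TEXT,            -- e.g., "2009-Q3"
            value        REAL
        );
        """
    )
    conn.commit()
    conn.close()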

Assign Responsibilities 

  • Identify the individual/unit that will be responsible for tracking the organization’s indicators. If your organization has multiple programs, responsibility for indicators might be divided among program managers. However, one individual in each division should have final, overall responsibility for compiling and reporting indicator data.
  • Establish guidelines for when data should be collected and reported, depending on the frequency and timing set for each indicator. Some operational and program data will come in on an ongoing basis; surveys might be conducted annually or several times per year. Some measures require much more time to collect; data on education outcomes, for example, may take a full academic year.
  • Make a list of all measurement tools, storage locations, and managers responsible for measuring each indicator.
  • Document the approach to collecting and storing the data for each indicator before proceeding to the next step. 
  • Link each performance measure to an area of performance (e.g., output, outcome, or efficiency).
  • Include any guidelines for calculating the performance measure so that it will be calculated consistently over time (see the sketch after this list).
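
One way to keep a calculation consistent over time is to write the guideline down next to the formula itself. The sketch below uses a hypothetical program-completion-rate measure; the counting rules in the docstring are examples of the kind of guideline to record.

    def completion_rate(completed: int, enrolled: int) -> float:
        """Program completion rate (illustrative measure).

        Example guidelines to document: count a participant as "completed"
        only if they attended the final session; count "enrolled" as of the
        program start date. Writing the rules here keeps the measure
        comparable across reporting periods.
        """
        if enrolled == 0:
            return 0.0
        return completed / enrolled

    # Example: 42 completers out of 60 enrolled -> 70%
    print(f"{completion_rate(42, 60):.0%}")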

Document these steps in the Master Indicator Template, a simple tool for documenting measures, data sources, reporting frequency, and assigned staff.
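
As an illustration (the exact columns of the template may vary), one completed entry might look like this, reusing the zip-code proxy example above:

    Indicator:            Number of program participants living in poverty-area zip codes
    Performance area:     Outcome
    Data source:          Class registration records (zip code field)
    Measurement tool:     Registration form; census-derived zip code list
    Storage location:     PM database
    Reporting frequency:  Quarterly
    Responsible staff:    Program manager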

Root Cause (2009). Building a Performance Measurement System: Using Data to Accelerate Social Impact.