TIG Project Evaluation Plan: Identifying Evaluation Methods and Data Sets

Evaluation data are indispensable elements of effective program management and assessment; without viable data, one cannot effectively identify and demonstrate a project's accomplishments.

The TIG Evaluation Plan should identify multiple evaluation methods and data sets for each objective (and, in some cases, for different strategies/activities).

  • Each data set will be used to demonstrate the project’s progress toward a particular objective (or in accomplishing significant activities/strategies).
  • Some evaluation methods or data sets can be used to demonstrate progress towards more than one objective. In those cases, enter the name(s) of the pertinent evaluation methods or data sets in the row(s) of the objectives to which they apply.
  • Three broad types of data can be used to demonstrate the project’s achievements: administrative, survey, and qualitative.
  • The most effective evaluations will use a combination of these data types. 

Descriptions of these data types, with examples and guidance for their use, follow.  (Note that more detailed information about using these data is available on the TIG website, the LSC Tech website, and the links on each site.)

A.  Administrative data.
These data typically can be obtained and compiled easily and inexpensively because they are readily available from a variety of sources. Examples include:

  • CMS (case management system) data re: the total number of clients served, clients served in particular regions, and individuals served from different population groups, such as the elderly, LEP (limited English proficiency) individuals, or those with limited literacy.
  • Descriptive data re: types of outreach conducted, types of training provided, and number and type of trainings archived.
  • Quantitative data re: number of trainings conducted, number of persons trained, and training evaluations.
  • System beta test results.
  • Timekeeping or budget data re: cost savings, staff productivity, etc.
  • Partners’ data, such as the number of documents produced using HotDocs automated forms, or court data re: the total number of pro se filings or filings in particular legal areas.
  • Data re: number and type of document page views and downloads, and documents accessed by different client groups (e.g., LEP populations, rural populations).

B.  Survey data.
Surveys can be tailored to different groups, such as clients, advocates, other program staff, and staff of partner organizations. Respondents can complete surveys quickly and easily, and survey data are easy to compile and analyze.  (As noted in the detailed evaluation plan instructions, project managers should develop "practical" samples that are comparable to the population affected by the project, rather than statistically valid samples.) Examples include:

  • Surveys of automated form users re: usability and usefulness of forms.
  • Surveys of staff attorneys and/or pro bono attorneys re: value of training and resource materials.
  • Staff surveys re: the impact of enhancements to case management systems, timekeeping systems, or accounting software on their efficiency and effectiveness.
  • Surveys of court staff re: effectiveness of pro se forms or pro se clinics in improving the quality of pro se court filings.
  • Surveys of website users re: usability and usefulness of materials, identification of needed new materials, and suggestions for improving site navigability.

C. In-depth interviews, focus groups, and other qualitative data.
In-depth interviews, focus groups, or direct observation of groups using or affected by the developed system can provide a fuller understanding of a system’s strengths and weaknesses than survey or administrative data.  When using these methods, evaluators should consider the following factors:

  • In-depth interviews may cover the same topics as surveys, but they allow the interviewer to ask follow-up questions or otherwise obtain more in-depth information than survey responses provide.
  • Direct observation of users (e.g., clients, staff, court personnel) can provide insights into the challenges users confront and ways particular systems can be improved.  This method can be especially useful when combined with interviews. 
  • Focus groups can provide very useful information, but they pose challenges: participants may not be representative of the user population, and it can be difficult to elicit information from all participants in the group.