Wednesday, December 3, 2008

Handout on Strengths-Based Evaluation from November 18 meeting

Thanks to B.J. Tatro for providing this handout on Strengths-Based Evaluation from the November 18 meeting.

SLHI Health in a New Key Consultant Community
& Arizona Evaluation Network
“Strengths-based Approaches to Program Evaluation”
Gwen Relf, Jane Pearson, Chrystal Snyder, and B. J. Tatro
November 18, 2008


Designing the Evaluation
1. Were stakeholders, including those affected by and invested in the program, involved in visualizing what success would look like (defining the outcomes or desired results)?
2. Are the measures of success important to the stakeholders?
3. Are the measures of success stated in positive terms (what we want more of, not only what we want less of)?
4. Were stakeholders involved in designing the data collection tools and methods?
5. Are the data collection tools and methods focused on discovering what is right and why, not only what is wrong?
6. Are the data collection tools and methods culturally appropriate?
7. Are the data collection tools and methods respectful of participants and strengths-based?
8. Are the data collection tools and methods focused on learning?
9. Is the evaluation design consistent with the values underlying the program?

Conducting and Reporting
1. Who is doing the evaluation? Are they knowledgeable about and sensitive to the culture and context of the program?
2. Are stakeholders involved in the implementation of the evaluation? What are they involved in (e.g., data gathering, serving as a source of data, making sense of the data, preparing the recommendations)? How are they supported to have a successful experience?
3. Are reports provided that answer questions stakeholders care about?
4. Are reports user-friendly (e.g., language, format)?
5. Does the report focus on what is right, as well as what needs improvement?
6. Is the language in the report respectful?
7. Does the report promote learning?
8. Do recommendations build on participants’ strengths?

1. Does reporting promote utilization? Consider frequency, length, content, and format of reports.
2. Were stakeholders engaged in a review of the findings and recommendations and the development of an implementation plan?
3. Are recommendations practical?
4. Are recommendations consistent with the culture and context of the program?
5. Are successes celebrated and used to energize future improvement?
6. Is there opportunity to reflect on what has been learned and its meaning?
7. Are evaluation results tied to future planning? Used to guide decision making and program improvement? To influence agency and public policy? To increase program visibility?
8. Is accountability internalized? (We are doing this because we want to be the best we can be.)

Prepared by B.J. Tatro
