Making data collection great again

Monitoring and evaluation of social programmes are vital and can be highly beneficial for learning, improved service delivery and communicating with funders. Unfortunately, the reality of applying ‘traditional’ academically rigorous evaluation methods comes with many challenges.

It is that time of year again: collecting baseline data from our participants. Data for internal purposes (to prove that what we do benefits the Waves for Change children and coaches), and data for external researchers who, through partnerships with universities, are looking at other ways our programme might affect our children.

And it happened again. The challenges we face every single time reared their heads. Whilst we continue to strive to make our internal methods more child- and field-friendly, the external researchers remain bound by the constraints of maintaining academic rigour. This inflexibility gives rise to some or all of the following challenges:

  • Children are given long paper questionnaires (usually psychologically validated scales) which could cause a child who is already resistant to “testing” to become anxious, apathetic or dishonest.
  • This is compounded by the children’s very low levels of literacy and concentration, caused by factors such as poor maternal nutrition, drug abuse, stunting, malnutrition and repeated exposure to traumatic events, all of which lower executive functioning.
  • The children are also vulnerable to response bias as they see the questionnaires as a “test” for participation in the programme and might provide socially acceptable answers instead of the truth.
  • The effects a white European experimenter/researcher has on impoverished black African children also cannot be ignored. These children are rarely exposed to someone so different from themselves (physically, culturally, linguistically), and this undoubtedly affects them in a way that a local researcher would not.
  • The questions/items (even when translated) are not appropriate to the context and culture of the target population. Neither are Likert scales.

Any of the above can severely compromise the quality of the data. A perfect example of this occurred recently when one of our site managers accidentally had a child complete the same questionnaire twice within 24 hours. In effect, we had pre- and post-test data for the same individual 24 hours apart. When we compared the answers, roughly 70% of them were different.

This type of data wastes precious resources and perhaps most importantly, the children’s time. In an environment where we provide after-school projects/programmes aimed to uplift, this borders on being unethical.

What we need are methods that are appropriate to the context and that provide reliable data. Also, methods that can be taught to local researchers in order to build their capacity. Enough of Westernised researchers using vulnerable populations as a hunting ground for research, simply taking what they need in order to get their degrees or publish their papers. We need research that matters.

To that end, Waves for Change is hosting a Monitoring, Evaluation and Learning Network next week, 6-10 March. Laureus, Comic Relief and the World Childhood Fund are funding the project, which is essentially a sharing and learning community of practice. Representatives from various sport for development and peace organisations, as well as funders and researchers, are coming together in Cape Town to share their knowledge of mechanisms of change, outcomes and, perhaps most importantly, innovative and reliable data collection methods.

The Network

#sharedimpact

Watch this space!

Download the Evaluation Use PDF