Tuesday, December 2, 2014

The Need for Mentoring Program Evaluation

In the effort to show that we have a results-producing mentoring program, we sometimes rely on anecdotes and beliefs. Hard data, ideally even longitudinal data, is essential for improving programs, recruiting mentors, and securing funding. Note how Carol Tavris and Elliot Aronson gently shake us back into reality. Thanks to Jean Rhodes, Ph.D., for sharing in The Chronicle of Evidence-based Mentoring.

The problem of the benevolent dolphin: Implications for mentoring

by Jean Rhodes
In their new book, “Mistakes Were Made (But Not by Me),” Carol Tavris and Elliot Aronson discuss how confirmation biases predispose us toward evidence that confirms perceptions we already hold. In the absence of rigorous, experimental studies, we can easily summon up “evidence” that supports our viewpoints while ignoring evidence that contradicts our beliefs. This has relevance for mentoring programs. If we take as a given that “mentoring works,” and don’t evaluate our efforts, we can easily find evidence to confirm this presumption. To illustrate this point, Tavris and Aronson describe the “Problem of the Benevolent Dolphin,” noting how, every once in a while, a news story appears about a shipwrecked sailor or swimmer who, on the verge of drowning, is nudged to safety by a dolphin. As they explain:
“It is tempting to conclude that dolphins must really like human beings, enough to save us from drowning. But wait—are dolphins aware that humans don’t swim as well as they do? Are they actually intending to be helpful? To answer that question, we would need to know how many shipwrecked sailors have been gently nudged further out to sea by dolphins, there to drown and never be heard from again. We don’t know about those cases because the swimmers don’t live to tell us about their evil-dolphin experiences. If we had that information, we might conclude that dolphins are neither benevolent nor evil; they are just being playful.”
The authors then take a hard look at psychotherapists, who, in the absence of rigorous, experimental studies, can easily summon up “evidence” that their clients are improving and that their approaches are working. In much the same way, the field of mentoring can, in the absence of data, fall prey to the flawed reasoning of the benevolent dolphin problem. Not that there are evil-mentor experiences out there, but we need to hold open the possibility that some of our efforts, especially the more perfunctory approaches to relationships, are, at best, neutral. We don’t hear about these efforts because youth in matches that don’t click or that fall apart prematurely fade quietly from programs, and the failed relationships they represent are overshadowed by the more compelling success stories of their peers. Indeed, programs often demonstrate their success to funders not by providing decks of slides with evaluation results but by showcasing successful matches and the heartwarming stories they represent. Since youth whose mentors didn’t deliver on their promises typically blame themselves when relationships fail, the shortcomings of programs and the volunteers they recruit go largely unnoticed.
We believe that a close, caring relationship improves the lives of youth. But for our theories of change to be scientific, they must be stated in ways that they can be shown to be false as well as true. For beliefs to be a matter of science, and not just faith, we need to subject them to the scrutiny of rigorous evaluation. Otherwise, we are no better than the shipwrecked sailors who, having made it to shore, tell us a compelling story that confirms our implicit biases.
Ret. 12-2-14
