M&E in Action: RefugeeMobile

Margaret Gibbon  |  April 10, 2018

This post is part of a series spotlighting M&E practices and learning among refugee service providers in the U.S., beginning with graduates of META’s FY17 certificate course. Today, META hears from Rachel Factor, RefugeeMobile Project Manager at Refugee Services of Texas. If you have M&E practices to share, we’d love to hear from you—email META@Rescue.org!

 

Tell us about your program:

Refugee Services of Texas (RST), in partnership with Sparrow Mobile, launched the RefugeeMobile program in May 2016. Aiming to improve refugees’ social and emotional integration, we provided recently arrived refugees with smartphones; six months of free cell service, including data; and digital literacy training. The phones also came with pre-installed apps to help with tasks like transportation, translation, and banking.

 

What key M&E lessons did you learn?
1. Good monitoring and evaluation (M&E) starts with sound program design

When I came to RST as the RefugeeMobile Project Manager, the team had already completed the initial M&E planning: they had developed a results framework and had discussed and documented the theory of change behind it. Developing these tools at the program design stage gave us a clear, effective vision for monitoring and evaluation throughout the project. At the same time, the results framework was a living document, and I reviewed and revised it to meet evolving needs and realities as the project progressed.

Our program design also included a substantial research component. To understand the impact of RefugeeMobile on participants’ language acquisition, job placement, earnings, and receipt of public benefits, RST partnered with Notre Dame’s Lab for Economic Opportunities (LEO) to conduct a randomized controlled trial evaluation. If we hadn’t prioritized evaluation at the program design stage, it would have been difficult to gain these insights on impact and to learn whether any changes in client outcomes resulted from our program rather than from outside factors.

2. Monitoring for program fidelity is important to make sure that the program is being implemented as expected (and if not, why not)

We can only reliably test an intervention if it is applied in the same manner across all sites; a five-hour training program, for example, is not the same as a one-hour session. Because RefugeeMobile is part of a randomized controlled trial, we needed to pay special attention to monitoring fidelity of implementation to ensure consistency across our four sites. I created tools to guide enrollment and training at each site, so that all trainers followed the same curriculum and guidelines. I also set up monitoring protocols and tools that I used during site visits to see whether trainers were following the training standards. The tool included items such as “trainer asks client questions and tests knowledge,” “client is actively engaged with hands-on learning,” and “trainer utilizes the correct slide deck and training guide.” These were critical components I wanted to keep consistent throughout the project.

In most cases, I was able to check these boxes during site visits, but I kept in mind the limitation that trainers may change their normal behavior while being observed. I also interviewed clients to learn about their training experience. While helpful, this method relies on clients’ memories of a period soon after arrival, when they receive an overwhelming amount of information. So my advice is to take time to carefully develop your monitoring and evaluation tools (and pilot them where possible) to ensure they produce useful data, and to triangulate whenever possible, that is, to collect data from multiple sources!
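
For teams that record site-visit observations digitally, here is a minimal sketch of what such a fidelity checklist could look like in code. The item wording comes from the tool described above, but the FidelityObservation structure, the "Austin" site name, and the scoring rule are illustrative assumptions, not RST's actual monitoring tool:

    from dataclasses import dataclass, field
    from typing import Dict

    # Checklist items quoted from the blog post; any real tool would likely have more.
    CHECKLIST_ITEMS = [
        "trainer asks client questions and tests knowledge",
        "client is actively engaged with hands-on learning",
        "trainer utilizes the correct slide deck and training guide",
    ]

    @dataclass
    class FidelityObservation:
        """One site-visit observation of a training session (hypothetical structure)."""
        site: str
        trainer: str
        results: Dict[str, bool] = field(default_factory=dict)

        def mark(self, item: str, observed: bool) -> None:
            if item not in CHECKLIST_ITEMS:
                raise ValueError(f"Unknown checklist item: {item}")
            self.results[item] = observed

        def fidelity_score(self) -> float:
            """Share of all checklist items observed; unmarked items count as missed."""
            if not self.results:
                return 0.0
            return sum(self.results.values()) / len(CHECKLIST_ITEMS)

    # Example use: record one visit and flag any site with missed items for follow-up.
    obs = FidelityObservation(site="Austin", trainer="Trainer A")
    obs.mark("trainer asks client questions and tests knowledge", True)
    obs.mark("client is actively engaged with hands-on learning", True)
    obs.mark("trainer utilizes the correct slide deck and training guide", False)
    if obs.fidelity_score() < 1.0:
        print(f"{obs.site}: follow up on missed checklist items")

A structure like this makes it easy to triangulate later, since checklist scores can be lined up against client interview notes for the same site and trainer.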

3. When it comes to evaluation, big questions mean big partnerships!

RST’s partnership with LEO has enabled us to conduct a high-quality impact evaluation. Our research design included focus groups, a midpoint survey, and an end-of-project survey. Focus groups proved to be a great way to gather qualitative information about how clients perceived the program and how it did or didn’t help them. The midpoint survey served as a valuable pilot of how clients would respond to SMS-based surveys and provided preliminary information on their employment status and general well-being.

We recently concluded the end-of-project evaluation survey, which will allow us to compare the outcomes of RefugeeMobile participants with those of the control group. Our partners at Notre Dame are now analyzing the survey results to complete the impact evaluation. We expect to learn many valuable lessons, and we won’t stop at producing a report: the most important step will be to ensure that we use what we learn to design evidence-based programs in the future and share our learning with the resettlement community.
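
To make the idea of a treatment-versus-control comparison concrete, here is a small, purely illustrative sketch using made-up binary employment outcomes. None of these numbers are RST's or LEO's data, and a real RCT analysis like the one LEO is conducting would use the full survey data and more rigorous methods (covariate adjustment, pre-registered outcomes, and so on):

    from math import sqrt

    # Hypothetical follow-up outcomes: 1 = employed, 0 = not employed.
    treatment = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # RefugeeMobile participants
    control   = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # control group

    def rate(group):
        return sum(group) / len(group)

    p_t, p_c = rate(treatment), rate(control)
    diff = p_t - p_c

    # Normal-approximation standard error for a difference in proportions.
    se = sqrt(p_t * (1 - p_t) / len(treatment) + p_c * (1 - p_c) / len(control))
    ci = (diff - 1.96 * se, diff + 1.96 * se)

    print(f"Employment rate: treatment {p_t:.0%}, control {p_c:.0%}")
    print(f"Estimated impact: {diff:+.0%} (95% CI {ci[0]:+.0%} to {ci[1]:+.0%})")

The point of randomization is exactly this: because assignment to the two groups was random, a difference like the one estimated above can be attributed to the program rather than to outside factors.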

 

What M&E resources would you recommend?

Photo credits: Refugee Services of Texas                                                      

1 Comment

  1. Michee SAGARA

    I really appreciated this blog and have drawn a lot of lessons from it. There is the question of using ICT in M&E to monitor and evaluate programs, and your case is relevant and very illustrative.

