In August, along with announcing our Tech for M&E Diploma program, we kicked off an M&E Professionals Series, where we will be talking one-on-one with M&E professionals to give you the inside scoop on the industry.

For this second post in the series, we are featuring an interview that one of our alumni, Stephen Giddings, conducted with Janet Kerley, Senior Director of the Monitoring and Evaluation Practice at Development and Training Services, Inc. (dTS), a Virginia-based consulting organization that does considerable work with USAID.

Janet Kerley
Janet Kerley is a master evaluator and an accomplished trainer in evaluation and performance measurement. As Senior Evaluator in the Monitoring and Evaluation Practice at dTS/Palladium, she provides technical leadership for evaluations in the M&E unit, provides technical direction on design and field methods, and supervises the preparation of evaluation reports. As Chief of Evaluation, Research and Measurement for the Peace Corps, she established the agency's impact evaluation system.

Ms. Kerley was the Team Leader for Monitoring and Evaluation in the Office of the Director of Foreign Assistance, US Department of State, leading a 200-member inter-agency team to develop standard indicators for the 2007 Foreign Assistance Reform reporting tool. At USAID she worked in the Bureau for Policy and Program Coordination (CDIE) and served as the Monitoring and Evaluation Officer in the Bureau for Africa and the Bureau for Europe and Eurasia. Prior to joining USAID, Ms. Kerley was a Senior Research Associate at Aguirre International. She has lived and worked in many countries in Latin America and Africa.

S: How has technology changed the way M&E is conducted over the past decade in international development?
J: The change has been remarkable! A decade ago, most data gathering and analysis was paper-based, making it difficult, time-consuming, and costly. Especially in overseas environments, it took considerable time and effort to gather, transcribe (and often translate), and analyze the data. Today, tech tools have made data collection and analysis more efficient, saving time and money.
However, there is still a considerable “digital divide” between the much more tech-savvy young people and the older professionals originally trained using SPSS (or even earlier) technologies.

S: Does paper-based data collection still have a place in M&E today?
J: Yes — in certain circumstances paper-based data collection may be preferred.

In very rural areas where electricity may not be available, where batteries for electronic devices cannot be charged, or where internet connections or mobile phone service are inconsistent or unavailable, paper-based data collection is still the best option.

Not everyone is comfortable with data collection using electronic devices; some respondents may be more open to paper-based questionnaires.

S: What are some of the pitfalls of the popular tech-based data collection tools?
J: With so much tech available, it is easy to get carried away.

Some less experienced or less than fully trained data gatherers may lose sight of the fundamental questions the monitoring or evaluation is trying to answer. If evaluators lack sufficient training in sound principles of research, they may be tempted to substitute technology for sound reasoning and good judgment.

Some data collection tools also tend to gather too much data, some of it irrelevant to the task at hand. USAID, in particular, is burdened by data overload, where data management systems fail to filter out data that is of little use, which complicates monitoring and evaluation practice.

S: What challenges have USAID Missions faced when integrating new technologies into their M&E functions?
J: By and large, USAID Missions have been quite open to technological improvements to M&E functions. That said, there is still a “digital divide”: younger employees (including local staff) who have grown up in the digital age are more comfortable with and more adept at using new technologies to enhance M&E. More senior USAID staff seem generally open to embracing and appreciating the advantages that new technologies can bring to M&E, while leaving the technical analysis and the new data gathering tools to younger, more tech-savvy staff. USAID staff have generally been very receptive to training in using new M&E technologies to their advantage.

S: Have new evidence-based technologies made decision making by senior USAID staff easier and more informed?
J: Most USAID Mission Directors recognize the value that good evidence on performance can bring to the achievement of program results, and the added clarity that good data and visually well-presented documentation can bring to decision making.

Photo Source: UNDP in Kigali, Rwanda (Creative Commons image)

S: What are the advantages of mixed methods evaluations?
J: The most important starting point for an evaluation is doing the research required to understand what questions you want answered. Only then should you begin to look at evaluation methodologies to acquire necessary information.

When done at a proper scale, well-executed quantitative data collection and analytical methods can bring statistical rigor and clarity. For example, the scale of some of the evaluations done for USAID’s food security (Feed the Future) programs has generally provided reliable data. Unfortunately, USAID Missions sometimes do not make sufficient budget available to ensure that sample sizes for quantitative methods are large enough to draw reliable conclusions. This is where qualitative methods can help to fill gaps.
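To make the sample-size concern concrete, here is a minimal sketch (in Python) of the standard formula for sizing a simple random sample when estimating a proportion. The margin-of-error and confidence values below are illustrative assumptions, not figures drawn from any actual USAID evaluation.

```python
import math

def required_sample_size(margin_of_error=0.05, z=1.96, p=0.5):
    """Minimum simple-random-sample size to estimate a proportion.

    Uses n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the most
    conservative (largest-sample) assumption.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# A +/-5% margin at 95% confidence already needs roughly 385 respondents;
# tightening to +/-3% pushes that past 1,000, which is where budgets often fall short.
print(required_sample_size(0.05))  # 385
print(required_sample_size(0.03))  # 1068
```

In practice, design effects from cluster sampling and the need to compare subgroups push the numbers higher still, which is exactly where qualitative methods help fill the gap.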

Storytelling is one of the most useful qualitative data collection methods. Sometimes quantitative data collection methods do not allow beneficiaries to open up and provide adequate, reliable information, but they respond much more positively when they are allowed to tell a story. Enough good stories can provide insights and nuances that purely quantitative methods cannot. Thus mixed-method evaluations can provide more reliable evidence of performance than either quantitative or qualitative methods alone.

S: Do you think there is a bias towards quantitative methods in international development because of a lack of free, easy-to-use qualitative tools?
J: Not at all. Many USAID evaluations make good use of qualitative methodologies. A decade ago, the “windshield wiper” approach was overused (an evaluation team that is not given time for adequate field work simply reports what it observes “through the windshield”), but more recently qualitative methodologies have become more sophisticated and reliable, and they can provide a great deal of useful information for decision makers.

S: What questions should we be asking to select the best technology for M&E?
J: Evaluation planning should begin with framing the research questions: what is it that we need to learn? The preferred technological solution should be the one that can best answer those questions, and it must also take cultural sensibilities into account. It is crucial that technology be viewed as a tool to support, not a substitute for, a solid grounding in the basic principles of research.

Stephen Giddings
Stephen Giddings, a TechChange alum, served for 25 years as a Foreign Service Officer with USAID, retiring in late 2005. For most of his USAID career, he specialized in managing housing and urban development programs, serving in USAID offices in Panama, Kenya, Cote d’Ivoire, Russia, and Rwanda, as well as Washington, D.C. During his last four years with USAID he was Chief of the Policy Division in USAID’s Africa Bureau.

For the past ten years Mr. Giddings has been an independent consultant, providing assistance to the Development Assistance Committee (DAC) of the OECD and consulting with USAID and the International Real Property Foundation (IRPF), among other international development organizations. He serves on the Development Issues Committee of the USAID Alumni Association and is Co-Chair of the Africa Work Group in the Society for International Development’s Washington, D.C. Chapter (SID-Washington). Prior to his USAID career, Mr. Giddings managed low-income housing development programs at the U.S. Department of Housing and Urban Development and was Director of Planning and Development at the Boston Housing Authority. He received a BA in political science from Wesleyan University and an MPA from the Maxwell School of Citizenship and Public Affairs at Syracuse University.

——-

We hope you enjoyed the second installment of our M&E Professionals Series! Don’t forget to follow our blog for the next post in the series!

Interested in engaging in similar conversations with M&E professionals like Stephen and Janet? Join us in our upcoming course TC211: Technology for Data Collection and Survey Design, which starts on October 19. If you want the whole package, you can join the second session of our Tech for M&E Diploma program.

Do you use digital currency? If you use a credit card, PayPal, or a mobile money app to buy or sell things, then the answer is yes! There is a lot of innovation happening in the digital currency field right now, and it is especially important for the global development sector. While it will be easier for more advanced financial institutions to adopt digital currency, the benefits of its adoption extend far beyond that sector.

Here is why digital currency innovation matters to global development:

Puts power back in the hands of the people

Today, many people in developing countries rely on remittances from family members abroad. But wire transfers charge a lot in transaction fees (up to 6-10% on a $200 transfer). Digital currency can facilitate faster and cheaper transfers by removing the middlemen. You can see a difference even in domestic money transfers: in 2012, when the Afghan national police switched to the mobile payment service M-Paisa, employees thought they had received a 30% raise. When they had been paid in cash, roughly 30% was “taken off the top”; mobile money left no room for that corruption and delivered their complete salaries.
For small businesses competing in the global market, digital currencies level the playing field by reducing the currency conversion costs, commission fees, and transfer limitations that come into play with traditional monetary systems.

Makes money safer

In the developing world, monetary transactions are usually in cash, which means money is also stored in cash. Holding your savings or spending money in cash puts you at risk of robbery, threatening your welfare and your family's. Digital currency can take that worry away, since you are not holding the cash physically. And with a digital currency like bitcoin, a unique digital signature protects every exchange, so there is little risk of fraud, chargebacks, or identity theft.
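For readers curious what “a unique digital signature protects every exchange” looks like in code, below is a minimal sign-and-verify sketch using the widely used Python cryptography library. It illustrates the general pattern behind bitcoin-style transactions (bitcoin uses ECDSA over the secp256k1 curve); the transaction message is made up, and this is an illustration, not actual wallet code.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# The sender holds a private key; the matching public key is shared openly.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# A hypothetical transaction message to be protected.
transaction = b"send 0.01 BTC from Alice to Bob"

# Only the private-key holder can produce this signature...
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# ...but anyone can verify it. Tampering with the message breaks verification.
try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid: transaction is authentic and unaltered")
except InvalidSignature:
    print("signature invalid: transaction was forged or modified")
```

The key point is that anyone can check the signature with the public key, but only the holder of the private key could have produced it, so a forged or altered transaction fails verification.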

We have already seen the benefits of using digital currencies

Digital currencies are not necessarily new. M-Pesa, a mobile money service, is a huge success in Kenya, where 83% of the population currently uses the service. Bitcoin entered the field more recently, but some organizations are already exploring its potential for social good. Organizations like BitGive are launching initiatives to leverage bitcoin’s technology to benefit charitable organizations around the world. MIT’s Media Lab launched the Digital Currency Initiative to bring together global experts in areas ranging from cryptography to economics to privacy to distributed systems, to address the key impacts that digital currencies like bitcoin are poised to have on people’s lives in the next decade.

There are many more examples of how digital currencies are impacting global development, so we will be discussing them with some of the organizations leading this effort in our TechTalk tomorrow. Register now and join the conversation here.

At TechChange, we pride ourselves on teaching our participants the crucial skills needed for a career in social good. And how do we find out what those skills are? We go straight to the source! In our new M&E Professionals series, we’ll be talking one-on-one with the pros who are recruiting for these very positions.

2015 was designated the International Year of Evaluation, so it’s no wonder that M&E is increasingly becoming a sought-after skill in many organizations today. We spoke with Michael Klein to learn more about the ideal skills an M&E professional should have and how to get them:

Michael Klein

Michael Klein is the director of International Solutions Group (ISG), a company that works with governments, U.N. agencies, international organizations, NGOs and other companies to improve the implementation of humanitarian aid and development programming projects.

Mike’s work is at the self-described intersection of ‘old-school’ M&E and ‘new-school’ ICT, working with partners to build on established M&E strategies, streamline data flows and analysis systems, facilitate access to key information, knowledge, reporting and data in a fast, reliable and secure manner.

Q: It’s an exciting time to be an M&E professional. Do you see a big need for young professionals who are trained to take on this kind of work?

M: Yes, it is definitely an exciting time to be in my field. Just looking at the types of conferences and forums being held on the subject, it’s really clear that M&E is a rapidly developing focus in our field. Specifically, I see M&E growing in two separate, but overlapping, areas.

Standard M&E careers: If you were to search job opportunities listed on Devex or Idealist, these are the typical M&E positions you would find: ones that take a traditional approach (i.e., M&E personnel provide managers with the analysis and data they request). These opportunities are certainly growing, as organizations will always need highly trained staff to help address their M&E needs.

Beyond the standard label of M&E: Just as in other sectors, skill sets such as analytics, knowledge management, data collection, and information sharing are highly valued in M&E, and the field is increasingly embracing individuals who have them. This is especially true for people who understand analytics and how data can be collected, used, and analyzed.

As more and more players in the field are using new technologies and tools for data collection and analysis, a college graduate or young professional entering the field of M&E has a great opportunity to make his or her mark on the industry by leveraging his or her digital knowledge to provide guidance to some of the most prominent development organizations.

Q: What do you look for when hiring?

M: First off, I think it’s important to have a passion, an area of expertise that you enjoy. As a profession, M&E can take you in myriad directions, and it is important to identify the type of work you most want to engage in. A strength of my team at ISG is that everyone has differing professional interests, ranging from gender equality to how ICTs can catalyze development. Having this range of specialties strengthens what we can offer clients, and when looking at new hires, I look for individuals who are already established or are on their way to becoming an expert in a field or sector.

Aside from passion and subject-matter expertise, people need to appreciate the big picture. When we look to make hires, we want an individual who understands what organizational performance means, ranging from back-office activities such as business development and marketing, all the way through to front-line programming at the field level. If you’re interested in M&E, you have to understand that organizational effectiveness is made possible by a complex interplay of many elements. Having that general appreciation for how organizations function, the types of struggles they face, and how you can improve their performance is key to success in this field.

Mike with his group at ISG

Q: What does career progression look like for someone in M&E?

M: The reason I was drawn to this field is that there is no set path. Before working in M&E, I worked in Mergers & Acquisitions (M&A), which attracted me for similar reasons. When I worked in M&A, my clients represented a wide range of sectors, and I worked with management to help these organizations restructure, fundraise, find new investors, and generally position themselves to be more successful at what they were already doing (sound familiar?). Monitoring and evaluation is no different. People come to the field from a variety of different backgrounds—such as corporate finance like me, IT, agriculture, and plenty of others, including academic programs focused on M&E—and they serve clients representing the same diversity.

Because everyone enters the field with different skills, it is hard to say exactly how one progresses. Someone entering the field after studying M&E at a university will likely have a fairly technical background and may take an M&E support role within a larger organization, assuming more responsibility over the years. Whereas if you’re transitioning into the field with an already established skill set from a different sector, you’re likely to take a different direction and provide either consultancy services or specific project-level guidance related to your expertise.

Q: What role do you think technology plays in M&E?

M: My personal take is that technology in and of itself is not necessarily transformative. Whether M&E is done with pen and paper or with state-of-the-art IT solutions, good M&E is good M&E. However, technology has allowed organizations to quickly come up to speed with implementing more robust approaches to M&E. If an organization is just shaping its M&E approach, using a technology solution can offer ease and expediency. The way most of these tools are constructed is based on industry best practices, so their structured, hierarchical approach is what we think of as good M&E. Clients who use these tools are then trained to capture their data in a systematic way that has the end in mind.
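As a purely illustrative sketch of that structured, hierarchical approach, the snippet below models a results framework in which indicators roll up to outcomes and a goal. The field names and values are hypothetical, not the schema of any particular M&E platform.

```python
# A hypothetical results framework, illustrating the goal -> outcome -> indicator
# hierarchy that many M&E tools are built around. Field names are illustrative.
results_framework = {
    "goal": "Improved household food security",
    "outcomes": [
        {
            "outcome": "Increased smallholder crop yields",
            "indicators": [
                {"name": "Average maize yield (t/ha)", "baseline": 1.2, "target": 1.8},
                {"name": "% of farmers using improved seed", "baseline": 20, "target": 60},
            ],
        },
        {
            "outcome": "Improved access to markets",
            "indicators": [
                {"name": "% of harvest sold within 30 days", "baseline": 35, "target": 55},
            ],
        },
    ],
}

# Capturing data "with the end in mind" means every data point collected in the
# field maps to an indicator, which rolls up to an outcome and the overall goal.
for outcome in results_framework["outcomes"]:
    for ind in outcome["indicators"]:
        print(f'{outcome["outcome"]}: {ind["name"]} (target {ind["target"]})')
```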

A lot of our clients initially say, “I really like that heat map,” or cite another specific visualization they see provided by an M&E tool and ask how they can create it for their program. This then launches into a discussion about how to collect data in a way that can deliver these types of reporting and highlight what is most important. It is much harder to go the other way.

Q: Any other pieces of advice you have for people considering the TechChange diploma in Tech for M&E?

M: Know how your study applies to what you want to accomplish and set some goals for yourself before you get into the nuts and bolts of the diploma program. If you come to these courses with a general idea of what skill gaps you want to address, you’re going to be very well placed to make the best use of the TechChange diploma program.

When I was taking one of TechChange’s M&E courses, I knew that I wanted to leverage the new skills I was developing to enhance my company’s marketing efforts. Thus, when I studied data visualization, I created multiple infographics for ISG, using tools that I would not have come across on my own. This was one of the best parts of the class: discovering cutting-edge tools in the field that I could utilize immediately.

———————————

That’s all for this installment of our M&E Professionals Series! Be sure to check out our Technology for Monitoring and Evaluation diploma program; the deadline to enroll is September 4!

By Norman Shamas and Samita Thapa

In a previous post, we wrote about why global development practitioners need to be data skeptics. One of the many reasons we need to be skeptical about the data we are collecting is the biases incorporated into it. Data bias is especially significant when it comes to gender data: women and groups that don’t identify with binary genders are largely missing or misrepresented in global development data.

Data is a crucial component of any program or intervention. It justifies the need for a specific program, shows its effectiveness, and allows us to improve it through evaluation. But this important information tells us little if more than half of the population is missing or misrepresented, so we need to start looking at data with a gender lens.
Data in the program cycle

Data on women raises awareness of women-related issues

With 62 million girls not attending school worldwide, the U.S. government was able to justify its “Let Girls Learn” initiative. The initiative was announced in February and is aimed at making education a reality for girls around the world. USAID is one of the agencies involved in the government-wide initiative and has presented its approach with data to support it.

But there is still a problem getting good data on women. GSMA’s 2015 Bridging the Gender Gap Report highlights two systemic barriers to mobile ownership and usage for women:

  1. lack of disaggregated data (illustrated in the sketch below), and
  2. lack of focus on women as a market.
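To illustrate the first barrier, here is a minimal sketch of what sex-disaggregated analysis looks like with pandas, a common Python data-analysis library; the survey records and column names below are invented for illustration.

```python
import pandas as pd

# A made-up survey extract. Without the 'gender' column, mobile ownership
# could only ever be reported as a single aggregate figure.
survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "gender": ["female", "male", "female", "male", "female", "male"],
    "owns_mobile": [0, 1, 1, 1, 0, 1],
})

# The aggregate rate hides the gap...
print(survey["owns_mobile"].mean())  # ~0.67 overall

# ...while sex-disaggregated rates reveal it.
print(survey.groupby("gender")["owns_mobile"].mean())
# female ~0.33, male 1.00 in this invented example
```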

However, we need better gender data for more than just the economy. Oxfam conducted a household survey on the gendered effects of the December 26, 2004 tsunami that hit several Asian countries and found that women were more severely affected than men. Despite the need for better gender data in the field, it is not always collected. Lack of data on women leads to a lack of awareness of issues related to women and, consequently, a lack of programs designed to tackle those issues.

Survey design can promote non-binary gender inclusion

The problem of gender and data bias gets even more complex when we talk about non-binary genders. Twitter, for example, determines its users’ gender based on data it analyzes from tweets. There are only two gender options, male and female, and users cannot opt out of automatic gender assignment or choose their gender manually. By the simple fact that Twitter uses a male/female gender binary, individuals who do not identify with a binary gender (e.g., transgender individuals) or who have mixed anatomical sex characteristics (i.e., intersex individuals) are ignored in the data.

It is important to ask questions about gender on a survey to improve interventions. Instead of restricting gender to a binary, a third option to opt out or self-identify as “other” can be offered. When appropriate, additional questions can be used to determine whether practice and self-identification fit into pre-defined categories.
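As a hedged sketch of what this could look like in a digital survey tool, the snippet below defines a gender question that offers a self-described option and an explicit opt-out, plus a simple check against the choice list. The choice labels and function name are illustrative, not a recommended standard.

```python
# Illustrative gender question for a survey form: a binary is not forced,
# respondents can self-describe, and an explicit opt-out is offered.
GENDER_CHOICES = [
    "woman",
    "man",
    "self-described",       # free-text follow-up collected separately
    "prefer not to say",    # explicit opt-out rather than a forced guess
]

def validate_gender_response(response: str) -> bool:
    """Accept any answer from the published choice list."""
    return response.strip().lower() in GENDER_CHOICES

# A follow-up question (asked only when relevant and safe to ask) can record
# whether local categories such as hijra or "third gender" apply.
print(validate_gender_response("Prefer not to say"))  # True
print(validate_gender_response("woman"))              # True
```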

Data must represent local gender categories

It is also important to localize the survey where gender categories and practices may vary. India is a good case study of the difficulties of language for demographic purposes. India initially provided three gender options on its passport forms: male, female, and eunuch. However, these three categories marginalized other transgender populations, so in 2014 Indian courts changed the category of ‘eunuch’ to ‘other’ on election ballots. This simple change in language not only promotes the human rights of India’s non-binary gender individuals, but also provides better data on its non-binary gender communities.

The hijra are a transgender community that has existed in South Asia for over 4,000 years. Along with a few ‘Western’ countries, at least four South Asian countries — Nepal, India, Pakistan, and Bangladesh — recognize a third gender in some legal capacity.

Global development is moving forward with programming for non-binary gender communities. The Swedish International Development Cooperation Agency put out an action plan for working with lesbian, gay, bisexual and transgender (LGBT) issues from 2007-2009. Last year USAID announced its LGBT Vision for Action, a general roadmap of how the donor would support non-binary gender communities. As programming for non-binary gender communities continues and increases, we need to think closely about the language we use and how we collect data on gender to create effective, data-driven interventions.

With development becoming more data driven, the data we collect and the biases we include in it are having a larger impact. Technology can make these biases more entrenched through features like data validation. Even though data validation is important for survey collection (it limits responses to particular choices or data types, such as phone numbers), it also restricts options to the choices made by the survey creator and can marginalize groups whose identities are not included or allowed as a valid option. Going forward, we need to be careful that we are not unintentionally marginalizing other groups or genders with the data we collect.

Interested in engaging in similar conversations around data and tech in M&E? Join us and more than 90 other international development practitioners in our upcoming course on Technology for Monitoring and Evaluation.

On this year’s International Women’s Day, we recognize the important work our alumni and partners are doing to empower women and girls across the world.

At TechChange, there are few areas where we see this empowerment happening more than in the field of family planning and reproductive health. As we’ve seen in our mHealth online course and community, many organizations are doing fantastic work in this area, including the UN Foundation and MAMA, D-Tree International, FHI 360, Jhpiego, John Snow Inc., and more.

We’re hoping to further explore the issue of gender in global development programs and technology in our upcoming online course on Gender.

Use the coupon code IWD2015 by this Friday, March 13, to get $50 off any TechChange open online course, such as mHealth, Mobiles for International Development, Gender, and more.

The global development industry is generating a lot of data on the ‘developing’ world, data that has not always been available. As technology has made data collection easier and more scalable, many in the development industry have established that monitoring (i.e., data collection) is much easier than evaluating (i.e., drawing insights from data). However, both aspects of M&E require good methodologies to ensure the data are accurately represented.

Despite making my living working with data, I am somewhat of a data skeptic. Specifically, I am skeptical of the notion that numbers and data are truth. Much like geographer Doreen Massey’s conceptualization of space as a product of social relations, data embodies social relations and biases. In other words, it is difficult to guarantee the neutrality of data and numbers in terms of how they are collected, what they show, and how they are analyzed. All of this information is subject to human bias, whether intentional or unintentional: in the way humans label data, in the limitations of finite data samples, and in the human-designed technology that can reinforce biases.

The way humans label data
Does the way we identify data represent cultural bias? In some ways, yes. Labels can be culturally problematic in the way we classify data and the way people interpret those classifications. For example, when collecting demographic information for a survey, limiting gender to two categories can reinforce our own notions of gender and unintentionally bias the data. India and Nepal, for example, both recognize a third gender on official documents; M&E data in these countries, however, do not always reflect this change. Moritz Hardt, a researcher at IBM, notes five ways that big data is unfair. Along with different cultural understandings and the consistent, if unintentional, representation of social categories (e.g., race and gender), Hardt notes sample size as a problem.

Limited sample sizes of data
The issue of certain groups not being represented in the data is a particular problem for global development. A recent study by the Global Web Index highlights that geolocation can lead to groups in the ‘developing’ world not being counted by web analytics; virtual private networks (VPNs), a common tool for accessing blocked sites, and shared devices are some of the main culprits. Additionally, privacy concerns can change responses, skew the data, and limit the sample size of quality data. For example, in some societies, even if a woman owns a cell phone, she is not always free to respond without having her calls and text messages monitored.

Are we training machines to mimic our cultural biases that are in data?
This human bias within data is of particular concern for predictive modeling and big data, both of which are starting to enter development, as seen in reports by UN Pulse and the World Economic Forum. An algorithm for predictive modeling is just training a machine on the data it is given, so if the data are biased, the prediction will be biased. According to a Wired magazine article quoting Danielle Citron, a University of Maryland law professor, humans can trust algorithms too much, in that “[…] we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives.”
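A toy example makes the "biased data in, biased prediction out" point concrete. The records below are invented: a naive model that memorizes the most common outcome per group, trained on a sample where one group is small and unrepresentative, ends up systematically penalizing that group.

```python
from collections import Counter

# Invented training data: (group, outcome) pairs. Group B is underrepresented,
# and the few Group B records we do have are not typical of Group B overall.
training_data = [("A", "repays_loan")] * 90 + [("B", "defaults")] * 10

def train_majority_model(data):
    """'Train' by memorizing the most common outcome per group."""
    outcomes_by_group = {}
    for group, outcome in data:
        outcomes_by_group.setdefault(group, []).append(outcome)
    return {g: Counter(o).most_common(1)[0][0] for g, o in outcomes_by_group.items()}

model = train_majority_model(training_data)
print(model)  # {'A': 'repays_loan', 'B': 'defaults'}

# The model now predicts 'defaults' for every Group B applicant, purely because
# the sample was skewed: the bias in the data became the bias in the prediction.
```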

So what does data bias mean for global development and M&E professionals?
Global development needs to continue being data-driven; indeed, one of the Principles for Digital Development focuses on data-driven decision making. It is equally important that we recognize and understand the biases we incorporate into datasets and the biases of the datasets we use.

At the end of the day, Tech for M&E begins with the humans behind the data. With the vast amounts of data provided by modern digital data collection tools, M&E practitioners need to act as gatekeepers and take note of the bias we are embedding in our data.

===

Interested in this topic of data in global development and measuring results? Join our top-selling online course on Technology for Monitoring & Evaluation, which begins April 20, 2015.