We consume and produce data at ever-growing rates, aiming to better understand the past, observe the present, and be better prepared for the future. However, data can only fulfill its purpose when we can make sense of it, generate insights, and put it into action. Turning data into insights requires many steps, and doing it effectively involves many strategies.

Step 1: Visualization

One of these key steps is visualization: the visual organization of data using various shapes, sizes, colors, and layouts. Visualization produces data charts such as bar graphs, line graphs, scatterplots, and even maps and networks. This step helps us make sense of large volumes of abstract information without much effort. Using the visual language of data effectively provides a natural, intuitive way to see and understand features and trends in data.

Step 2: Interaction

Another key step is interaction. When you ask questions, focus on certain properties, or change the visual representations, you are engaging in an interactive dialogue with your data. The vast computing capabilities of our digital devices allow us to dynamically filter across categories, re-sort items, pan map views, retrieve details, and explore alternatives. Together, visualization and interaction let you find the answers you are looking for, and answer questions you didn’t know you had!

Step 3: Putting It All Together

How can you start your data dialogue? There are many tools to help you collect, transform, and visualize data in many different forms. With so many options, however, choosing the best approach based on your needs, your data, and your experience level is not trivial. You may start visualizing your data with tools that offer a graphical interface, which let you import a dataset and construct data charts by selecting chart types or mapping data attributes to visual elements and components (shapes, colors, layouts, the x-axis, etc.). Yet existing graphical charting tools still require training to make effective visualization decisions, or they do not easily let you engage in rich analytical conversations with your data across multiple synchronized perspectives. For customized analysis and design needs, you can use programming-based tools, but these require significant technical knowledge to figure out and execute the best strategies. How can you get the most from your data, with the least amount of effort, and in the shortest time?

The Solution: Keshif

Keshif is a new web-based tool that brings your tabular data to life by converting it into a beautiful, interactive visual interface. Unlike other tools, it creates an environment where you focus on interpreting your data, rather than specifying visualization details and getting lost in the many visual options that may slow you down or mislead you. Keshif is designed to fit your data exploration needs and the structure of your data, and it builds on visualization best practices. Your categorical data becomes bar graphs, your numeric data becomes histograms, and your time data becomes line graphs, all without any effort. For more in-depth analysis, you can view your data by percentiles and map regions. Each record of your data can be shown individually in a list, grid, map, or network (if your data supports it).
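To make that type-to-chart mapping concrete, here is a minimal sketch in Python with pandas (not Keshif's own code, and with an invented sample dataset) of how a tool can pick a sensible default chart from a column's data type:

```python
import pandas as pd

def suggest_chart(series: pd.Series) -> str:
    """Suggest a default chart from a column's data type, mirroring the
    categorical -> bar graph, numeric -> histogram, time -> line graph
    convention described above."""
    if pd.api.types.is_datetime64_any_dtype(series):
        return "line graph"
    if pd.api.types.is_numeric_dtype(series):
        return "histogram"
    return "bar graph"

# Invented sample data for illustration.
df = pd.DataFrame({
    "country": ["Ghana", "Senegal", "Zambia"],
    "enrollment": [1200, 950, 1100],
    "survey_date": pd.to_datetime(["2015-01-05", "2015-01-12", "2015-01-19"]),
})

for column in df.columns:
    print(column, "->", suggest_chart(df[column]))
```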

Everything in a Keshif data browser is connected and highly responsive, so every action is a potential path to a new insight. You can highlight to get a quick preview, filter to focus on details, or lock a selection to compare different sections of your data easily. You can import your data into Keshif from Google Sheets, CSV, or JSON files and decide which attributes you want to explore, summarising their characteristics and manipulating your data in various ways to explore different trends and relations. From journalism to government, transportation to finance, and music to sports, Keshif can be used for data in many different domains. With its minimal yet powerful features, Keshif lets you make sense of your tabular data quickly, analyze it from multiple perspectives, and reach new insights.


Keshif is under active development by Ph.D. candidate M. Adil Yalcin and his advisors, Professor Niklas Elmqvist and Professor Ben Bederson, at the Human-Computer Interaction Lab at the University of Maryland, College Park. To learn more about Keshif, visit its homepage at www.keshif.me, find a topic that interests you among the 150 specially compiled datasets, and watch the short tutorial video.

About the Author:

M. Adil Yalcin is a Ph.D. candidate in the Department of Computer Science at the University of Maryland, College Park. His goal is to lower human-centered barriers to data exploration and presentation. His research focuses on information visualization and interaction design, implementation, and evaluation. He is the developer of Keshif, a web-based tool for rapid exploration of structured datasets. In his previous work, he developed computer graphics techniques and applications.

If you have any further questions, join Keshif’s email list or contact yalcin@umd.edu.


Let’s pick up where we left off in Part 1 of our survey design for quality data series, which was inspired by Dobility founder and CEO Dr. Christopher Robert’s presentation in the TechChange course “Technology for Data Collection and Survey Design.” Lesson 1 focused on designing your survey with empathy for field staff and respondents. Lesson 2 highlighted SurveyCTO tools for building in relevance and constraints. With Lesson 3, we’ll jump into a number of ways that SurveyCTO enables you to automate monitoring and workflow.


Lesson 3: Automate Monitoring and Workflow 

The staffing structure for a typical survey might look something like this: a research team designs a survey. Thousands of miles away, a field team conducts the surveys. The collected data then goes back to the research team for analysis.

The research team wants to be able to monitor the field team and audit their work throughout the process. Supervisors on the field team may also want to monitor their enumerators. And, just to complicate things, the research team may also hire a survey firm to conduct the survey itself or to provide an additional layer of monitoring for the field team.

In the case of traditional paper surveys, quality checks might include:

  • a member of the research team accompanying enumerators in the field
  • a supervisor reviewing surveys as they come in
  • an independent team conducting “back-checks” after initial surveys are completed, to corroborate the results

Many of the quality checks available when conducting a paper survey occur AFTER the initial surveying is complete. You may not know you have bad data until it’s too costly – and too late – to do anything about it.

One of the most compelling opportunities afforded by SurveyCTO is the ability to easily program a number of quality checks into your survey that can automatically flag issues as they arise. Not only that, with a little extra work up-front, you can prep your data to make the transition to visualization and analysis even faster.

Example 1: Audio audits and speed limits
Back-checks are time-consuming and expensive, so why not listen in from the office? You can program your SurveyCTO surveys to randomly capture audio throughout an interview.

Or, even better, pair audio audits with “speed limits,” which allow you to indicate the minimum time that a particular question should take to ask and answer properly. For example, you can program your survey to automatically start recording after the enumerator violates three speed limits – meaning they didn’t take enough time on three different questions within the same survey.

Since audio audits and speed limits are programmed by the research team, the field team won’t know the specifics – they’ll just know that there’s an additional layer of accountability.
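For illustration only, here is a rough sketch of the speed-limit logic in Python rather than SurveyCTO's actual form configuration; the question names, minimum durations, and the three-violation threshold are all hypothetical:

```python
# Hypothetical minimum number of seconds each question should take.
SPEED_LIMITS = {"q_household_size": 10, "q_income": 20, "q_crop_yield": 30}
VIOLATIONS_BEFORE_AUDIO = 3  # start recording after three rushed questions

def flag_for_audio_audit(question_durations):
    """Return True if this interview should trigger an audio audit,
    i.e. the enumerator rushed at least three speed-limited questions."""
    violations = sum(
        1
        for question, seconds in question_durations.items()
        if question in SPEED_LIMITS and seconds < SPEED_LIMITS[question]
    )
    return violations >= VIOLATIONS_BEFORE_AUDIO

# Example interview in which all three questions were answered too quickly.
print(flag_for_audio_audit({"q_household_size": 4, "q_income": 5, "q_crop_yield": 12}))  # True
```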

Sample speed limit: [screenshot]

Example 2: Automated checks
Our most sophisticated users write quality checks in Stata code to automatically flag data that doesn’t behave as expected. But we wanted to ensure this best practice is available to all of our users, which is why we’ve built the feature into SurveyCTO.

Spend a few minutes during the survey design phase to set up at least one automated check, and you’ll not only be able to identify and address issues right when they arise, you’ll also have more reliable data to work with once your surveying is complete.
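To show the idea outside of SurveyCTO, here is a small sketch in Python with pandas that flags records that do not behave as expected; the field names and valid ranges are hypothetical, and SurveyCTO's built-in checks are configured differently:

```python
import pandas as pd

# Hypothetical exported survey submissions.
df = pd.DataFrame({
    "submission_id": [101, 102, 103, 104],
    "respondent_age": [34, 7, 52, 131],
    "household_size": [5, 4, 0, 6],
})

# Each check marks records that do not behave as expected.
checks = pd.DataFrame({
    "age_out_of_range": ~df["respondent_age"].between(15, 110),
    "empty_household": df["household_size"] < 1,
})

# Flagged submissions can then be reviewed while the survey is still in the field.
flagged = df[checks.any(axis=1)]
print(flagged)
```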

Sample automated check: [screenshot]

Example 3: Concatenate and calculate
Let’s say your survey splits first name and last name into two fields, but you would prefer they appear as one field during the analysis stage. You can easily program the form builder to concatenate – or link fields together – so that when you output the data, it’s already formatted the way you want it. You can also set up automated calculations, which can help with analysis or serve as a useful relevance trigger during the survey itself.
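Inside SurveyCTO this is configured in the form builder, but the same idea is easy to picture in code. The sketch below, in Python with pandas and hypothetical column names, concatenates two name fields and adds a simple calculated field after export:

```python
import pandas as pd

# Hypothetical exported records with split name fields.
df = pd.DataFrame({
    "first_name": ["Ama", "Kofi"],
    "last_name": ["Mensah", "Owusu"],
    "children_under_5": [2, 3],
    "children_5_to_17": [1, 2],
})

# Concatenate the two name fields into a single display field.
df["full_name"] = df["first_name"] + " " + df["last_name"]

# A simple automated calculation, e.g. total children per household,
# which could also feed a relevance condition during the survey itself.
df["total_children"] = df["children_under_5"] + df["children_5_to_17"]

print(df[["full_name", "total_children"]])
```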

Sample calculation: [screenshot]

Example 4: Visualize and analyze
As soon as your data is uploaded, you can take advantage of our integrations with Statwing, Google Sheets, Google Earth, Stata, Excel, and Salesforce (via OpenFn.org), or export it to JSON or CSV file formats and start analyzing it in the platform of your choice.

Using a mobile data collection platform enables you to skip the laborious and error-prone step of data entry. Instead of spending months entering, checking, and rechecking the data you collected – not to mention storing hundreds (or thousands!) of survey booklets – you can start analyzing your data the day it’s collected.

Sample integration with Statwing: [screenshot]

Final Thoughts

Just remember that even experienced survey designers struggle at times with developing the best structure for exploring a research question and setting up the systems to minimize the risk of collecting bad data. Hopefully this series on survey design for quality data has given you some ideas for how to approach your next project. And if there are any additional topics you’d like us to cover, please leave them in the comments.

Read Part 1 of the series here. This article was originally published on the SurveyCTO blog.

About Alexis
Alexis Ditkowsky is the community and business strategy lead for Dobility, the company behind SurveyCTO. Her experience spans social entrepreneurship, international education policy, higher education, and the arts. She holds a Master of Education from the Harvard Graduate School of Education.


1. Privacy

Responsible data management is not new to development. However, the use of technology-enabled tools for M&E has raised new challenges related to the privacy of individuals. These include the growing use of biometric data for tracking and of sensors to monitor daily habits. The collection of personal financial information and affiliations has also made it vital to consider data security when setting up an M&E framework. This can be addressed through data encryption, ensuring that individual data is not easily identifiable, and developing a policy that ensures responsible data practices. Furthermore, organisations need to be aware of the ethical implications of collecting data on people and the necessity of securing all required permissions and consents. It is also important to be transparent with the respective individuals about the methods of collection, why data is collected, and how it will be used. Finally, ownership has to be explicit when information is shared, and a plan should be in place for what happens to collected data once a project ends. In South Africa, the Protection of Personal Information Act, 4 of 2013, also lends a relevant and interesting dynamic.
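As one concrete illustration of making individual data not easily identifiable, the sketch below (in Python, with invented field names and identifiers) replaces a direct identifier with a salted hash before a record is shared. This is only one small piece of a responsible data practice, not a complete solution:

```python
import hashlib

# Keep this salt secret and separate from the shared dataset.
SECRET_SALT = "replace-with-a-long-random-secret"

def pseudonymize(identifier: str) -> str:
    """Return a short, stable pseudonym for a direct identifier."""
    digest = hashlib.sha256((SECRET_SALT + identifier).encode("utf-8")).hexdigest()
    return digest[:12]

record = {"name": "Jane Doe", "national_id": "8001015009087", "village": "Kasane"}

shared_record = {
    "participant_code": pseudonymize(record["national_id"]),
    "village": record["village"],  # indirect identifiers may still need review
}
print(shared_record)
```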

2. The end-user in mind

To select the most suitable technology-enabled tool(s), take a human-centered design approach to the selection process; this will ensure that the organisation does not end up with an irrelevant or unnecessary tool. The approach starts with identifying what is desirable (one should consider project managers as well as community members, i.e. the people who will be using the tool), then viewing the solution through a feasibility and viability lens. This increases the usability of the tool and ensures that no segment of the community is “ignored” as a result of the selected tool, i.e. it forces you to think about the accessibility of the tool and the training that would be required. Once identified, the tool should be piloted on one project before rolling it out.

3. Getting the right balance

Technology facilitates, but does not replace, M&E methodologies such as a well-thought-out theory of change and a quality M&E plan. Still, it may be tempting to fall into the habit of selecting or collecting data based on the easiest tool rather than on what really matters to your program. Furthermore, technology can lead to over-dependence on digital data and to missing the opportunity to observe and interact with communities in order to get a comprehensive picture of an intervention. To get the right balance, one must be very clear on the value the tool will add.

Although there are other factors to contemplate, the above three points offer a good guide to anyone considering the use of technology-enabled tools in their programs. With the ever-growing need to understand and measure impact, the integration of technology, from the delivery of services and the monitoring of interventions to the evaluation of programs, will continue, as it offers new possibilities for increasing reach, moving to scale, and improving the efficiency and effectiveness of interventions.

This article was originally posted on the Tshikululu Social Investments blog. Photo courtesy of Jan Truter, Creative Commons.

About Amira


Amira Elibiary is a Monitoring and Evaluation (M&E) specialist with 10 years of experience in research, grant-making, and program management, including over two years of experience in the corporate social investment sector on education, health, and social development projects. She has a keen interest and extensive experience in democracy, governance, advocacy, and rule-of-law work. Amira holds a Master’s degree in International Affairs from American University and a BA degree in Economics.

Today, most of us can have Pad Thai, a craft cocktail, and a professional masseuse all arrive at our doorstep with a click of a button on our phones, but the same can’t be said about data for our projects. I can’t tell you whether the thousands of schools we paid for last year were actually built and functioning! How about an on-demand service for that data-delivery?

The on-demand economy is delivering increasingly brilliant things for our daily lives – at least in advanced economies. There are so many on-demand food delivery options that investors now see the market beginning to saturate. Last year, over $3.89 billion in venture financing alone went to on-demand startups other than Uber.

But it has yet to penetrate how we do business. First Mile Geo wants to change that.

Insights on Demand
We call it Insights On Demand. Drop a pin anywhere in the world, place a bid, task a local to capture data on your behalf, and generate near real-time dashboards, maps, and comparative analytics. No tech team, no GIS specialists, no field managers tabulating survey results. The entire process delivered, on-demand.


How it works
The process is pretty simple. Like all of the other tools you may have seen in First Mile Geo (e.g. mobile, SMS, web surveys, physical sensors), all you have to do is create a form or survey, then select the technology for collection – in this case ‘on demand’.

Drop a pin (or multiple), set a sample size (running a survey?), set a bid on how much you’re willing to pay, and you’ll see results shortly thereafter.


As data arrives, you’ll be greeted with real-time maps, dashboards, and PowerPoint or PDF executive briefing documents in your preferred language.


A future envisioned
Today, there are over four dozen mobile data collection apps. And that’s not even counting the other ways we use phones, like SMS, IVR, or mass analysis of phone use patterns. But regardless of how we use these tools, data analytics can still be time-consuming: identify the need, allocate resources, create a survey or form, train enumerators, analyze results, write up findings, brief it, and market the successes.

The future of data analytics in development, where systems are smarter and the institutional burden is lessened, is arriving. We think data delivered on demand, through services like our affiliate partners at Findyr, will have a major role to play in realizing it.

We are excited to have Matt present a demo of First Mile Geo’s Insights on Demand in our data collection course tomorrow! Interested in learning how to implement technology for your M&E needs? Check out our courses related to Technology for Monitoring and Evaluation.

About Matt

Matt McNabb is CEO of First Mile Geo and a member of the TechChange Board of Advisors. He also serves as an Adjunct Fellow at the American Security Project and a member of the Board at Epirroi, a Beirut-based management consulting firm.

Imagine a tool that takes your text and automatically highlights key themes. No need to do complex coding, no word counts used to explore the text — just keywords and phrases identified. This is exactly what the tool Textio does for job descriptions. It automatically provides an effectiveness score and identifies words and phrases that affect whether applicants will apply for a job: through color coding, it flags words that can act as a barrier or an incentive, words that affect applicants differently based on gender, and repetitive terminology. [Editor’s note: TechChange participated in a closed-beta test of the tool and we will write a separate blog post about Textio and hiring practices. This is not a sponsored post.]

This tool not only has great implications for hiring, but also uses simple visualizations to analyze qualitative data. As Ann Emery and I have been preparing for the Technology for Data Visualization course, we have been discussing how best to address the topic of data visualization for qualitative data. While data visualizations have been featured in art museums (e.g., Viégas and Wattenberg’s Wind Map), most visualizations are designed to convey information first.

Textio uses a custom algorithm to do a type of sentiment analysis. Typically, sentiment analysis measures how positive or negative a text is based on a word’s meaning, connotation, and denotation. Textio, on the other hand, focuses on how effective words or phrases are at getting people to apply for jobs and on whether those applicants are more likely to be female or male. Once a word or phrase crosses a specified threshold of effectiveness or gendered language, it is highlighted with a color indicating whether it is positive or negative and/or masculine or feminine. The gender tone of the entire listing is shown along a spectrum.
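Textio's model is proprietary, but the general mechanism of matching words against curated lists and rolling the matches up into a tone score can be sketched in a few lines of Python; the word lists below are invented purely for illustration:

```python
# Invented word lists, purely for illustration of the mechanism.
MASCULINE_CODED = {"dominant", "competitive", "rockstar", "aggressive"}
FEMININE_CODED = {"collaborative", "supportive", "nurture", "community"}

def gender_tone(text):
    """Collect list matches and summarise the overall tone of a posting."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    masculine_hits = [w for w in words if w in MASCULINE_CODED]
    feminine_hits = [w for w in words if w in FEMININE_CODED]
    # Positive score leans feminine-coded, negative leans masculine-coded.
    score = len(feminine_hits) - len(masculine_hits)
    return {"masculine": masculine_hits, "feminine": feminine_hits, "tone_score": score}

posting = "We need a competitive rockstar to nurture a collaborative, supportive team."
print(gender_tone(posting))
```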

Acumen, a tool created at Al Jazeera’s 2014 Hackathon: Media in Context, is another take on how to visualize sentiment analysis. With a focus on uncovering bias in news articles, it highlights how positive or negative an article is in relation to other articles on the topic. A separate analysis tab shows the two sentiment ratings on a spectrum along with ‘weasel words,’ words that are indicative of bias in reporting. The viewer also has the option to highlight the weasel words in the news article.

Both Textio and Acumen are great examples of how qualitative data visualization can aid in the analysis of text. Neither example is immediately suited to generalized needs, and both require programming knowledge to adapt to a particular purpose, which Kevin Hong and I will discuss in a forthcoming blog post. Instead, they can be used as examples of how qualitative data can be visualized to help inform decision making.

Have you used Textio or Acumen? Share your thoughts with us below or by tweeting us at @techchange!

Simply including technology in your M&E plan does not lead to better M&E. This was the mantra as I started TechChange’s Tech for M&E online course in January. I heard about this online course from a colleague, and since the Commonwealth of Learning is constructing a new six-year strategic plan with a crucial M&E component, enrolling in the course was a no-brainer. I enrolled and entered a community of more than 180 international development professionals from over 50 countries. All of us in the course agreed that technology is not the main focus of M&E but rather an important enabler for collecting, analyzing, and presenting data.

At the end of the course, I reflected on things you must do to successfully integrate technology in M&E:

1. Establish trust between different stakeholders 

When it comes to using technology for data collection, data security and data privacy are very important concerns. As stakeholders require stronger evidence of impact and the use of data to make informed decisions, building a partnership based on trust among all stakeholders is one way to ensure a regular flow of data. This includes having all confidentiality and transparency issues addressed with the stakeholders. All stakeholders need to have confidence in each other and in the process, and to know that their data will be protected, so there must be enough time and resources for engagement with the stakeholders.

2. Spend adequate time in preparation

Another key element is the level of preparation required for any use of technology for M&E. Using checklists, applying ICT in the different steps of a program/project cycle (diagnosis, planning, implementation/monitoring, evaluation, reporting/sharing lessons), and involving an M&E expert are all important. All ethical and cultural issues need to be raised with the stakeholders and, to the extent possible, addressed upfront.

It is important to ask these questions as you prepare your M&E plan:

  • How does M&E fit into the organization’s strategy and contribute to achieving the goal?
  • What training and support do field workers require, do they have good relationships with the community, and what technology will enable the best data collection?
  • What role will the field workers play to increase responses to qualitative and quantitative data collection?
  • How does one build confidence with all partners so that the quality of data collected enables better analysis and results?

The course checklist (a quality M&E plan, a valid design to achieve the results, and the appropriate technology) is very helpful. It also emphasizes the importance of testing the plan with stakeholders, identifying how to collect data at the different levels of the project, and ensuring there is strong motivation to participate.

3. Consider a mixed method approach to data collection

The key focus of my work is in education, both formal and non-formal. So far, using quantitative data has been the norm for any analysis of education provision and its value to society. But mixed method data collection (quantitative and qualitative) is being increasingly used in education, especially to inform the teaching and learning processes. It allows for an understanding of classroom practices, the learner’s context and its impact on education, offering a more holistic view of whether a project has improved learner performance and learning.

Using ICT for mixed-methods data collection has made the process easier and has simplified data storage, tagging, and analysis. Mobile phones can be used to record, transmit, and tag the data, and data storage platforms can provide easier access to data, aggregate the responses, and support analysis.

The course points to four key questions when designing a mixed method evaluation:

  • At what stage(s) does one use the mixed method?
  • Are qualitative and quantitative methods used sequentially or concurrently?
  • Will each method have equal weighting or will one method be more dominant?
  • Will the design be single-level or multi-level?

Just plugging technology into your M&E plan doesn’t do much, but there are steps you can take to integrate technology into M&E so that you can collect and analyze data better. These were my takeaways from the course, but it offered many other useful insights into the use of technology for M&E, such as valuable checklists, platforms that can be used, issues to address for the successful use of technology, and a focus on mobile phones. I would recommend the course to anyone looking to learn more about the use of technology in monitoring and evaluation.

Author bio


Vis Naidoo is the Vice-President at Commonwealth of Learning in Vancouver, Canada. He has spent much of the past 20 years involved in the development of open and distance learning systems, educational technology policy and the applications of technology to education – both in South Africa and internationally.

PreMAND field workers testing data collection tablets in Navrongo, Ghana (Photo: N. Smith)

Mira Gupta, a star alumna of our courses on Mapping for International Development and Technology for Monitoring & Evaluation (M&E), is a Senior Research Specialist at the University of Michigan Medical School (UMMS). Last October, USAID awarded UMMS $1.44 million to assess maternal and neonatal mortality in northern Ghana. This 36-month project, “Preventing Maternal and Neonatal Mortality in Rural Northern Ghana” (also referred to as PreMAND: Preventing Maternal and Neonatal Deaths), will help USAID, the Ghana Health Service, and the Ghana Ministry of Health design interventions to prevent maternal and neonatal mortality by investigating the social, cultural, and behavioral determinants of such deaths across four districts in northern Ghana. For this project, UMMS will be partnering with the Navrongo Health Research Centre and Development Seed.

Project Regions and Districts

We sat down with Mira to learn more about this project and how her TechChange trainings in digital mapping and technology for M&E gave her the skills and background she needed to develop her team’s project in Ghana.

1. What interested you in taking the Mapping for International Development and the Tech for M&E online courses?
I was in the process of trying to learn everything I could about our GIS options when I heard about TechChange’s Mapping for International Development course. It provided a fantastic introduction to the range of approaches being used on international development projects and the variety of organizations working in that space. The course material helped me identify which types of visualizations would be most appropriate for my team’s research. I especially benefited from the many sectors represented in the TechChange sessions because while I was trying to create a project for the Health sector, I actually learn best through a Democracy and Governance framework given my previous background in this field. TechChange provided access to mapping specialists in both areas through its instructors and other class participants.

Just as I heard of the mapping course right as I needed it, the same thing happened again with the Technology for M&E course, which I took a year later. By that point, the PreMAND project had just been awarded and I learned that I would be responsible for the evaluation components. I was excited to take the TechChange course because I knew it would provide a great overview of the many different tools being used, and that I would benefit immensely from the participation of classmates working on projects in similar settings. As expected, the content presented was incredibly valuable in informing our project approach in terms of our field data collection, methods of analysis, and presentation of findings.

2. How did the mapping component of this USAID-funded project come together?

The Three Project Phases: Research will inform the visualisations, which will inform programming

While working on a maternal and neonatal health qualitative study a couple of years ago, I sensed that there were themes and patterns in the data that were difficult to verify since the locations of the respondents had not been geocoded. Some of the variables indicated 50/50 probabilities of any particular outcome, which seemed to suggest that there was no pattern whatsoever when viewed as a large dataset. Because my background is in Democracy and Governance, I used election maps to illustrate to my research team that once geocoded there might in fact be very distinct geographical trends in the data, drawing parallels to the locational breakdown of political party support in the United States.

I was in the process of researching mapping resources when I first heard about the TechChange’s Mapping for International Development course, and through the course I met some of the mapping experts that ultimately served as key resources in the development of our project strategy. The course gave me the necessary base knowledge to effectively liaise between our health researchers and the mapping experts to determine the best approach to meet our data visualization needs. We were extremely fortunate to have USAID-Ghana release a call for outside-the-box submissions under its Innovate for Health mechanism, right as we were developing our program concept.

3. What are the biggest challenges you anticipate in undertaking this project?
For the visualization component, generating the base layer maps will be more difficult than we originally anticipated. The various pieces of data we need are spread across different government sources such as the Ghana Statistical Service, the Lands Commission, and the Ministry of Roads and Highways. We will need to consult with each of these groups (and likely many others) to explore whether they will allow their data to be used by our project. It will require some agility on our part; we need to stay flexible enough to collect any outstanding geographic data through our team of field workers. While there are many moving pieces at the moment, it’s exciting for us to think that we’re building what may be the most comprehensive geographic base layer map of the region, as an initial step in developing our health indicator analysis tool.

There are also a handful of challenges related to evaluation. The primary purpose of our project is to provide new information to clarify the roles of social and cultural factors in determining maternal and neonatal deaths, and shed light on a valuable set of drivers which up until now have been unclear. We are currently in the process of finalizing our M&E framework, which has been a complex process because our project doesn’t fit the mold that most performance indicators are designed for. As a result we’ve been carefully drafting our own custom indicators through which we’ll measure our project’s progress and impact.

One of our most interesting evaluation challenges has been the development of our Environmental Mitigation and Monitoring Plan, which is traditionally intended as a tool for implementing partners to take stock of the impacts their work could have on the natural environment. In our case, we’re using it as a tool to think through our ethical approach to the potential impact of our project on the social and cultural landscape, given the challenges associated with collecting very sensitive health information and the need for data privacy. It’s pushing our team to think through every step of our project from the perspective of our various stakeholders, and has yielded many valuable insights that have strengthened our program approach.

4. What tools did you become familiar with in Mapping for International Development and Tech for M&E that you plan to use in this project, and how will you apply them?
I came into Mapping for International Development knowing very little about the resources available in that space. Several of the tools that I became familiar with through the class, such as OpenStreetMap, MapBox and QGIS were highly applicable to our project in Ghana. After participating in the session led by Democracy International and Development Seed, I reached out to those instructors for their input on how I could best translate my project concept into actionable steps.

The visualizations I hoped to create were complex enough that I soon realized it would make the most sense for our research team to work directly with a mapping firm. We were so impressed by the technical feedback and past projects of Development Seed that we established a formal partnership with them and worked together to refine the vision for the project that was ultimately funded. TechChange’s training gave me the knowledge I needed to select the right partner and understand how best to combine our research goals with the available mapping resources to maximize our project’s impact.

Programs used on the PreMAND project

In Technology for M&E I learned about the capabilities of different devices, survey apps—those able to capture geodata were of particular interest to me—and even project management tools. There were many helpful conversations both in the class sessions as well as in the participant-led threads around the data collection process, data privacy, and the ways in which project findings can be best communicated to a variety of stakeholder groups. What I found to be most relevant and applicable to our Ghana project were the conversations surrounding human-centered design, and the use of rich qualitative data. I gained a lot from the session led by Marc Maxson of GlobalGiving, who discussed which forms of data are the richest and easiest to interpret. The University of Michigan and our partner the Navrongo Health Research Centre already excel in qualitative data collection techniques, but the conversations throughout the TechChange M&E course inspired some new ideas as to how we might incorporate multimedia such as video and photographs in our qualitative data collection process to make our project deliverables that much more substantive.

5. What is your advice for researchers working to integrate more data visualization and mapping in their research and project interventions?
My advice would be to focus on the end user of your data and identify their needs and interests early in the process. That clarity can then be used to inform 1) what content will be most useful, and 2) what presentation format(s) will be most effective. It’s important to do some form of a needs assessment and let stakeholder feedback guide the project’s design.

In the case of our Ghana project, we are implementing a two-pronged approach to our visualizations, because both the government representatives and our donor will find an interactive web application most useful, while local community members in the rural North will benefit more from group discussions centered around printed maps.

Feedback loop with two stakeholder groups: the government of Ghana and local communities

It is common to present health indicator data solely as points on a map, but we are designing our visualizations to be much more detailed, with background layers including health facilities, schools, compounds, and roads, so that those viewing the health indicator data can orient themselves a bit better to the local context. Had our end users only been the leaders of those individual communities, such detailed maps may not have been necessary. Similarly, the visualizations for one stakeholder group might incorporate a lot of words or even narrative stories based on their level of education, while for other stakeholders, the visualizations will be more image-based and we’ll orient them to the maps through presentations in their local communities.

About Mira Gupta


Mira Gupta is a Senior Research Specialist at the University of Michigan Medical School (UMMS), where she focuses on program design, strategy, and evaluation. She has developed successful international aid projects in 18 countries, including 13 in Africa. Mira began her career in the Democracy and Governance sector, where she worked for organizations such as IFES, the National Democratic Institute, and the Carter Center. She also developed projects in the Economic Risk and Conflict Mitigation sectors before transitioning into Global Health. Her research on the effects of local power dynamics on health-seeking behavior in northern Ghana is published in the current edition of Global Public Health.

Monjolo energy-meter sensor created by University of Michigan’s Lab11. Image credit: Umich Lab11

How can remote sensors and satellite imagery make monitoring and evaluation easier and more accurate than pen-and-paper surveys?

In my most recent post, on the technology tools for M&E we discussed in the inaugural round of TechChange’s Technology for M&E online course, the course participants and I shared several current digital data collection tools. These tools include Open Data Kit (ODK), Magpi, SurveyCTO, Taro Works, Mobenzi, Trackstick, and Tangerine.

Generally speaking, the global development community recognizes the benefits of mobile data collection – as Kerry Bruce said in the first course, mobile data collection is a “no brainer.” But what cutting-edge technologies, like remote sensors and satellite imagery, are available for collecting large amounts of data more efficiently and accurately?

In this upcoming round of the course, we will be highlighting these cutting-edge technologies, how they have been used in development, and how they can change data collection methods. Satellite imaging is being used to remotely monitor illegal mining, urban development, and deforestation in real time. Remote sensors provide accurate, real-time measurements of the adoption of water pumps or cookstoves by detecting motion and reading temperature.
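To make the sensor idea concrete, here is a toy sketch in Python of how temperature readings from a stove-use monitor might be turned into a count of cooking events using a simple threshold rule; the readings and threshold are invented, and real monitors use more careful signal processing:

```python
# Hourly temperature readings (deg C) from a hypothetical stove-use monitor.
READINGS = [22, 23, 25, 60, 85, 90, 70, 30, 24, 23, 55, 80, 40, 25]
COOKING_THRESHOLD = 50  # invented threshold separating cooking from ambient heat

def count_cooking_events(readings, threshold=COOKING_THRESHOLD):
    """Count upward crossings of the threshold as distinct cooking events."""
    events, cooking = 0, False
    for temperature in readings:
        if temperature >= threshold and not cooking:
            events += 1   # temperature crossed upward: a new event begins
            cooking = True
        elif temperature < threshold:
            cooking = False
    return events

print(count_cooking_events(READINGS))  # -> 2 cooking events in this sample
```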

We’re very excited to have presentations from the Center for Effective Global Action (CEGA) in the upcoming TC111: Technology for Monitoring and Evaluation course. CEGA is a global development research network that works to create actionable evidence for policy makers and program designers. They also design and test new technologies to find new solutions to the problem of poverty.


CEGA is researching how remote sensors and satellite imaging can help create more accurate measurements. We have three researchers on this topic joining us in the course: Guillaume Kroll from CEGA’s Behavioral Sensing program; Dan Hammer, a Presidential Innovation Fellow working with the White House and NASA to increase access to satellite imagery for the non-profit sector; and Pat Pannuto, who researches sensors at Lab11 and is working on numerous projects with CEGA.

Dan Hammer was the Chief Data Scientist at the World Resources Institute’s Data Lab, where he worked to analyze deforestation from satellite images. In a recent article, Guillaume highlights how remote sensors are reshaping the way we collect data, including CEGA’s success with stove usage monitors in Darfur. Pat presented at the Development Impact Lab’s State of the Science on how remote sensors can be used for data collection and monitoring (slides available here). Pat is trying to “solv[e] the ‘last inch’ problem: bringing connectivity and computing capability to everything” and thinking about how connected devices (the Internet of Things) can impact global development.


Patrick Pannuto: “Sensing Technologies for Data Collection & Monitoring”

Interested in learning more about cutting-edge technology for M&E, including remote sensors and satellite imagery? Join our upcoming course on Technology for Monitoring and Evaluation.

This feedback on mHealth concerns a field mission I undertook in July 2014. I visited one of Handicap International Federation’s HIV and disability projects being implemented in the region of Ziguinchor in the south of Senegal. Like many other organisations represented by colleagues in TechChange’s mHealth course, Handicap International is actively exploring how mHealth can best fit with what we offer, not only to people with different impairments (our main focus), but also to various communities confronted with different issues, be they related to development, relief, or emergency settings.

I realised that our project had been provided with two Android phones from CommCare to collect data as a “pilot activity” (not initially designed into our project, but rather added on to our M&E system and tools). The project M&E officer in charge was supposed to learn how the system worked, and two project community mobilizers were supposed to collect specific information to feed into the beneficiary and activities database.

What happened with this pilot was quite interesting. There was no specific planning or budget assigned to this seemingly exciting additional activity, and after discussions with CommCare, they graciously provided the project with two phones and basic training for the staff. Project staff started the process of collecting data, but it didn’t work because the phones had no credit. So, they added credit and restarted the process of collecting data. Data were entered and things seemed to be on the right track. Knowing this, the M&E specialist in charge wanted to synchronise the system to see what the data looked like. It didn’t work. After another brainstorming session, the team learned that they had to configure other settings on the phones so that data could reach the other end. Furthermore, given that this was an “extra activity”, problem-solving was not that fluid with CommCare, as it was not the priority of either party. And the barriers continued, to the point that no one really bothered with whether the phones were useful to the project, to the beneficiaries, to the staff, or to the system.

A few lessons learned from this minuscule pilot trying to use mobile technologies for data collection (and arguably for other aspects of project management and global development):

  1. If the rationale is well thought through at project inception, it is important to also include planning, budget, and dedicated human resources for the use of mHealth within the project.

  2. Having “free phones” may not be the best incentive for projects when they are not tied to specific performance indicators associated with bigger project goals.

  3. Excitement about mHealth is insufficient; there also needs to be genuine interest combined with strong planning and field testing, coupled with systematic follow-up from the mHealth provider. This aligns with what mHealth guest speaker Ray Brunsting told us in the course about the importance of a project preparation phase that regularly iterates and progressively builds what is needed so that the mobile mechanism works smoothly thereafter.

  4. Careful, regular, and frequent feedback is needed, especially when an mHealth program is in its initial phases.

But this experiment didn’t deter us from pursuing our desire to use mHealth and mainstream disability. We decided to partner with AMREF (France), which has tremendous experience in using mHealth. This project will start shortly and is going to use mHealth in the context of maternal and child health in Senegal. It will bring together the expertise of two different organisations for the benefit of mothers and children, through a specific project, planning, and budget, and through a disability lens.

All this is to say that using mobile phones to promote public health is not that straightforward. However, when we consider lessons learned and good practices from others, it tends to work better. So thanks so much to TechChange, all participants in the mHealth online course, as well as our great speakers and facilitators, for sharing all the mHealth wisdom.

Interested in learning more about mHealth pilot programs and successfully scaled projects across the world? Register now for our mHealth online course which runs from November 17 – December 12, 2014.

About Muriel Mac-Seing


Muriel Mac-Seing is an alumna of TechChange’s Spring 2014 mHealth: Mobile Phones for Public Health online course. For the past 12 years, Muriel has dedicated her work to community health development in Sub-Saharan Africa and South and South-East Asia, in the areas of HIV and AIDS, sexual and reproductive health, gender-based violence, and disability. Currently, she is the HIV and AIDS/Protection Technical Advisor to Handicap International Federation, supporting country missions and national programmes to include disability for universal access to HIV and AIDS and protection services for all. She co-chaired the HIV and Disability Task Group of the International Disability and Development Consortium (IDDC) from 2010 to 2012. Since May 2014, she has also been a member of the Human Rights Reference Group at the Global Fund to Fight AIDS, Tuberculosis and Malaria. Trained as a nurse, she served an underprivileged and multiethnic clientele in the Montréal region of Canada.

 

At TechChange, we’re always looking for ways to make online learning more interactive, engaging, and relevant for busy, global professionals interested in technology and social change. One way we do this is by bringing together our online TechChange community offline with hybrid learning. Specifically, we try to overlap the schedule of some of our online courses with industry events such as the recent M&E Tech Conference and annual mHealth Summit. We also arrange in-person meet-ups in various cities across the world including happy hours in Washington, DC and other meet-ups including most recently in Lusaka, Zambia.

Last month, we launched our very first round of TC111: Technology for Monitoring & Evaluation with a class of over 100 participants. Christopher Robert, CEO of Dobility Inc. and a Harvard adjunct lecturer, joined us as one of the top guest experts in the first week of the course while he was traveling in Zambia. To take full advantage of the course, some of our participants based in Zambia asked him if he would be willing to meet with them in Lusaka. So, three of these M&E tech course participants (Ladislas, William, and Mine) met Christopher and his colleagues that same day to continue the technology for M&E discussions from the online course in person.

Here’s what happened at the TechChange Tech for M&E meet-up in Lusaka:

Reuniting alumni from different communities
It turned out that Ladislas, William, and Mine had already known each other as alumni of the Global Health Corps (GHC) fellowship. According to Mine Metitiri, a Senior Research Associate at the Zambia Ministry of Health, “A number of Global Health Corps fellows are taking the TechChange Tech for M&E online class and we recommended Chris to be a speaker at our annual training at Yale. Hopefully it works out because he had a lot of great things to say that are relevant to our fields of work.“

Strengthening online connections and learning offline
TechChange alumni such as William Ngosa, who works at the Ministry of Health in Zambia, appreciated the chance to reunite with his GHC colleagues and to meet Christopher and his team members, Faizan and Meletis. “It was a privilege to meet one of the speakers in the online course to provide a meaningful and enriching learning experience,” said William.
Christopher Robert and his team really enjoyed meeting the Zambia-based course participants as well. “It was lucky that we had the chance to meet!” said Christopher. “These Tech for M&E course participants are doing some wonderful things with ICT for social good there in Zambia. It’s always inspiring to meet people doing good work!”

Sharing good news of a job offer for M&E consulting
One of the participants, Ladislas Hibusu, received an M&E consultant job offer after interviewing with Jhpiego while taking the Tech for M&E online course.

“At this M&E meetup in Lusaka, I mentioned that during the M&E course, I interviewed for a position at Jhpiego. I am happy to announce that I have been offered an M&E Consultant role, and thanks to the valuable insights from this course, I was able to apply the knowledge I learned. Although I have had limited experience applying much of my M&E theoretical work in the field, I am happy to say this Tech for M&E online course is addressing most of the challenges that I anticipate in my new role.” – Ladislas Hibusu

Everyone congratulated Ladislas, and Christopher Robert joined us for another live event the following week to continue the discussions with other participants in our course.

Several of TechChange’s online courses are designed to facilitate interactions like the one in Lusaka. Participants from all over the world are able to connect with like-minded professionals in the international development sector and continue discussions on specific topics. Watching live and recorded videos, completing different activities, and participating in ongoing discussions on an online forum combined with offline, in-person learning is really what enriches e-learning.

Interested in technology for M&E and want to connect with other M&E practitioners across the world? Register now to lock in early bird rates for our next round of our Technology for Monitoring & Evaluation online course which runs January 29 – February 20, 2015.