Champions and super users: the health workers behind BID’s success

Ona is partnering with PATH, the Zambian Ministry of Health, and local technology partner BlueCode to adapt OpenSRP to the vaccination workflows of health workers in Zambia’s Southern Province. We wrote earlier this year about the project, which supports the Better Immunization Data (BID) Initiative. We are excited to repost this article from Mali Kambandu, Communications Officer, BID Initiative Zambia, which sheds light on the impact the Zambia Electronic Immunization Registry (ZEIR) platform has had on health workers and what it means for them to use ZEIR in their interactions with clients. This article was originally posted on the BID Initiative blog:

[Image: The MCH nurses at Nkabika Clinic. Photo: PATH/Mali Kambandu]

At the heart of the BID Initiative are the health workers: men and women who are trained in data use interventions and who work hard to adopt them, proving that better data leads to better decisions and, ultimately, better health outcomes for children in Zambia.

“With the ZEIR I can deliver better care.”

 Continue reading Champions and super users: the health workers behind BID’s success...


New Feature: Merging Multiple Datasets Into One

Merging datasets is a new feature that lets you combine multiple datasets from different forms into a single dataset. When datasets are merged, data from two or more parent datasets is combined into a child merged dataset, which is updated automatically whenever the parent datasets change. Viewing data in a merged dataset works the same as in any other dataset: you can view the data in tables and maps, create graphs and charts, and integrate with Ona Apps.

Get started merging datasets

Suppose your organization collects information from refugee camps in countries such as Kenya, Tanzania and Rwanda. You have been collecting this information for months using similar, but different, forms to account for different timelines, program goals and local needs. Merging the datasets from the three countries lets you analyze the common data as a whole and compare country outcomes against one another.
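Conceptually, the merged dataset is the union of the submissions from the parent datasets, restricted to the questions the forms have in common. The pandas sketch below illustrates the idea with made-up country data and column names; in Ona the merge itself happens server-side, and the child dataset stays in sync with its parents automatically.

```python
# Conceptual sketch only: the country data and column names are invented.
import pandas as pd

# Hypothetical exports of the three similar-but-different country forms.
kenya = pd.DataFrame(
    {"camp": ["Dadaab"], "households": [1200], "kenya_only_question": ["yes"]}
)
tanzania = pd.DataFrame({"camp": ["Nyarugusu"], "households": [950]})
rwanda = pd.DataFrame({"camp": ["Mahama"], "households": [800]})

# Keep only the questions all three forms share, then stack the submissions.
common = sorted(set(kenya.columns) & set(tanzania.columns) & set(rwanda.columns))
merged = pd.concat(
    [df[common] for df in (kenya, tanzania, rwanda)],
    keys=["kenya", "tanzania", "rwanda"],
    names=["country", "row"],
)
print(merged)
```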

 Continue reading New Feature: Merging Multiple Datasets Into One...


Introduction To Data Scraping With Python

Last Thursday, I gave a talk at PyconKE 2017 titled “Introduction to Scraping using Python”. This was a beginner-level introduction covering three cool Python libraries.

In the talk, I demonstrated how to use these libraries to programmatically access the Kenya Power & Lighting Company’s website and automatically fetch a monthly power bill.
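The talk’s actual code is linked below; purely as an illustration of the pattern, here is a generic, hypothetical sketch using requests and BeautifulSoup. The URL, form fields, and CSS selector are placeholders, not the real structure of the KPLC site.

```python
# Generic scraping sketch; every URL, form field, and selector is a placeholder.
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com"  # stand-in for the utility's customer portal


def fetch_latest_bill(account_number: str) -> str:
    """Log in, request the bill page, and pull the amount out of the HTML."""
    with requests.Session() as session:
        # Many portals expect a form POST before they will show account pages.
        session.post(f"{BASE_URL}/login", data={"account": account_number})
        response = session.get(f"{BASE_URL}/bills/latest")
        response.raise_for_status()

        soup = BeautifulSoup(response.text, "html.parser")
        # The real selector would come from inspecting the page in a browser.
        amount = soup.select_one(".bill-amount")
        return amount.get_text(strip=True) if amount else "amount not found"


if __name__ == "__main__":
    print(fetch_latest_bill("123456"))
```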

Below is the full presentation; the accompanying code is on GitHub. Enjoy!


Dynamically Clean Shared Data with Filtered Datasets

A filtered dataset is a subset of submitted data that satisfies conditions set by the user. Filtered datasets are helpful when you need to share a dataset with other users but want to keep private any fields or records that are repetitive, irrelevant, or sensitive.

Suppose you carried out a baseline survey of households in several drought-stricken states, and you would like local analysts to access only the data from their own state. In this case, you could create multiple filtered datasets, one per state, each segmenting out that state's data, with no external tools needed.

If the data is sensitive, the filtered datasets can go into unique projects that are selectively shared with specific users. If the data is not sensitive and you just want to make the data easier to understand, all of the filtered datasets can go into the same project.
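Conceptually, each filtered dataset is the parent dataset with a row condition and a column restriction applied, kept up to date as new submissions arrive. Below is a minimal pandas sketch of that idea, with invented state and column names; in Ona the filter is defined once in the interface and applied server-side.

```python
# Conceptual sketch only: the survey data, states, and columns are invented.
import pandas as pd

survey = pd.DataFrame(
    {
        "state": ["Turkana", "Marsabit", "Turkana"],
        "household_size": [6, 4, 5],
        "phone_number": ["0700 000001", "0711 000002", "0722 000003"],  # sensitive
    }
)


def filtered_dataset(data: pd.DataFrame, state: str) -> pd.DataFrame:
    """Keep only one state's records and drop the sensitive column."""
    return data.loc[data["state"] == state].drop(columns=["phone_number"])


# The view a local analyst for "Turkana" would be given access to.
print(filtered_dataset(survey, "Turkana"))
```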

 Continue reading Dynamically Clean Shared Data with Filtered Datasets...


Streaming Ona Data with NiFi, Kafka, Druid, and Superset

A common need across all our projects and our partners’ projects is to build up-to-date indicators from stored data. We have built dashboards showing project progress and other stakeholder-relevant information for our malaria spraying project (mSpray), our drought response monitoring project in Somalia, and our electronic medical record system (OpenSRP). Currently we create indicators on an ad hoc basis, but we are in the process of building a unified pipeline to move data from heterogeneous systems into a data warehouse and build indicators on top of that data.

This need breaks down into the following minimal requirements:

  1. Store data
  2. Store queries relative to the data
  3. Retrieve the results of queries executed against the latest data
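In the pipeline described above, NiFi handles moving submissions into Kafka, but a minimal hand-rolled sketch of that hand-off helps show the shape of the data flow. The broker address, topic name, and submission fields below are placeholders; Druid would then ingest from the topic, and Superset would query Druid to serve the indicators.

```python
# Minimal sketch of publishing an Ona-style submission to Kafka.
# Broker, topic, and field names are placeholders, and in practice this
# step is performed by NiFi rather than hand-written code.
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

# A form submission roughly as it might arrive from a data collection form.
submission = {
    "form_id": "household_survey",
    "submission_time": "2017-08-01T10:00:00Z",
    "household_size": 5,
}

# Druid ingests from this topic; Superset queries Druid for the dashboards.
producer.send("ona-submissions", value=submission)
producer.flush()
```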

 Continue reading Streaming Ona Data with NiFi, Kafka, Druid, and Superset...