Skills for Success Afghanistan

Kabul Medical University, by Ben Barber (USAID) [Public domain], via Wikimedia Commons
Today there are still more than 4 million Afghan refugees outside of Afghanistan and over 1.5 million internally displaced persons within the country. Afghanistan has seen over 15 years of solid progress: GDP is more than 4.5 times larger than it was in 2001, and school enrollment has increased from less than 1 million to over 9 million children. Yet 40% of those who want a job remain unemployed.

The University Skills and Workforce Development Program (USWDP) –  funded by USAID and implemented by FHI360 – aims to bridge the gap between universities and the needs of the labor market. The in-person soft skills courses created through the program are oversubscribed. To meet student demand, FHI360 decided to transition to a blended model with students completing mobile course modules on their phones before attending in-person classes.

Ustad Mobile is thrilled that USWDP chose us to create an open source application that enables students to experience interactive video-based simulations that work on smartphones, feature phones, and PCs. When Ustad Mobile conducted focus groups with university students in Kabul (December 2016), between 30% and 50% did not own smartphones. It is therefore essential to support feature phones and PCs (e.g. shared-use PCs in libraries) to avoid further disadvantaging lower-income students. As less than 20% of the students had their own mobile data packs, the app must also function offline.

The app content is based on the existing in-person soft skills course materials. Each module will contain a short introductory video which explains the learning objectives and relevance of the module to the students. Summative and formative assessment will be conducted on a rotating basis as the learners progress through the modules choosing how to respond to simulated situations. These elements of the module design – feedback, challenge and practice at the right level and formative assessment – have been found to be particularly impactful on learning (Hattie, 2009).*
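The rotating formative assessment described above can be sketched in code. This is a minimal illustrative model (not the actual USWDP module format): each decision point in a simulation presents choices, returns immediate formative feedback, and contributes a score for summative reporting. All names here are hypothetical.

```python
# Sketch of a decision point in a video-based simulation: the learner
# picks a response and receives immediate formative feedback plus a
# score that feeds into summative assessment later.
from dataclasses import dataclass

@dataclass
class Choice:
    text: str
    feedback: str   # formative feedback shown immediately
    score: int      # contributes to the summative total

@dataclass
class DecisionPoint:
    prompt: str
    choices: list

def answer(point: DecisionPoint, index: int):
    """Return the feedback and score for the chosen response."""
    choice = point.choices[index]
    return choice.feedback, choice.score

interview = DecisionPoint(
    prompt="The interviewer asks about a gap in your CV. You...",
    choices=[
        Choice("Change the subject",
               "Avoiding the question looks evasive.", 0),
        Choice("Explain what you learned during that time",
               "Good: framing the gap positively shows self-awareness.", 1),
    ],
)

feedback, score = answer(interview, 1)
```

The point of the sketch is that feedback is attached to each choice, so learners get it at the moment of practice rather than only at the end of the module.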

Using video in the mobile application will enhance the learning experience, but technical considerations are important to avoid relying on expensive and unreliable internet services. Our architecture enables the use of text, images, audio and video without requiring any connectivity. Usage data – such as time spent on each module and quiz scores – is logged to the device offline and automatically uploaded securely to a cloud server when a connection is available. The app will even load modules and send usage data offline, using automatically managed Wi-Fi Direct peer-to-peer connections on smartphones and PCs and Bluetooth on feature phones.
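The offline-first logging pattern described above can be illustrated with a short sketch. This is not the actual Ustad Mobile implementation; the class and function names are assumptions. The idea: events are appended to a local queue, and a flush attempts upload, keeping any events that fail so nothing is lost while offline.

```python
# Sketch of offline-first usage logging: events (module time, quiz
# scores) queue locally and upload only when a connection is available.
import json

class UsageLog:
    def __init__(self):
        self.pending = []   # events not yet uploaded, persisted on-device

    def record(self, event: dict):
        self.pending.append(event)

    def flush(self, upload) -> int:
        """Try to upload pending events; keep any that fail for retry."""
        remaining, sent = [], 0
        for event in self.pending:
            try:
                upload(json.dumps(event))
                sent += 1
            except OSError:          # offline: keep the event queued
                remaining.append(event)
        self.pending = remaining
        return sent

log = UsageLog()
log.record({"module": "interviews-1", "seconds": 420, "quiz_score": 0.8})

def offline_upload(payload):         # simulate no connectivity
    raise OSError("no connection")

log.flush(offline_upload)            # nothing lost: event stays queued
sent = log.flush(lambda payload: None)   # connection restored
```

Because the queue survives failed uploads, the same loop works whether the connection drops mid-sync or never existed in the first place.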

  • Content listing - Android

Afghanistan is an ethnically and linguistically diverse country. It is therefore critical that the module content reflects this. For this reason, the modules will be available in Dari, Pashto, and English. We anticipate that some students who may be preparing for interviews in or working in English may prefer to use English irrespective of their native language.

Early user trials will be conducted to ensure the application is easy to use both for students and USWDP staff who will be helping students to install the application on their devices. We are soliciting feedback from Afghan men and women to ensure this application serves both genders equally well. Our media partner DNA Media Production is casting both men and women of different ethnicities to provide positive role models for all Afghan students.

While the mobile modules will serve as a passport to attending USWDP classes for eligible university students and recent graduates, the app and modules will be freely available for everyone to download via app stores and a website. Once the app is downloaded on one phone it can be shared with and downloaded by others without Internet. We believe that the model is applicable to a wide variety of education settings to increase access and decrease the costs of serving more students without compromising learning outcomes.

* While Hattie collected data which included, but was not limited to, influences on achievement in tertiary education, his synthesis of 1,200 meta-analyses is the largest collection to date of evidence-based research focusing on factors that influence learning (Hattie, 2015).

This post was co-authored by Mike Dawson and Benita Rowe.

Hattie, J. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (1st ed.). Oxon: Routledge.

Hattie, J. (2011). Feedback in schools. In Sutton, R., Hornsey, M.J., & Douglas, K.M. (Eds.), Feedback: The communication of praise, criticism, and advice. New York: Peter Lang Publishing.

Hattie, J. (2015). The applicability of Visible Learning to higher education. Scholarship of Teaching and Learning in Psychology, 1(1), 79–91.

What would a Learning Management System look like if it was designed for offline low resource areas?

Sustainable Development Goal #4 is to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.  There are many efforts that make some of what you can do with an online learning management system (LMS) like Blackboard or Moodle work offline.  You can use content offline, but you can't enrol students, for example.  So what would an LMS look like if it was designed with offline, low-resource areas in mind from the start?

You could do attendance, exams, homework, gradebooks etc. on paper – then snap it with a shared phone

If you spend $50 on technology, that's $50 you don't get to spend on teacher training, school buildings or anything else.  Schools in low-income countries often don't have computers, libraries kitted out with a bunch of tablets, Internet or even power.  So instead of getting every teacher (or even student) a new device, why not fill in sheets on paper and then snap them?  That way decision makers could get accurate information and there's no need to go buying and maintaining a bunch of devices.

You could do everything offline and it’d sync peer 2 peer automagically

It’s not as simple as saying there’s offline and online.  There are plenty of places with a mobile Internet signal – but that signal may or may not be reliable.  And if you need to register hundreds of students, do you want to have people waiting because the Internet just died?  Do you want the principal to have to wait for reports because the net choked, even though their phone has thousands of times more computing power than the Apollo moon mission?  No – everything – really everything – must work offline: not just viewing resources, but registering students, recording grades and viewing reports should work offline.  Devices would talk with each other locally over WiFi and Bluetooth.  Complex behind the scenes, but necessary.
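One simple way (a sketch, not the actual implementation) to make peer-to-peer sync tractable: treat each device's data as an append-only set of records keyed by a unique id. Records are never edited in place, so syncing two devices is just taking the union of both sets – and the order in which devices meet doesn't matter.

```python
# Sketch of append-only peer-to-peer sync: after a sync, both devices
# hold the union of their records. Record ids are illustrative.
def sync(device_a: dict, device_b: dict) -> None:
    """Exchange records so both devices end with the union of both sets."""
    merged = {**device_a, **device_b}
    device_a.clear(); device_a.update(merged)
    device_b.clear(); device_b.update(merged)

# The teacher's phone has an enrolment; the principal's has a grade.
teacher = {"enrol:amina": {"student": "Amina"}}
principal = {"grade:amina:math": {"score": 87}}
sync(teacher, principal)   # both now hold both records
```

Real conflict handling (e.g. the same grade corrected on two devices) needs more than a union – timestamps or versions per record – but the append-only core is what lets sync happen in any order without a server.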

You’d verify the information coming in

Unfortunately almost every study finds that the education space in low-income areas has accountability problems.  Teacher absenteeism costs $1.5 billion/year in India alone, with 23.6% of teachers absent.  In difficult circumstances there are incentives to game the system; and generally, right now, it's pretty easy to game the system.  That punishes taxpayers and the majority of teachers who do their job (and get paid less).  So why not use mobile phones for random verification with location-stamped photos?

You’d be able to deliver content to almost any device – even if it’s not a smartphone

Not everyone has a smartphone – even in IT-savvy India, less than 30% of people have a smartphone.  Why should we wait for everyone to get a smartphone when they need education content now?  Feature phones – of which there are 3 billion with Java Micro Edition enabled – can be used for content: they have about the same computing power as a 1997 computer.  Why not use those feature phones people already have for what they're capable of?

You’d be able to use existing open standards

Just because you want to use something offline doesn’t mean you want to throw out everything else.  In fact, you might need to connect up different pieces of the puzzle.  So it’s best to use available open standards like the Experience API for recording learning experiences and the Open Publication Distribution System for content listings.
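As a quick illustration of why these standards help: an OPDS content listing is just an Atom XML feed, so any device can produce or consume one with ordinary XML tooling. The catalogue below is a made-up minimal example, parsed with Python's standard library.

```python
# Minimal sketch of an OPDS (Atom-based) catalogue entry for a content
# listing, built and parsed with the standard library. The feed content
# is illustrative.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

feed_xml = f"""<feed xmlns="{ATOM}">
  <title>Course Catalogue</title>
  <entry>
    <title>Soft Skills Module 1</title>
    <id>urn:uuid:0001</id>
    <link rel="http://opds-spec.org/acquisition"
          href="module1.epub" type="application/epub+zip"/>
  </entry>
</feed>"""

root = ET.fromstring(feed_xml)
titles = [entry.findtext(f"{{{ATOM}}}title")
          for entry in root.findall(f"{{{ATOM}}}entry")]
```

Because it's a plain Atom feed, the same listing works whether it's served from a cloud server or from a peer device on a local WiFi network.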

We didn’t find that yet… so we’re building it (and open sourcing it)

With over 9 years of experience in education technology in some of the toughest environments (including 7.5 years in Afghanistan), we care about empowering educators to make quality learning a reality everywhere.  So we’re building that… we’re putting together what we’ve learned over the years – and we’re open sourcing it.

Got another idea about what an LMS designed for offline, low-resource areas would look like? Let us know in the comments!



Assessing an open source project

It’s hip and trendy to declare a project “open source” – it means something about sharing, freely available, can be used by anyone, right? Well most of the time that’s the case – but I’ve seen a definite uptick in false declarations. I’m asked questions about selecting open source software quite a bit – so here’s a guide suitable even for the non-technical folks.

1. Find the code

This isn’t as scary as it sounds.  Open source projects have to post the software’s source code online; otherwise the label “open source” is being used fraudulently.  Look around the site for words like “Community” or “Open Source“.  There should be a link to the source code itself – this could be called “GitHub” (a community site commonly used for this purpose), Git or SVN (systems commonly used to manage code), or just “Source Code“.  You should find something like this:

What the open source link looks like on Ustad Mobile, and the open source link on Odoo

You should then find something that looks like a bunch of files (the source code) – like this:


If you can’t find that, then someone either has a very badly designed site or, more likely, is lying to try and get people to adopt their software.  Also check when it was last modified – recent and frequent updates are a good sign, but some software which simply meets all its requirements can hang around for a while without modifications.  In that case there are normally other signs that it’s used by a lot of people – e.g. high download counters, active forums, something like that.

2. Try using it

It should be possible to try the software out. You should not have to compile source code to use open source software; it should be easy to try through an online demo or by installing it on your computer or phone.  If they say something like ’email us’, then it’s obviously not so free and open to be used.  Look for a Demo or Download link.  If there’s no link that you can use, perhaps the software is technically open source but its authors obviously don’t really want people to freely use it.

3. Check who else is using it

Perhaps the website explicitly tells you who is using it, or perhaps there are active forums.  If you’re not sure, post your own question about the state of the community on an email list or forum and see if you get a reply.  Some new projects might not have that many users and might be looking for people to join in; there’s always a risk there, though it can be a much lower risk than making something of your own from scratch.


Now you have a report card

Now you should be able to see if the project is really open source, and if you can actually use it.  A project that fails the first two tests should be avoided, just like a second-hand car you’re told works great with a perfect history – until you find out it’s been almost written off in accidents twice.  It’s definitely better to see a wide community of users.  If you’re depending on a piece of software, you might want to get someone qualified to check into the quality of the project by looking at the testing procedures used, etc.

Mobile Learning Data: It’s a tossed salad (MLW 2014 Reflection)

Mobile Learning: Healthy, tastes good, but now you got so many apps on your plate… all working independently… how do you bring all this data together? (Image courtesy of Wikipedia)

At Mobile Learning Week I saw presenters from all over the world describing how they are picking and choosing applications to deploy mobile learning like they would pick up good healthy stuff from the salad bar. We get App A for one thing, App B for another, and then ask students to reflect and share it with App C. But if now, at the end, I want to figure out the total nutritional value on my plate, I am out of luck. Every tool has its own portal, its own stats, and trying to combine them is about as much fun as asking a fundamentalist vegan to work their way around the chicken.

There were a few projects that were not collecting usage data logs and based their research outcomes on self-reporting.  This is, in my opinion, totally unacceptable when using mobile devices that are built to record, process and transmit data. Self-reporting is fatally flawed; to expect bottom-of-the-pyramid beneficiaries to say anything negative after being given a new mobile phone, and likely transportation money for attending workshops, is insanity.  We found this in Ustad Mobile projects: some students would tell us how great they found the program even though the usage logs clearly showed they weren’t actually using it.  Such insights are meaningless without data to back them up.  We can get informed consent from users for this, just as handing in a piece of homework represents informed consent that the teacher will judge it and possibly share it with the principal.

It is essential that we standardize around sensible technical standards: and that standard is the Experience API (aka Tin Can).  Putting learning objectives first and technology second is correct, but you need to make some sensible technology standards choices, otherwise one is very soon going to get a massive indigestion problem.
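To make the standard concrete: the Experience API records learning as "actor verb object" statements in JSON. The statement below is a minimal example (the actor, object id and URLs are illustrative, not from a real deployment); the "completed" verb id is one of the standard ADL verbs.

```python
# Minimal sketch of an Experience API (xAPI) statement: actor, verb,
# object, plus an optional result. Ids and names are illustrative.
import json

statement = {
    "actor": {"mbox": "mailto:student@example.org", "name": "Student"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.org/courses/soft-skills-1",
               "definition": {"name": {"en-US": "Soft Skills Module 1"}}},
    "result": {"score": {"scaled": 0.85}, "completion": True},
}
payload = json.dumps(statement)   # what gets sent to a Learning Record Store
```

Because every tool emits the same statement shape, a single Learning Record Store can combine usage data from App A, App B and App C – which is exactly the tossed-salad problem above.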

UNESCO is to be commended for getting that many serious Mobile Learning practitioners together in one place. That particular place being Paris might have made it that much easier to attract them, but ultimately people running everything from small experiments to some of the biggest school districts in the United States came together in one place.

Photo Credit: mEducation Alliance

I was greatly encouraged by the attendance at the session we ran as a practical lab. It was: bring your laptop, and let’s create a course. Trying to get the authoring environment installed on all the laptops was the trickiest part given the connectivity situation. I also see how we really need to focus on getting down to one-click, one-button publishing – make that course appear on my mobile. No folders, copy/paste, find-it-and-move-it-there stuff is acceptable these days.

We at Ustad Mobile, having worked in Afghanistan and the like, are very much aware that the world is not always connected at super high speed. UNESCO, together with the IBIS hotel down the road, made sure that everyone else also understood this: my connectivity situation had been better even in remote Afghan districts than in the middle of Paris.

There is a definite authorware problem for the mobile learning space. Creating content by purchasing every student, teacher and other user a $1,500-per-year subscription to licensed tools like Captivate and Articulate Storyline is clearly not an option. Educators are not looking for a one-way create-and-consume model; they are looking for ways to enable creation and sharing as well.  That is where the beauty of eXeLearning comes into its own: 10 years of work on a great open source authoring platform from Spain, New Zealand, Afghanistan, Dubai and more…

Anyhow… it is one thing to criticize; another to try and do something about the problem at hand.  With that thought in mind it’s back to design and code to make publishing courses to mobile easier.

3 billion capable feature phones: why wait for smartphones?

Rise of the smartphones! Smartphones are in more and more pockets; everyone will be able to share cat videos and argue with strangers (maybe even learn something). However, almost all of what we hope will come from the “smartphone revolution” is actually already possible with the feature phones that people already have in their pockets.  3 billion people already have highly capable Java-enabled feature phones, versus 1 billion smartphones.

There are three types of phones out there, really:

Dumb / basic phones: capable of only SMS, voice calls and USSD (that’s the *# stuff for checking credit and what not).  They’re getting seriously cheap – $10-$30.

Feature phones: these $30+ devices are mostly 1997-ish computers that happen to have small screens and the ability to make phone calls and send text messages.  They more often than not have an SD card slot – and a $2 SD card has the same space that my $1,500 Fujitsu laptop had in 1997.  They can play audio and video files.  And that’s important in the middle of an isolated village – it’s an affordable source of entertainment.  3 billion of them have Java 2 Micro Edition, a version of the Java programming language specially designed for small, limited-capacity devices.  It means – as the Internet cats would say – you can haz appz with dat.  You can do video, audio, quizzes, saving files, mini games, all of it. Not all of them have Java 2 Micro Edition – which is a shame, because it makes so much more possible. Some of the lower-end models can do video but not apps.  Some of them run ‘BREW’ for apps, which is about as much fun to develop for as it is to develop another hole in your head.

Smartphones: We all know what these are – Android, iOS, Tizen, Firefox OS and the like.  They’re getting cheaper.  There’s just one problem with the $60-ish smartphones for mobile learning in development: they’re typically awful.  Unlike the indestructible cheap feature phones that run for days on a single charge, cheap smartphones have fast-dying batteries and themselves die pretty fast. They are improving and coming down in cost, but if you were on a budget you’d probably rather spend $35 once on a feature phone (which maybe has a smaller screen for all those cat videos) than risk needing to spend money again, plus charging it more frequently, which is often expensive in off-grid areas.  Most smartphones support, to some extent, the all-important HTML5 standard.  It’s certainly much nicer for a developer to work with than Java 2 Micro Edition: much less picky, much richer tools, more power to work with.

Why is it important in mobile learning?

© [Scaling Mobile for Development: A developing world opportunity, 2013]
Smartphones are only 22% of phones in the Middle East and Africa.  So if we want to make learning accessible to more people, the feature phone is our friend for the next 5 years at least.  That’s why Ustad Mobile was built from the ground up to support feature phones, and now all smartphone platforms as well.

Known and unknown in mLearning


It is the question that will not die and doesn’t evolve as it should: can technology improve learning outcomes?  We have known for decades – since way before the proliferation of mobile phones – that certain things can help in the learning process: computer-aided instruction, Sesame Street, reminders, feedback. The right questions are: how? How can we measure that? And most important: how much does it cost?

Known knowns

Before lamenting the state of evidence on the role of technology in improving education, I wish to lament the state of evidence on improving education in developing countries more generally.  That is not without reason; these are difficult-to-reach places. In some remote areas we don’t even know if the school building physically exists, to the extent that we need GPS-enabled smartphones, drones or satellite imagery just to be sure the school is actually there.  Knowing or measuring what students are learning – assuming they actually come to the building – is a further challenge.  So: a known known is that we often don’t know what students and teachers know or don’t know, when we don’t so much as know that classes are running.

What does mobile learning really include? Using apps is a form of computer-aided instruction.  There are serious games, so game-based learning research is hardly out the window.  Reminding learners about something via SMS is a reminder nevertheless.  We even have research from Matthew Kam’s MILLEE project showing that this research still holds true when applied to games for learning on mobile phones: there were statistically significant improvements in learning outcomes.

Patrick McEwan just published a meta-analysis of 76 locatable RCTs on education improvements in developing countries (up from a total of 9 available in 2003) – even with that dramatic rise, only 6 or 7 RCT experiments in developing countries are published per year at the peak.  The number one program type that achieved statistically significant improvements in student learning: “computers or technology” – just ahead of teacher training. RCTs are not cheap exercises; insisting on RCTs before more basic experimenting creates a no-innovation chicken-and-egg loop.


We know we know that students can know more faster with technology enhanced learning and we know this is applicable to small portable computers (also known as mobile phones).

Known unknowns

A key recommendation of the report: report cost data. 56% of these interventions had no data on cost; that is clearly a problem. We still don’t have sufficient comparable cost data on what works to use for decision making on opportunity costs. Right now it looks like computers or technology are relatively expensive, so let’s rework the question:

The question is not “does it work?” The question is: how much does any of this cost and then how can we do that cheaper and sustainably?

We don’t know how to build capacity to create mobile learning applications. That’s what we’re working on with Ustad Mobile. If you look in the app store, you won’t find any way to make your own materials for low-cost devices without learning how to code BREW or Java 2 Micro Edition (and re-invent the wheels of audio, video, slides, quizzes and mini games). Given that we know cost is a problem, spending time – and therefore money – re-inventing this is a problem.

We know that we don’t know enough about the cost of technology for people to learn to know stuff.

Mike Trucano of the World Bank has a very good answer that addresses this: the best technology is the one you already have, know how to use, and can afford. I’d say we really have to do better with the devices we have: there’s a whole lot that needs more than the 160 characters of an SMS message to teach.  But these $30+ Java-enabled feature phones, of which there are 3 billion out there now, can do audio, video and small apps. Ustad Mobile will enable all of those to provide a full computer-aided learning experience.

Unknown unknowns…

Ehhh… hmm…  Where’s my hoverboard already?

How SciDev.Net article fails in mobile learning analysis

In this article on SciDev.Net, titled “How teachers in Africa are being failed by Mobile Learning”, Niall Winters asserts that:

  1. Teachers are being excluded en masse from mobile learning projects with a whole sub-heading called “excluding teachers” (though the examples cited are irrelevant to mobile learning such as a project that fixes computers inside brick walls)
  2. “We know that many mobile learning projects are funded by sizeable donations made under corporate social responsibility budget” (without examples)
  3. “The idea that techno-centrism or even solely content-based solutions can address important educational challenges by themselves must be dropped. Research shows they can’t” (by referencing a blog post the author himself wrote)

I agree that these are mistakes that have been seen, and no one is arguing that what is being cited is good practice; but the article is extremely flawed and unbalanced.  I’ve been working with education technology in Afghanistan for 6 years. The idea that you need to involve teachers (and maybe even students and parents to some extent) in an education solution is no more novel than the idea that you need to speak with astronauts when designing a space suit, or with company managers when designing business analytics software.

The first example cited in the article:

“A good example of this is how the technology community has openly welcomed 2013 TED Prize winner Sugata Mitra’s work on learning through self-instruction and peer-shared knowledge, even though his approach to achieving this is highly contested among educational researchers and practitioners.”

The author’s first and apparently prime example when discussing a failure of mobile learning for teachers doesn’t seem … um… very mobile. It is a system that involves a computer physically locked inside a fixed brick wall. That someone won a TED prize does not mean the “technology community” has overwhelmingly endorsed it. A computer lab set in brick walls to attempt to bring about peer-based learning is about as relevant to assessing whether mobile technology can help students and teachers as it is mobile.

The author fails to consider relevant examples like Eneza Education in Kenya, where a teacher-led company makes a system that helps around 100,000 pupils do test preparation by SMS, or our own Ustad Mobile in Afghanistan, where we built a system that lets curriculum designers put together audio, video, quizzes and games for learners to use outside the classroom precisely because the teachers don’t have enough time in class. We can then also rapidly help teachers understand which pupils need further help. These models were developed after consultation with education stakeholders (not excluding teachers).  Matthew Kam’s research with MILLEE shows that children with a game on a low-cost feature phone achieved statistically significant improvements in test scores.

The author also fails to consider the informal learning environment outside the classroom – like the demand for learning English and the success of programs like BBC Janala, where users call in on a subsidized phone number to cheaply access English language lessons (working together with television and radio programs).

The author asserts:

“Third, learn from the One Laptop per Child programme. Its uptake in Sub-Saharan Africa was generally judged to have failed because of a lack of integration with education ministries.”

This is asserted, interestingly, without any reference. People more familiar with the programme might wonder if the lack of uptake had something to do with the cost being around $400 per laptop per child over its life… The argument that every child should have a laptop is incomparable to the value proposition of using mobile phones people already have in the family home for supplementary learning.

And finally:

“The idea that techno-centrism or even solely content-based solutions can address important educational challenges by themselves must be dropped. Research shows they can’t [5]”

This is quite a definitive statement – let’s look at the source of the certainty:

[5] Winters, N. Why mobile learning on its own won’t solve the access problem (LIDC blog, 13 November 2012)

If one were to reference Wikipedia, there would be only the suspicion that you yourself controlled the content; here the article references a blog post the author himself wrote, so we have certainty that is the case.

I would agree with the author that teachers in Africa (and throughout the developing world) have a difficult life and there is no technological magic bullet; the article however ignores how teachers and students can and already do in many examples around the world both benefit from well designed mobile technology.

In summary, the article does not present any coherent argument or evidence that teachers are being failed by mobile learning. In mobile learning, as with any new technology, there will be instances of failure and instances of success. Articles should consider relevant examples, and real alternatives (technological or not) to those examples, and should examine what’s behind success or failure in order for their recommendations or opinions to be of any use.