An issue that comes up time and again in the technology-for-development space is whether to focus on an application's reach or on the richness of the services it provides. In the next couple of posts I'm going to look at how this debate is playing out in mHealth.
The X Prize Foundation, which helped spark a commercial space race, is going to offer a $10 million incentive to the first team that can replicate the diagnostic ability of live physicians with an artificial intelligence diagnostics tool on a phone. The technique would likely involve having a patient answer questions while the phone analyses photos taken of any symptoms.
One potential technique was recently developed by Applied Nanodetectors. Their mobile-based breath analyzer contains sensors that can detect the composition of gases in the user's breath. It then goes one step further by comparing those results with the known characteristics of diseases. The company says it can currently identify asthma, diabetes and lung cancer, as well as less serious conditions like halitosis (bad breath).
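The idea of comparing a breath sample against known disease characteristics can be sketched as a simple nearest-signature match. The gas names, concentrations, and distance metric below are purely illustrative assumptions, not Applied Nanodetectors' actual markers or method:

```python
import math

# Hypothetical gas signatures (relative concentrations) -- illustrative
# placeholders, not real clinical reference values.
SIGNATURES = {
    "asthma":    {"NO": 0.8, "CO2": 0.4},
    "diabetes":  {"acetone": 0.9, "CO2": 0.3},
    "halitosis": {"H2S": 0.7, "CH3SH": 0.6},
}

def closest_condition(sample):
    """Match a breath sample (gas -> concentration) to the nearest
    known signature by Euclidean distance over the union of gases."""
    def distance(sig):
        gases = set(sig) | set(sample)
        return math.sqrt(sum(
            (sig.get(g, 0.0) - sample.get(g, 0.0)) ** 2 for g in gases
        ))
    return min(SIGNATURES, key=lambda name: distance(SIGNATURES[name]))
```

A sample dominated by acetone, for instance, would land closest to the diabetes signature, reflecting the well-known association between acetone in breath and diabetic ketosis.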
While the smartphone physician or breath analysis may be most useful in situations where a health worker has access to the phone, these "richer" applications will also exploit the "glued to the hip" nature of modern mobile users. HeartToGo is one example of this. The application tackles cardiovascular disease, the world's leading cause of death, by constantly monitoring a user's cardiac health with an ECG. Not only does the phone collect the information, it also analyses the data for abnormalities. It will even automatically alert emergency services in the event of a heart attack.
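To make the collect-analyse-alert loop concrete, here is a minimal sketch of a continuous monitor that flags readings far outside a rolling baseline. The class name, thresholds, and use of simple heart-rate values (rather than a raw ECG waveform) are all assumptions for illustration, not HeartToGo's actual design:

```python
from collections import deque

class CardiacMonitor:
    """Toy continuous monitor: keeps a rolling window of beats-per-minute
    readings and calls `alert` when a reading deviates sharply from the
    recent baseline. Thresholds are illustrative only."""

    def __init__(self, window=30, z_threshold=3.0, alert=print):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold
        self.alert = alert  # e.g. a callback that notifies emergency services

    def record(self, bpm):
        if len(self.readings) >= 10:  # wait for a baseline before judging
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5 or 1.0  # avoid dividing by zero on flat data
            if abs(bpm - mean) / std > self.z_threshold:
                self.alert(f"Abnormal reading: {bpm} bpm (baseline {mean:.0f})")
        self.readings.append(bpm)
```

In a real deployment the `alert` callback would be the interesting part: it could range from a local notification up to the automatic emergency-services call the article describes.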
What makes these examples particularly compelling is not simply their portability, but their ability to generate continuous, longitudinal streams of data rather than one-off clinical snapshots. When a device moves from occasional measurement to persistent monitoring, the conversation shifts from reactive diagnosis to proactive pattern detection.
Artificial intelligence systems trained on large datasets can begin identifying subtle deviations from baseline (micro-changes in breathing patterns, cardiac rhythms, speech cadence, or motor activity) that might otherwise go unnoticed in traditional appointment-based care.
Over time, this richness of data allows algorithms to distinguish between normal variability and clinically meaningful trends, refining alerts and reducing false positives while offering earlier signals of deterioration. The real promise lies in pairing reach with depth: tools that are widely accessible but also capable of layered analysis that grows more accurate the longer they are used.
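One simple way to separate normal variability from a clinically meaningful trend is to ignore isolated outliers and only flag deviations that persist across consecutive samples. The function below is a hedged sketch of that idea; the tolerance and run-length parameters are arbitrary placeholders, not validated clinical values:

```python
def sustained_deviation(readings, baseline, tolerance=10, run_length=3):
    """Return the start indices of runs where `readings` stay outside
    baseline +/- tolerance for at least `run_length` consecutive samples.

    A lone outlier is treated as noise (reducing false positives);
    only sustained departures from baseline are reported."""
    flagged, run = [], 0
    for i, value in enumerate(readings):
        if abs(value - baseline) > tolerance:
            run += 1
            if run == run_length:
                flagged.append(i - run_length + 1)  # start of the run
        else:
            run = 0
    return flagged
```

With a longer usage history, the `baseline` itself could be learned per user rather than fixed, which is the sense in which these tools "grow more accurate the longer they are used."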
This approach becomes especially relevant in progressive neurological conditions, where understanding change over time is central to care planning and therapeutic decision-making. In diseases such as amyotrophic lateral sclerosis, where functional decline follows a recognizable yet individually variable course, structured tracking aligned with the established course of ALS progression can help contextualize motor symptoms, respiratory shifts, and speech changes within broader progression patterns. By mapping patient-reported outcomes and sensor-derived metrics against known stages of the disease, AI-driven systems could assist clinicians in anticipating care needs, adjusting interventions, and supporting families through informed forecasting rather than uncertainty.
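As a toy illustration of "informed forecasting", the sketch below fits a straight line to a series of functional-scale scores over time and projects when the fitted decline would cross a given threshold. The scale, values, and linear model are assumptions for illustration only; real ALS progression is individually variable and would not be modelled this simply:

```python
def project_threshold_crossing(observations, threshold):
    """Given (day, score) pairs from a declining functional scale,
    fit an ordinary least-squares line and return the projected day
    at which the score crosses `threshold` (None if no decline)."""
    n = len(observations)
    mean_t = sum(t for t, _ in observations) / n
    mean_s = sum(s for _, s in observations) / n
    cov = sum((t - mean_t) * (s - mean_s) for t, s in observations)
    var = sum((t - mean_t) ** 2 for t, _ in observations)
    slope = cov / var
    if slope >= 0:
        return None  # score is stable or improving; nothing to project
    intercept = mean_s - slope * mean_t
    return (threshold - intercept) / slope  # day the fitted line hits threshold
```

Even a crude projection like this shows the shape of the idea: turning scattered assessments into an anticipatory signal that care teams can plan around.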
In this way, the same mHealth principles driving innovation in acute diagnostics may also reshape how chronic neurodegenerative conditions are monitored—moving from isolated assessments to data-rich narratives that reflect the lived trajectory of disease progression.
I’m excited to explore more examples in the weeks and months to come. Stay tuned and feel free to share any mHealth case studies that you come across.


