EasyGlucose is a cloud-powered, non-invasive, and cost-effective method of blood glucose monitoring for diabetic patients. Users first capture high-resolution images of their eye with their smartphone through an FDA-approved, low-cost iris imaging adapter. A patent-pending deep learning computer vision framework built on convolutional neural networks then analyzes iris morphological variation in the eye image to predict the patient’s blood glucose level. EasyGlucose is highly accurate, with an unprecedented error rate of 6.93%, significantly outperforming existing state-of-the-art non-invasive methods by over 30%. In addition, on the gold-standard Clarke Error Grid analysis, 100% of test predictions received the highest possible rating of “clinically accurate” (Zone A).
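To give a sense of the general shape of such a model (the actual patent-pending framework is not published here), the sketch below is a minimal TensorFlow/Keras convolutional regressor that maps an iris image to a single glucose estimate in mg/dL. The image resolution, layer sizes, and loss choice are illustrative assumptions, not EasyGlucose’s real architecture.

```python
# Minimal sketch (not the actual EasyGlucose model): a small CNN that
# regresses a single blood glucose value (mg/dL) from a cropped iris image.
# Input size, layer widths, and loss are placeholder assumptions.
import tensorflow as tf

def build_iris_glucose_model(input_shape=(224, 224, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # Stacked conv blocks extract iris texture/morphology features.
    for filters in (32, 64, 128):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    # Single linear output: the predicted blood glucose level in mg/dL.
    outputs = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs, outputs)
    # Mean absolute percentage error pairs naturally with quoting a "% error rate".
    model.compile(optimizer="adam", loss="mean_absolute_percentage_error")
    return model

model = build_iris_glucose_model()
model.summary()
```

Training such a regressor against reference meter readings and reporting mean absolute percentage error is one way a single figure like 6.93% could be quoted; the Clarke Error Grid zones would then be computed separately on held-out predictions.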

Since glucose levels are synced to the cloud, patients can easily see long-term glucose trends to optimize their insulin treatments, and parents receive automated alerts if their children’s glucose levels reach critical thresholds. In addition, because all the machine learning happens on-device, taking a reading requires no internet connection, increasing platform portability and facilitating deployment in low-income and rural areas. EasyGlucose also requires no maintenance, unlike the calibration and replacement of test strips and sensors required by current invasive methods. Ultimately, EasyGlucose provides a comfortable, user-friendly, non-invasive, and accurate way for diabetic patients to manage their blood sugar levels, without the pain and at a fraction of the cost of today’s methods.
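As an illustration of the alerting piece only (the real EasyGlucose cloud logic and thresholds are not described here), a minimal sketch of a critical-level check over synced readings might look like the following; the 70/180 mg/dL cutoffs and the notify callback are placeholder assumptions.

```python
# Illustrative sketch only: the actual alert logic and thresholds are not published.
# 70 and 180 mg/dL are commonly cited hypo-/hyperglycemia cutoffs used as placeholders.
LOW_MG_DL = 70
HIGH_MG_DL = 180

def check_reading(glucose_mg_dl: float, notify) -> None:
    """Alert a parent or caregiver if a newly synced reading is critical."""
    if glucose_mg_dl < LOW_MG_DL:
        notify(f"Low glucose alert: {glucose_mg_dl:.0f} mg/dL")
    elif glucose_mg_dl > HIGH_MG_DL:
        notify(f"High glucose alert: {glucose_mg_dl:.0f} mg/dL")

# Example with a stand-in notifier; a real app would send a push notification.
check_reading(62, notify=print)
```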

Tools used include the Python programming language and TensorFlow for deep learning. The mobile app was built for iOS with Xcode, Swift, and CoreML.
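Although the exact pipeline isn’t spelled out above, one plausible way these tools fit together is to train the network in TensorFlow and convert the saved model with coremltools so the Swift app can run it on-device through CoreML. The file names, input shape, and converter options below are assumptions for illustration only.

```python
# Hedged sketch of one possible TensorFlow -> Core ML hand-off for on-device
# inference in the iOS app; file names and converter options are assumptions.
import coremltools as ct
import tensorflow as tf

# Load a trained Keras model (e.g., the CNN sketched earlier; placeholder path).
keras_model = tf.keras.models.load_model("iris_glucose_model.h5")

# Convert to Core ML. An ImageType input lets the Swift app pass camera
# frames directly instead of raw multiarrays.
mlmodel = ct.convert(
    keras_model,
    inputs=[ct.ImageType(shape=(1, 224, 224, 3))],
    convert_to="neuralnetwork",  # keeps the classic .mlmodel format
)

# Save the model so it can be dropped into the Xcode project and loaded
# through CoreML (or the Vision framework) on the iPhone.
mlmodel.save("IrisGlucose.mlmodel")
```

On the iOS side, the converted model can then be added to the Xcode project and invoked from Swift through the class that CoreML generates for it.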

What inspired you (or your team)?

After I graduated from high school in the summer of 2018, I visited my grandparents in my home country of Taiwan. I was devastated to learn that my grandmother had been diagnosed with type II diabetes; her struggles with her condition inspired me to leverage my background in computer science to come up with a solution.

— More Background —
I started with a literature review and realized that most existing methods involved invasive physical or biological procedures, and that computer science was extremely underutilized.

In September 2018, I reached out to Dr. David Myung, a professor at Stanford, for mentorship. He helped me better understand ophthalmological (relating to the eye) concepts and provided me with the FDA-approved smartphone adapter called PaxosScope.

I worked on the project over the next 8 months, and in May 2019 I won first place in the Microsoft Imagine Cup competition (out of 30,000+ students), earning a $165K total prize and mentoring from the Microsoft CEO.

Today, I’m wrapping up a provisional patent for the deep learning technology and am collecting additional data through an IRB-approved clinical study at Stanford with collaborators there. I’m incredibly excited to continue developing this technology and to help millions of diabetic patients around the world, just like my own grandmother, manage their condition better.