Colorectal, breast, and skin cancers are among the most common and deadly diseases in the United States, according to the American Cancer Society. Yet despite the pressing need for a fast, accurate, and data-driven approach to diagnosing and prognosing cancer, no such solution currently exists.

We created PocketOnco, a mobile app built with Core ML and Xcode that uses multilayer convolutional neural networks (CNNs) to automatically diagnose and prognose colorectal, breast, and skin cancer through tumor feature segmentation and prediction within seconds. Users either import a histopathological tissue image or take an external dermoscopic picture, then crop to the region of interest. Leveraging distinctive phenotypic features such as nuclear pleomorphism, glandular and tubule formation, mitotic activity, and molecular subtype, the CNNs were trained on a dataset of over 5,000 images acquired from the University of Porto, University Hospitals Coventry and Warwickshire, and the International Skin Imaging Collaboration (ISIC). The models achieve 100% validation accuracy for all cancers, identification/diagnosis accuracies of 96%, 78%, and 75%, and prognosis accuracies of 76%, 97%, and 80% for skin, colon, and breast cancer, respectively. Following prognosis, the app provides potential treatments (chemotherapy, radiation, immunotherapy, targeted therapy, etc.) and clinical trials based on the user's location, with data retrieved from the U.S. National Library of Medicine.
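As a minimal illustration of the on-device inference step, the Swift sketch below shows how a bundled Core ML image classifier can be run through Apple's Vision framework. The model class name (SkinCancerClassifier), the center-crop setting, and the completion handler are assumptions for illustration, not PocketOnco's actual code.

```swift
import UIKit
import CoreML
import Vision

// Minimal sketch: run a bundled Core ML image classifier through Vision.
// `SkinCancerClassifier` is a hypothetical auto-generated model class;
// PocketOnco's real model names, labels, and thresholds are not shown here.
func classify(image: UIImage, completion: @escaping (String, Float) -> Void) {
    guard
        let cgImage = image.cgImage,
        let coreMLModel = try? SkinCancerClassifier(configuration: MLModelConfiguration()).model,
        let visionModel = try? VNCoreMLModel(for: coreMLModel)
    else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classifications sorted by descending confidence.
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        DispatchQueue.main.async { completion(top.identifier, top.confidence) }
    }
    // Assumes the model was trained on center-cropped tiles.
    request.imageCropAndScaleOption = .centerCrop

    // Run inference off the main thread; the image never leaves the device.
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage).perform([request])
    }
}
```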

PocketOnco is high-speed, low-cost, and user-friendly, with significantly improved accuracy over the current gold standard, bridging the gap between patients and experts by putting diagnostics at users' fingertips within seconds.

What inspired you (or your team)?

Working in wet-lab research over the past four years across a multitude of areas (DNA damage, cancer biomarker discovery, drug discovery) at the University of Southern California; the University of California, Irvine; and Purdue University, I have experienced the successes of laboratory research while also encountering first-hand the shortcomings of traditional laboratory pipelines. The current cancer diagnosis pipeline takes several weeks; requires costly manual tumor feature segmentation, survivorship estimates, clinical trial matching, and a large team of pathologists, oncologists, and radiologists; and often suffers from inter- and intra-observer variability. Despite the need for an accurate, data-driven approach to diagnosing and prognosing cancers, such a solution does not exist.

It was when I attended the American Association for Cancer Research conference this past April that I realized the true extent of the problem. In a Q&A session with Dr. James Allison, the 2018 Nobel laureate in Physiology or Medicine, an oncologist from Nigeria drew attention to the lack of cancer diagnosis and treatment in underprivileged areas, noting that cancer care is often overlooked in developing countries because infectious diseases dominate, a "double whammy" that leaves a pressing need to treat both terminal and infectious illnesses at once.

Thus, we were inspired to leverage technology to democratize medicine and healthcare, not just to advance precision medicine but also to alleviate global healthcare inequality. Because artificial intelligence had already been applied to disease diagnosis, however, we sought to create something more meaningful, novel, and impactful rather than repeat what had already been done.

My partner is an experienced app developer and UI designer who sees user-friendliness and accessibility as integral to spreading the benefits of technology. Many tech products in production today have confusing interfaces and manuals that only trained professionals can navigate, so he sought to create a design and user experience that comes naturally to all users, regardless of educational level or socioeconomic status. In designing the user interface, he drew heavily on Material Design and Apple's iOS design guidelines, modern UI (user interface) conventions that put users at the center of the product.

My partner is also inspired by the potential for impact in the marriage of technology and humanity. From experimental prototypes to apps used by thousands of people every day, he has programmed dozens of mobile applications, deepening his commitment to building technology that layers an elegant appearance over complex algorithms. For this app, he trained and optimized countless machine learning models on thousands of sample images, yet the interface delivers a seamless experience that masks the application's complexity without hiding the powerful functionality users enjoy.

We decided to create a mobile app platform where users can import or take pictures on their cell phones and run the machine learning model locally, on the device. From past research we saw that, while others had built machine learning algorithms to detect tumor features, no one had yet integrated the technology into a user-friendly, accessible platform offering a holistic, centralized patient experience that combines disease diagnosis, prognosis, treatments, and clinical trials in one place. In doing this, we hope to maximize impact so that the technology is directly accessible to people of all backgrounds, rather than confined to a laboratory or a pathologist's office, democratizing medicine and bridging the gap between patients and experts. A sketch of that capture-and-classify flow follows below.
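The Swift snippet below is a hedged sketch of one standard way a photo could be taken or imported (with the picker's built-in crop) and handed to the local classifier. It reuses the hypothetical classify helper from the earlier sketch; class and method names here are illustrative, not PocketOnco's actual implementation.

```swift
import UIKit

// Sketch of the capture/import flow: the user takes or imports a photo,
// optionally crops it with the picker's built-in editor, and the image is
// passed to the on-device `classify` helper sketched above (hypothetical).
final class CaptureViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func presentPicker(useCamera: Bool) {
        let picker = UIImagePickerController()
        picker.sourceType = useCamera ? .camera : .photoLibrary
        picker.allowsEditing = true  // built-in crop for the region of interest
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        // Prefer the user's cropped image; fall back to the original.
        guard let image = (info[.editedImage] ?? info[.originalImage]) as? UIImage else { return }
        classify(image: image) { label, confidence in
            print("Predicted \(label) with \(Int(confidence * 100))% confidence")
        }
    }
}
```

Running inference entirely on the device keeps images off the network, which matters both for patient privacy and for offline use in the low-connectivity settings the app aims to serve.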