MyVoice is an iOS app that provides Augmentative and Alternative Communication (AAC) to people with disabilities that impede speech.

Using text-to-speech synthesis, MyVoice converts a typed message into spoken words. This remains the app's primary function: giving someone without a voice a means of communication. MyVoice also incorporates on-device machine learning. Using machine learning models, MyVoice can identify the object a user is looking at and speak its name aloud, letting the user request assistance with little effort. Using optical character recognition (OCR) and natural language processing (NLP), MyVoice can recognize the words on a page and read them aloud. All machine intelligence and synthesis tasks run on the device, keeping communication fast for the user.
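As a rough sketch of how these two features can be built on Apple's standard frameworks, the typed-message path maps naturally onto `AVSpeechSynthesizer`, and on-device text recognition onto Vision's `VNRecognizeTextRequest`. The function names below are hypothetical; this is not MyVoice's actual source, just a minimal illustration of the underlying APIs.

```swift
import AVFoundation
import UIKit
import Vision

// Sketch: speak a typed message with the system synthesizer.
let synthesizer = AVSpeechSynthesizer()

func speak(_ message: String) {
    let utterance = AVSpeechUtterance(string: message)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}

// Sketch: recognize printed text in an image entirely on-device,
// then hand the result to the same speech path.
func readAloud(from image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let request = VNRecognizeTextRequest { request, _ in
        // Take the top candidate string from each recognized line.
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        speak(lines.joined(separator: " "))
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```

Because both synthesis and recognition run locally, neither call requires a network round trip, which is what keeps the interaction responsive for the user.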

MyVoice was written in Objective-C and Swift using Xcode (Apple's IDE for iOS and macOS apps).

Development of MyVoice began toward the end of 2015, after attending WWDC15 as a Student Scholarship recipient. MyVoice was released to the public in May 2016, and in August 2016 it was featured on the App Store in the Special Education category.

MyVoice was recently adopted as the primary communication tool for special education classrooms throughout the Lee County school district.

Overall, MyVoice provides a more affordable and efficient means of communication, vital for students with special needs both at home and in the classroom.