Apple introduced a series of innovations aimed at cognitive, visual, auditory, and mobility accessibility. Among them is Personal Voice, intended for people at risk of losing the ability to speak. The feature creates a synthesized voice similar to the user's own, making it easier to communicate with friends and family.
According to the company, users can set up Personal Voice by reading a set of text prompts aloud for a total of 15 minutes on their iPhone or iPad. Combined with the Live Speech feature, it lets users type what they want to say and have Personal Voice deliver the message to the other person.
Apple emphasizes that the feature uses 'on-device machine learning to keep user information private and secure'.
iPhone voice feature targets accessibility
Additionally, the tech giant is rolling out stripped-down versions of its core apps as part of a feature called Assistive Access, aimed at users with cognitive disabilities.
Designed to 'simplify apps and experiences by focusing on core functionality to minimize cognitive overload', the feature includes a unified version of the Phone and FaceTime apps, as well as tailored versions of Messages, Camera, Photos, and Music. These feature high-contrast buttons, enlarged text labels, and additional accessibility tools.
A 'custom accessibility mode' was spotted late last year in a beta version of iOS 16.2. Apple promises these features will be available 'later this year', hinting they could be part of iOS 17.
There is also a new detection mode in the Magnifier app, designed to help blind or low-vision users interact with physical objects that carry text labels.
As an example, Apple explains that a user can point the device's camera at a label, such as a microwave's control panel, and the iPhone or iPad reads the text aloud as the user slides a finger over each number or setting.
What is this technology for?
Voice-mimicking artificial intelligence offers a number of benefits. Most notably, it allows people with speech difficulties, such as those who have lost the ability to speak due to medical conditions, to communicate again through synthesized voices that sound like their own.
Additionally, voice-imitation AI can improve digital accessibility by making apps, devices, and services more inclusive for users with speech impairments. It also has the potential to improve the user experience in many areas, such as customer service, storytelling, and media dubbing.