Apple recently began a research study designed to collect speech data from study participants. Earlier this month, the company launched a new iOS app called “Siri Speech Study” on the App Store, allowing participants who opted in to share their voice requests and other feedback with Apple. The app is available in several markets worldwide but does not register on the App Store’s charts, including under the “Utilities” category where it’s published.
According to data from Sensor Tower, the iOS app first launched on August 9 and was updated to a new version on August 18. It’s currently available in the U.S., Canada, Germany, France, Hong Kong, India, Ireland, Italy, Japan, Mexico, New Zealand, and Taiwan — an indication of the study’s global reach. However, the app will not appear when searching the App Store by keyword or browsing the list of Apple’s published apps.
The Siri Speech Study app offers little information about the study’s specific goals, nor does it explain how someone could become a participant. Instead, it only provides a link to a fairly standard license agreement and a screen where participants enter their ID number to get started. Reached for comment, Apple told TechCrunch the app is only being used for Siri product improvements, by offering participants a way to share feedback directly with Apple. The company also explained that people have to be invited to the study; there isn’t a way for consumers to sign up to join.
The Siri Speech Study app isn’t the only way Apple is working to improve Siri. In the past, Apple had tried to learn more about Siri’s mistakes by sending a small portion of consumers’ voice recordings to contractors for manual grading and review. But a whistleblower alerted media outlet The Guardian that the process had sometimes allowed contractors to listen in on confidential details. Shortly after, Apple made manual review an opt-in process and brought audio grading in-house. This type of data collection continues but has a different aim than a research study.
Unlike this broader, more generalized data collection, the Siri Speech Study works more like a focus group, aiming to better understand Siri’s mistakes by combining the collected data with human feedback. With the Siri Speech Study app, Apple said, participants provide explicit feedback on a per-request basis. For instance, if Siri misheard a question, the participant could explain what they had actually asked. If Siri was triggered when the user hadn’t said “Hey Siri,” that could be noted. Or if Siri on HomePod misidentified the speaker in a multi-person household, the participant could mention that, too.
Another differentiator is that none of the participants’ data is shared with Apple automatically. Instead, participants see a list of the Siri requests they’ve made and then select which to send to Apple with their feedback. Apple also noted that it receives no user data except what participants directly provide. Apple understands that a virtual assistant that can reliably understand you is a competitive advantage.
Google Assistant’s advanced capabilities are often a key selling point for Android devices. Earlier this year, Apple scooped up ex-Google AI scientist Samy Bengio to help make Siri a stronger rival to Google Assistant. Meanwhile, in the home, Alexa-powered smart speakers dominate the U.S. market and compete with Google in the global landscape outside China. Apple’s HomePod has a long way to go to catch up.
But despite the rapid progress in voice-based computing in recent years, virtual assistants can still have difficulty understanding certain types of speech. For example, earlier this year, Apple said it would use a bank of audio clips from podcasts where users had stuttered to help it improve its understanding of this kind of speech pattern. Assistants can also stumble when multiple devices in a home are listening for voice commands from across several rooms. And assistants can make mistakes when trying to differentiate between family members’ voices or understand a child’s voice.