The Siri app parses the sound, interprets the request, and hands it off to an appropriate web service, such as OpenTable, Yelp, or CitySearch. It displays the results onscreen as it goes, giving you a chance to correct or adjust your request via onscreen taps.

It's the most sophisticated voice recognition to appear on a smartphone yet. While Google's Nexus One offers voice transcription capabilities — so you can speak to enter text into a web form, for instance — the Nexus One doesn't actually interpret what you're saying.
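The parse-interpret-dispatch flow described above can be pictured as a simple routing pipeline. The sketch below is purely illustrative: the intent rules, service names, and function structure are hypothetical and do not reflect Siri's actual implementation.

```python
# Hypothetical sketch of a parse -> interpret -> dispatch pipeline,
# loosely modeled on the behavior described above. All names here
# are illustrative assumptions, not Siri's real API.

def interpret(transcript):
    """Map an already-transcribed request to an intent (toy keyword rules)."""
    text = transcript.lower()
    if "restaurant" in text or "table" in text:
        return "dining"
    if "weather" in text:
        return "weather"
    if "directions" in text:
        return "directions"
    return "unknown"

# Each recognized intent routes to a different (hypothetical) web service.
SERVICES = {
    "dining": "OpenTable",
    "weather": "WeatherService",
    "directions": "MapsService",
}

def dispatch(transcript):
    """Interpret the request and hand it off to the matching service."""
    service = SERVICES.get(interpret(transcript))
    if service is None:
        return "No service found"
    return f"{service} handles: {transcript}"

print(dispatch("Book a table for two"))  # routed to the dining service
```

The key distinction the article draws is between transcription (producing `transcript`) and interpretation (the `interpret` step): a transcription-only system stops at the text, while an assistant like Siri continues through intent mapping and service dispatch.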
The voice recognition and interpretation abilities built into Siri have their origins in artificial intelligence research at SRI, a legendary Silicon Valley R&D lab that was also the birthplace of the mouse and of the graphical user interface. Spun out of SRI in 2007, Siri garnered a lot of attention for its ambitious plans to develop a virtual personal assistant. Actually bringing the product to market has taken quite a bit longer than expected.
In a demo shown to Wired.com, Siri responded quickly to spoken requests, answering questions about restaurants, directions, and the weather with relative ease. It's well integrated with about 20 different web information services, and Siri representatives say their application programming interface will allow many others to connect in the future. [via Wired]