“The Babel fish is small, yellow, leech-like, and probably the oddest thing in the universe. It feeds on brain wave energy, absorbing all unconscious frequencies and then excreting telepathically a matrix formed from the conscious frequencies and nerve signals picked up from the speech centres of the brain, the practical upshot of which is that if you stick one in your ear, you can instantly understand anything said to you in any form of language: the speech you hear decodes the brain wave matrix.” (Douglas Adams, The Hitchhiker’s Guide to the Galaxy)
The launch of Pixel Buds, Google’s first foray into the portable audio market, was somewhat buried behind the announcement of the latest iteration of the previously maligned Chromebook and an updated Pixel smartphone range; it was framed as a seemingly innocuous segue to address the lack of a 3.5mm audio port on the Pixel 2.
On the face of it, wireless headphones to rival Apple’s AirPods. Nothing new, or so it would seem. Except for one distinct feature, demonstrated during Google’s annual hardware presentation in an exchange between an English speaker and a Swedish speaker: live translation. Driven by neural machine translation and fuelled by the increasingly widely adopted Google Assistant, the demo produced a relatively seamless conversation with very little lag, the translated speech relayed over a wireless data connection and fed to the Buds via Bluetooth.
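The pipeline implied by that demo can be sketched as three stages: speech recognition, neural machine translation, and speech synthesis, chained together with the result streamed back to the earpiece. The sketch below is purely illustrative; every function is a stand-in stub with a hard-coded phrase table, not Google's actual APIs (the real work happens in cloud services, not on the buds themselves).

```python
# Illustrative sketch of a live-translation pipeline: STT -> NMT -> TTS.
# All functions are hypothetical stubs standing in for cloud services.

def speech_to_text(audio: bytes, source_lang: str) -> str:
    """Stub: a real system would call a cloud speech-recognition service."""
    # Pretend the captured audio decodes to a fixed Swedish phrase.
    return {"sv": "Hej, trevligt att träffas"}.get(source_lang, "")

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Stub: stands in for a neural machine translation backend."""
    lookup = {("sv", "en", "Hej, trevligt att träffas"): "Hi, nice to meet you"}
    # Fall back to the original text when no translation is known.
    return lookup.get((source_lang, target_lang, text), text)

def text_to_speech(text: str, target_lang: str) -> bytes:
    """Stub: a real system would synthesise audio for Bluetooth playback."""
    return text.encode("utf-8")

def live_translate(audio: bytes, source_lang: str, target_lang: str) -> bytes:
    """Chain the three stages, as any wearable translator must."""
    heard = speech_to_text(audio, source_lang)
    translated = translate(heard, source_lang, target_lang)
    return text_to_speech(translated, target_lang)

print(live_translate(b"...", "sv", "en").decode("utf-8"))
# → Hi, nice to meet you
```

The lag the demo minimises lives between these stages: each hop (capture, cloud round-trip, synthesis, Bluetooth playback) adds latency, which is why the quality of the data connection matters so much to all of the devices discussed below.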
While the superb Google Assistant has been capable of this act via a handset from launch, a number of factors present the Pixel Buds as a somewhat indispensable accessory for those wishing to adopt this technology. Chief among these is the price: $159, a price point that matches the aforementioned AirPods while also bringing this advancement straight to the attention of the mass market and, subsequently, the wider international business market, beyond just those individuals willing (and able) to splash out on pricier tech produced at a vastly smaller scale.
The form factor is certainly worth noting as regards the intuitiveness of the Pixel Buds. Each bud sits at the entrance to the ear canal, rather than inside it, so the user is not isolated from ambient noise or, indeed, from other contributors to a conversation. The cord between the buds can be cleverly pulled through the back of each bud unit and adjusted into a loop to fit the contours of the user’s ear, rather than relying on the traditional clip assembly that previous on-ear buds have predominantly utilised. All of this adds to the appeal of the device: functional earbuds with a built-in automation factor, bringing the allure of home solutions such as the Amazon Echo or the Apple HomePod to the hands-free user on the move; on-ear IoT, in effect. It is difficult not to be swept up in excitement over the implications this device presents for international personal communication.
Live translation devices are, of course, nothing new on a commercial level, but wearable interpretation devices are still a relatively new concept. Contemporary rivals in this sphere include a number of crowdfunded products: Waverly Labs’ Pilot ships to early adopters later this year, while the Clik, presented at CES in January, offers multilingual conferencing and auto-transcription in 37 languages at a significant premium over the Pixel Buds. These products depend on decent wireless data connections, limiting their usefulness to built-up areas and to those with a very accommodating overseas data plan. Japanese company Logbar’s ili device does not require any connection and can be used in the most remote of environments, but as a one-way communication tool it is limited to basic yes/no or physical-instruction scenarios, and it presently bridges only around 10% of the languages Google Assistant currently boasts. Finally, in an office context, Microsoft’s Skype Translator is an excellent tool for conference calling, but it is highly dependent on the quality of both the connection and the users’ microphones.
However impressive and provisionally accessible these devices and services are, each is ultimately designed for a single function. Pixel Buds, by contrast, allow for reasonable one-stroke control of recent high-end smartphones (for media audio, Android 5.0 or iOS 10.0 and above will work, whilst Android 6.0 Marshmallow and a dedicated iOS app are needed for Google Assistant features).
The shift of translation from a dedicated device to a multipurpose platform is a substantial leap, affording Google a significant advantage over its competitors in both the wearable audio and translation services spheres by giving each a mutually reinforcing selling point. In a crowded personal audio market, this feature enables the Pixel Buds to stand out very clearly, whilst also neatly incentivising consumers towards the Pixel 2 range of smartphones, still relatively unproven commercial commodities following the difficulties that beset the preceding Nexus smartphone range.
It is worth noting that the Pixel Buds, and Google Assistant by extension, are not foolproof technologies. Certain foibles are unavoidable in any translation device, primarily the nuances of accents, localised intonation, ontological language and, crucially, the interpretation of prosodic qualifiers. The computational-linguistics challenges arising from these roadblocks will be further refined in subsequent iterations, with the eventual goal of turning the use of such services into a passive act rather than an active choice: the world around the user interpreted automatically, be that in a business meeting or a town centre abroad, making the technology itself as immersive as one’s surroundings.
There is a long way to go yet, but the Pixel Buds provide an indicator that this is not only possible but inevitable. Today, they represent a significantly disruptive technology, able to circumvent cultural barriers and make the world that little bit smaller; and with each progression we edge closer to the realms of science fiction, and to catching, and adopting, the elusive Babel fish.