Column

Community Lives

It’s good to talk

Jeannette Stewart

Jeannette Stewart is the former CEO of CommuniCare, a translation company for life sciences. An advocate for the language industry, she founded Translation Commons, a nonprofit online platform facilitating community collaboration.

Sarah Calek

What started as a mere whisper has grown steadily into an almighty roar.

I’m talking about voice-controlled technology. No one can have missed the hype trumpeting the advent and rapid proliferation of all kinds of devices that accept the spoken word and dispense with other means of input. Those of us in the language community in particular will have also pricked up our ears with the arrival of features upgraded from monolingual to multilingual capabilities.

Sundar Pichai, Google CEO, has spoken repeatedly of how our device-empowered world is segueing from mobile to AI-dominated technology. The introduction of Levi’s Jacquard jacket, a gesture-sensing wearable, is breaking new ground in human-machine communication. How long, we may well ask, will it be before we start talking to our clothes? After all, we can already talk to our watches, microwaves and cars. As this revolution sweeps across our lives, just what will the consequences be? In particular, what impact can we expect on the language community? Do we face another doom-laden future for our hard-won professional skills? Or is there any hope for evolutionary advancement?


Having an education in classical Greek, I’m well aware of the huge change that in just a few centuries took our culture from its pre-Homeric oral form to a deluge of written output. It seems clear that at the present time, we are undergoing changes that are fundamentally altering lives across the planet. Many ancient languages fell into disuse in classical times and now exist only in the pages of dusty tomes in the philology section of graduate-school libraries. Will future historians look back at us and earn PhDs from trying to decipher languages lost in a planetary blizzard of tech dominated by a restricted language hierarchy? Or will our community perform yet another Houdini-style escape from a seemingly certain demise?

Natural language processing has a long history of varied activities stretching back at least five or six decades. Language understanding, speech recognition and machine translation are just three examples of language-based technology that are now commonplace. What is perhaps less well known is that modern development environments provide libraries that work easily with all manner of hardware. Our phones can still amaze us with the ways in which we are able to interact with them, though less so as time passes and we begin to take such features for granted. In researching this article and with the help of a couple of techie friends, I have seen for myself just how straightforward it is to code basic language processing apps in both Java and Python. No doubt, any other computer language could be used to the same end. I’m also sure that the vast code resources at SourceForge, the open source repository, and many other platforms contain downloadable apps for language tasks.
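To give a sense of just how approachable this has become, here is a minimal, stdlib-only Python sketch of one classic language task: identifying a text’s language by comparing character-trigram profiles. The reference samples and language labels below are illustrative toys of my own invention, not any library’s API; a real system would build its profiles from large corpora.

```python
from collections import Counter

def trigrams(text):
    """Lowercase character trigrams -- a classic cheap signal for language ID."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def similarity(a, b):
    """Cosine-style overlap between two trigram profiles."""
    shared = set(a) & set(b)
    num = sum(a[g] * b[g] for g in shared)
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

# Tiny illustrative reference samples; real profiles come from large corpora.
profiles = {
    "en": trigrams("the quick brown fox jumps over the lazy dog and the cat"),
    "es": trigrams("el rapido zorro marron salta sobre el perro perezoso y el gato"),
}

def identify(text):
    """Return the profile language whose trigrams best match the input."""
    return max(profiles, key=lambda lang: similarity(trigrams(text), profiles[lang]))
```

A handful of lines, no special hardware, no engineering department required — which is exactly the point: any of us can experiment with language technology on an ordinary laptop.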

The reason I mention these and not proprietary apps is because I want to emphasize that language technology is no longer confined to the province of the engineering world: any of us across the entire community can get in on the act. You might not make a red cent from your efforts, but at least you would be in control of the language files you produce and might well be preserving endangered content for some future era.

Despite my own skepticism, the Internet of Things (IoT) has persisted and is now firmly rooted in our culture. We read of the excitement of chip makers rushing to produce micro-microprocessors that will be embedded… well, everywhere! We seem to have progressed well beyond much-vaunted vaporware and shiny new tech toys into an era that might even raise George Jetson’s eyebrows in surprise and approval. A recurrent mantra I hear is that we will not know how beneficial these devices will be for us until they’re with us. We won’t know what we need until we’ve got it. Skepticism aside, I wonder quite seriously about the impact this ubiquitous, enabling technology will have on our community. Since many devices will be voice-driven, and given the size of the multilingual market, a purely monolingual interface just does not seem likely. Already, Alexa, for example, can be set to speak with a growing variety of accents. Good grief, you can even have Gordon Ramsay’s voice bark back at you! I have no idea what the great chef’s commercial arrangements are with Amazon, but I’d be interested to find out.

I was lucky enough to meet Susan Bennett, the actress who is the voice of Siri. She is still, I believe, waiting for Apple to acknowledge this officially, but it seems they’re dragging their heels a tad. Is it just me, or is there a vague echo of the content ownership issues that we in the language community have experienced? Disputes over royalties or copyright rarely seem to be resolved in the plaintiff’s favor. Financial considerations aside, there are identity issues lurking in the bushes on the road ahead. Gender, nationality and politics, for example, will at some point result in winners and losers in the language game.

For example, one unsettling development is the use of computer-generated speech as the interface with users, an approach related to augmentative and alternative communication. This is a very good thing when employed to aid those with disabilities: such devices let users communicate through text-to-speech, gestures or vocalized phonetic scripts. However, these capabilities do not stop there. Recent reports tell of products like Google’s personal assistant synthesizing human speech to book restaurant tables, whether that was actually the humans’ intention or not.

In itself this is reasonably innocuous, but ramp this capability up and the stakes become a lot more worrisome. Another demo showed an AI-driven app simulating former president Obama delivering a public service announcement. Proof of concept is one thing, but we have to wonder what kinds of abuse such technology could be put to. For the language community, concerns about the linguistic skills being employed here, the ownership of voice-generated corpora and the implications for maintaining language standards certainly trouble me.

Another consideration that occurs to me is storage, and from that source spring several more streams of possible contention like privacy, security and data analysis. We are by now accustomed to previously unthinkable volumes of data storage. From terabytes to petabytes to exabytes, and to infinity and beyond! Vast repositories in remote locations, now even underwater, hold every byte we produce in our madcap, multimedia existence. Of course, the majority of these stores are inaccessible to us, but we are constantly reminded of the presence of that amorphous storehouse, the cloud. We want to know that what we put there can be retrieved by us and only us unless otherwise specified. Given the present menace of cybercrime, just how secure is our data? This is a question that is much easier to ask than to answer. Will all the words we feed into the ears of our myriad multilingual devices be similarly imperiled? If our device deals with sensitive data relating to our health or finances, for example, we need our privacy to be assured. We also want our content ownership rights to have full integrity. Any hope we may have had that blockchain would give us a foolproof guarantee of that now feels like a distant dream. Could there be a revival? That seems about as likely as our financial institutions suddenly deciding to give cryptocurrencies their blessing.

The impact that data analysis is having on our world has taken almost everyone by surprise. It’s not just the scale of stored information mentioned above that is surprising, it’s just how valuable an asset it can be when super-smart techniques are used to process it. Entirely new facets of a vast whole can be revealed and hidden patterns discovered. It may seem to be a stretch to invoke a technology more associated with cosmology than languages, but it’s not. Enterprises will explore any and all avenues in the search for even the faintest edge over the competition, and the words we write and utter are fair game to them.
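Even the simplest analysis can surface patterns hidden in the words we produce. As a toy illustration of the kind of mining described above — and nothing like the industrial-scale techniques enterprises actually deploy — here is a short stdlib-only Python sketch that finds the most frequent adjacent word pairs in a text; the function name and sample text are my own, purely for demonstration.

```python
import re
from collections import Counter

def top_collocations(text, n=3):
    """Return the n most frequent adjacent word pairs in a text --
    a toy example of mining patterns from the words we write and utter."""
    words = re.findall(r"[a-z']+", text.lower())
    pairs = Counter(zip(words, words[1:]))  # count each adjacent pair
    return pairs.most_common(n)
```

Scale that idea up across billions of utterances, add smarter statistics, and it is easy to see why the words we write and speak are such a prized asset.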

Whether spoken, written or even digitally generated, our words are assets with value. Furthermore, we can bet that when it becomes possible to harvest our raw neurobabble, those thoughts will be grist to the mill of commerce too.

I don’t want to come across as some lingo-Luddite. Emphatically, I am not. I embrace the language technologies that have thrust our skills into the forefront of global progress. But I do think we need to proceed with caution or at least a strong awareness of what machine-generated language implies. Given that big-name labels like Levi’s or IKEA are now selling things that extend our ability to communicate, it could be good to employ some risk analysis in assessing the potential impact of these innovations.

Most importantly, however, I suggest we attempt to muster some thought across the community about the role that our highly trained, uber-motivated members can play. What opportunities are coming our way that just might prove to be beneficial for a healthy, sustainable future? A crystal ball would be ideal, but ideas, innovations and even entirely new concepts of language use are tangible possibilities.