While generative large language models are not inherently nefarious, it is essential to recognise that some of their consequences may disrupt the delicate equilibrium of our economic systems, thereby underscoring the need for increased regulation within this sector.
My pessimism dates back just a few days, to the Italian press reaction to a new video shared via the official profile of the Minister of Infrastructure, Matteo Salvini. I have followed the Italian political scene since I was a teenager, but that was the first time I saw Salvini speaking impeccable French, inviting people to his annual party meeting in Pontida. The language choice was motivated by the presence of Marine Le Pen, leader of the French nationalist party, on the Pontida stage. Beyond the political ramifications of this event, what immediately surprised me while watching the video was the fluency with which the Minister was expressing himself. Unable to come up with an answer by myself, I decided to take a closer look at the text of the post, where I found a disclaimer praising the potential of AI and acknowledging the use of HeyGen for the realisation of the video. According to its own website, HeyGen is an AI video generator that lets you create videos from text in minutes with AI-generated avatars and voices, with the possibility to choose from over 100 avatars, 300 voices, and 40 languages to suit your needs. You can also use HeyGen to create talking photos, animating any photo with your own script. Its usage, despite what the developers state, is no longer restricted to marketing, sales outreach, and learning –that is why I am blogging after the break.
My training as a translator is probably at the root of my post-summer disquietude, amplified by the seminars and webinars I have been attending since last year. One event in particular, hosted by the Center for Culture and Democracy, left a bittersweet impression I could not fully understand at the time. Dr Monojit Choudhury, a member of the Microsoft team and one of the participants in that round table, found himself at the centre of an intense discussion among the convenors, to the point that he became the perfect "target". Amidst the fervour, his tangible frustration led him to what I believed, and still believe, to be a candid confession. Dr Choudhury argued that the pace is too daunting even for industry insiders, with rapid and radical developments occurring almost every six months, and that, he contended, hinders ethically sustainable product building. Further, he noted that big companies are driven by their customers, for whom business is the leading power, and that this remains the prime driver of big tech companies' current investment strategies for NLP tools.
Initially, I listened to this dialogue without particular astonishment, taking for granted that companies prioritise revenue growth over anything else. However, now that I witness the impact of high-tech developments on the translation field –my field– I am more concerned. If AI becomes capable of transforming politicians' voices to make them speak different languages on the fly, with fewer revisions –I imagine– and almost no human control over the process, the future of the field is uncertain. Why should aspiring interpreters go through rigorous interpreting training if they are potentially replaceable? Institutions like the European Parliament or the United Nations may find less need for their services. Moreover, this concern extends beyond traditional translators to voice actors. If iconic figures like Robert De Niro can perform in English and then have AI dub their voices into multiple languages, what is the point of their craft anymore?
I acknowledge that this will not usher in an apocalyptic scenario, but, akin to the automation of manual labour, the long-term trajectory suggests a reduced demand for specialists, potentially displacing a substantial portion of the current workforce in the field.
What potential solutions can address this issue? Perhaps the most direct approach is to establish legally binding workforce compositions within companies, ensuring that a designated percentage of roles cannot be substituted by generative large language models –a sort of "human quotas". Drawing a parallel with the concept of "gender quotas" offers an interesting perspective on the potential effectiveness of such measures. Gender quotas have been successfully employed in various contexts, particularly in corporate boardrooms and other leadership positions, to address gender imbalances. Likewise, "human quotas" in the context of generative large language model implementation could involve setting limits on human replacements. In my view, this would entail drawing boundaries on the extent to which AI can supplant human roles, thereby ensuring that human expertise and judgment remain integral to the deployment of AI technologies.