The week started with the "AI-powered multilingualism: friend or foe for European Universities" keynote panel. The discussion focused on the ethical considerations and challenges of implementing AI for multilingualism, the role of AI in enhancing multilingual communication in academia, and a presentation of case studies of successful AI-powered multilingual initiatives in European institutions.
On the latter point, the European Commission has developed AI-Based Multilingual Services, a portal which offers both web pages and APIs for machine-to-machine access. It is designed to be used by EU institutions, public administrations, academia, SMEs, NGOs, Digital Europe Programme projects and EPSO candidates.
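To give a flavour of what such machine-to-machine access could look like in practice, here is a minimal sketch of a client calling a translation API over HTTP. The endpoint URL, field names and authentication scheme below are hypothetical placeholders, not the portal's actual contract; real access requires registration and the official documentation.

```python
# A minimal sketch of machine-to-machine access to a translation service.
# CAUTION: the endpoint, parameter names and authentication are illustrative
# placeholders, not the actual EC API; consult the portal's documentation.
import requests

API_URL = "https://example.europa.eu/translate"  # hypothetical endpoint
API_KEY = "your-registered-api-key"              # hypothetical credential

def translate(text: str, source: str, target: str) -> str:
    """Send one text snippet for translation and return the result."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text, "sourceLanguage": source, "targetLanguage": target},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly on HTTP errors
    return response.json()["translatedText"]

if __name__ == "__main__":
    print(translate("Multilingualism matters.", "en", "fr"))
```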
"The tools on the platform are secure by design, by which we mean that we don't access your data. Whatever you upload into the system, you remain the owner of your data throughout the entire process. We don't store the data, we don't collect it, we don't share it. And more importantly, we don't even use it for training the system. So this is really unique in today's AI landscape. Everything happens exclusively in the EU zone. It is as secure as it can get, but of course, there's no way to guarantee 100% security," explained Ms. Ágnes Farkas, Team Leader - DGT AI Language Services Advisory.
Multilingualism is not multiculturalism
But when it comes to translations, an important point was raised by Fanny Meunier, Professor of English language, linguistics, and teacher education at UCLouvain (member of the Circle U Alliance):
"Those tools are not culturally neutral: if you ask the same question to ChatGPT, to DeepSeek or to Grok, you get different answers. So one thing that is important to remember is that multilingualism does not mean multiculturalism. Just having the words doesn't say much about the culture. If we over-rely on AI, we can miss very embarrassing mistakes, especially in terms of tone, culture, humour or context," warns the professor.
AI tools: sexist, racist and politically biased
The keynote panel took place on the International Day for the Elimination of Violence Against Women, which brought attention to how biased AI tools are, as they very much depend on the culture of their creators and trainers. Prof. Damien Hansen, Chairholder in Artificial Intelligence at the Faculty of Arts, Translation and Communication at ULB, explained:
"AI is actually a pretty violent tool when it comes to reproducing those sorts of violence against women, whether they are symbolic or actually physical. We all know that those outputs that, for example, chat GPT gives you are often prejudiced, biased, sexist towards women. We've all seen plenty of images or texts. And if you use AI to hire people, which is something that is getting more and more commonplace, it's going to discriminate against women. We already know that, we have numbers. But I could also mention the deep fakes that are used to generate childlike images, porn images of women that you can actually freely use on some social media today".
And the situation is the same in terms of racial discrimination and political affiliation, too, says the professor, who warns that only 40% of the answers generated by AI tools in search engines are correct. The rest may contain hallucinations, disinformation and propaganda. The tools' energy consumption and environmental impact are also highly damaging.
"The models that we have today are the same ones that came around in 2014. We haven't really progressed, we just have bigger systems. They still don't have access to context, to intention, to the target audience. You still need a human making the decisions - and not behind the scenes, but actively making decisions," reassures Prof. Marius Gilbert, Vice-rector of research and valorization and Vice-rector of culture and scientific mediation at ULB.
In other words, it's the prompt that really matters - the command given by the human - and also the final check, made by the human eye and mind.
AI and disinformation
The discussion continued the next day, when Prof. Bogdan Oprea, from the University of Bucharest, brought to attention the ethical use of AI, its impact on disinformation - whether positive or negative - and its level of privacy and transparency. It was the moment when everyone realised that, when it comes to disinformation, propaganda, the reinforcement of conspiracy theories, pushing people to suicide, mass surveillance or data privacy, legal gaps make it difficult to hold anyone accountable.
Tools for fact-checking
Having all that in mind, the next and most important step - especially for a communicator - is to learn how to identify and debunk disinformation. Of great help with that was Prof. Laurence Dierickx, from ULB, who shared dozens of tools and tricks that can help verify and confirm information. The same artificial intelligence tools that can create problems can, if used properly, help counteract disinformation.
The simplest and most readily available asset is "knowing how to Google": using search operators such as quotation marks for an exact phrase, a minus sign to exclude a term, or site: to restrict results to a single domain. But there are also some platforms that might help:
- howtoverify.info - offers an overview of workflows and tools for verification of digital media
- bellingcat.gitbook.io/toolkit - includes satellite and mapping services, tools for verifying photos and videos, websites to archive web pages
- factinsect.com - automated AI fact checker
- metadata2go.com - verifying images and multimedia content (a similar check can be run locally; see the sketch after this list)
- aivoicedetector.com - detecting AI-generated audio
and many, many more, which you can keep organised and always at hand by creating your own toolbox.
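For communicators who want a quick local check, the metadata lookup that a service like metadata2go.com offers can be approximated in a few lines of Python. This is a minimal sketch using the Pillow imaging library, with an example file name; note that absent metadata is not proof of manipulation, since many platforms strip EXIF data on upload.

```python
# A minimal local sketch of the kind of metadata check that services like
# metadata2go.com perform: reading the EXIF tags embedded in an image.
# Requires the Pillow library (pip install Pillow); the file name is an example.
from PIL import Image, ExifTags

def print_exif(path: str) -> None:
    """Print human-readable EXIF metadata for an image, if any is present."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found - it may have been stripped.")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag id to its name
        print(f"{tag}: {value}")

if __name__ == "__main__":
    print_exif("photo_to_verify.jpg")  # example file name
```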
What can be automated in crisis communication?
The afternoon continued with a training on the role of AI in managing and responding to crises, held by Jérémy Jenard, Communication Officer at ULB. He reminded us that, no matter how helpful the tools are, AI cannot carry the moral responsibility of communication; its ethical burden remains profoundly human.
Some of the core ideas Jérémy Jenard underlined were:
- The reputational damage arises not only from the event itself, but from how stakeholders interpret responsibility.
- If unattended, an issue becomes a crisis as it gains visibility (often via social media) and provokes rising emotion within and outside the community.
- The structure of the key message should be fact - emotion - action: we know, we care, we do, we'll get back to you
- The average time spent reading a text is 8 seconds: keep it short.
- The audience needs to feel seen, heard and taken seriously.
- Do not conceal facts!
- Issue communication writes the footnotes, but crisis communication writes the headlines.
Artificial Intelligence for the Common Good
The last day of training took the team to FARI - the Artificial Intelligence for the Common Good Institute. An independent, not-for-profit Artificial Intelligence initiative led by the Vrije Universiteit Brussel (VUB) and the Université libre de Bruxelles, FARI aims to be an interface between the university and society in the specific field of AI.
We found ourselves in a true "library of robots", surrounded by dozens of projects created to ease and improve people's lives. After learning how much good AI can do for each of us, and for the world in general, if used properly, we ended the day with the "AI Tools to Boost Content Creation for Communicators" workshop and started exploring AI tools applied directly to the professional context. The chosen topics were:
- Text Drafting & Adaptation
- Translation & Multilingual Communication
- Visual & Multimedia Content
- Events (Planning & Delivery)
AI: only a vehicle, the wheel is still in human hands
These days showed participants that "intelligent" instruments are now part of our ecosystem and we cannot completely ignore them. But we can shape the way in which we use them as long as we understand that the human impact on the results is higher than we probably think.
It's normal to still be afraid of them, but we have the means to stay connected to our human side and to the pleasure of communicating and creating content, while thinking about useful and reasonable ways of using these tools. Working with AI is very much a collaborative process - it's not as if one just presses a button and the machine does the whole job.
There were many conclusions drawn after these days, but if we were to stick to one, it's that AI has become a potential helper in facilitating collaboration on communication projects across different universities, languages and cultures.
Will it take our jobs as communicators in the near future? Most likely not - but someone who knows how to use it most likely will, as one of the participants concluded. The technology can turn into a real helping hand, as long as we keep raising awareness and educating ourselves and those around us about both the friendliness and the dangers of AI.