Christos Ellinides, Acting Director-General of the European Commission’s Directorate-General for Translation and a keynote speaker at the recent ATC – EUATC Ethical Business Summit in London, outlined how the Commission has been addressing both the threats and opportunities posed by the growing importance of generative AI for more than five years, starting with the adoption of its strategy ‘Artificial Intelligence for Europe’ in 2018.
In a powerful speech (see full text below), he told a packed audience that with generative AI moving at a breakneck speed that even pioneers such as Yoshua Bengio could not have anticipated, “we have a narrowing window of opportunity to guide this technology responsibly.”
He went on to describe three pieces of draft legislation being brought forward by the European Commission: the Artificial Intelligence Act, the AI Liability Directive, and the Defective Products Liability Directive.
He said that the EU Artificial Intelligence Act, now at its second stage of debate in the European Parliament and the Council, will be the world’s first comprehensive legal framework for artificial intelligence and is already becoming a blueprint for the rest of the world as it seeks to manage the growing importance of AI, not only in the language industry but in our everyday lives.
The key elements of the Act are outlined in the full text of Christos Ellinides’ speech below:
ATC – EUATC keynote: Ethical, sustainable business from EU perspective by Christos Ellinides
- Why we need to talk about ethics in AI for our language services.
It is good to be here today and share with you the European Commission perspective on AI and the ethical dimensions attached to this technology.
During the morning session we heard some useful thoughts and interesting positions about ethical and sustainable business, so I hope I can chip in with some institutional positions to broaden the perspectives on this topic.
Ethics has always been critical to the success of every translator and every translation service. Conveying precisely a message from one language to another, knowing that your reader is trusting you to be accurate and true to the original message, is at the very heart of what translators do.
And as with every other major technological evolution in the translation profession, the discussion on ethical issues is inevitable in the case of AI as well.
The evolution of AI is fast paced and different to other technological developments we experienced in the past.
I can tell you that AI is not IT. It is not the traditional IT we are used to. It cannot be developed, deployed, used, and managed like the rest of our traditional IT systems. What makes it unique is the capability of self-learning together with the capacity to process vast amounts of data and reach a decision in fractions of the time that it would take a human being.
We cannot ignore the speed of developments in the AI field; it was only in November last year that OpenAI released ChatGPT, and since then we have seen new AI-powered systems and products appear daily.
As a result, the ethical use of AI must be one of the most pressing issues in industry and government today.
It is also particularly close to my heart because the Directorate-General for Translation is among the leading services for the European Commission’s work on AI.
So, I would like to share with you our approach to AI; how we are navigating and responding to the latest developments in order to get in front of a technology that is bound to grow and impact our lives and work now and in the future.
Language-based AI is the most advanced form of AI in use so far. Our industry has used predictive AI for years now. Predictive AI collects and analyses data to identify and understand patterns and to predict outcomes.
In the Commission we have used predictive AI in our machine translation engines for decades to search, retrieve and generate suggestions for our translators, who then assess and process the machine translation output to improve the linguistic quality of our documents.
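The retrieve-and-suggest step described above can be illustrated with a minimal translation-memory lookup. This is a hedged sketch, not DGT’s actual engine: it scores a new source sentence against stored segments by string similarity and returns the best match above a threshold, which a translator would then assess and post-edit.

```python
import difflib

def best_tm_match(source, memory, threshold=0.7):
    """Return the (source, target) pair from a translation memory whose
    source segment is most similar to the new sentence, or None."""
    best, best_score = None, threshold
    for stored_source, stored_target in memory:
        score = difflib.SequenceMatcher(None, source.lower(),
                                        stored_source.lower()).ratio()
        if score >= best_score:
            best, best_score = (stored_source, stored_target), score
    return best

memory = [
    ("The committee approved the proposal.", "Le comité a approuvé la proposition."),
    ("The report was published yesterday.", "Le rapport a été publié hier."),
]
match = best_tm_match("The committee has approved the proposal.", memory)
```

A production system would use fuzzy matching tuned to segment structure and terminology, but the retrieve-then-review workflow is the same.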
But AI, as we know, is going through another evolutionary cycle. Generative AI is a radical departure from the past.
It uses a combination of machine-learning techniques, including deep-learning algorithms, to generate new content. Unlike predictive AI, generative AI harvests and learns from existing content to create new patterns and generate this new content. It is more efficient and creative. In fact, the results sound more fluent, and typically the tone is confident and assured.
But it still depends on the availability, quality and quantity of basic linguistic data and how that data is harvested, curated, tagged, and processed. Garbage in will still produce garbage out, despite sounding more convincing than ever.
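The curation step mentioned here can be made concrete. The sketch below is a simplified illustration, not the Commission’s pipeline: it applies three common filters to a parallel corpus, dropping empty segments, exact duplicates, and pairs whose length ratio suggests a misalignment.

```python
def curate_parallel_corpus(pairs, max_ratio=2.0):
    """Filter a parallel corpus: drop empty segments, exact duplicates,
    and pairs whose source/target length ratio suggests a misalignment."""
    seen, kept = set(), []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue                     # empty segment
        if (src, tgt) in seen:
            continue                     # exact duplicate
        ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
        if ratio > max_ratio:
            continue                     # probable misalignment
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

raw = [
    ("Good morning.", "Bonjour."),
    ("Good morning.", "Bonjour."),                   # duplicate
    ("A very long sentence about policy.", "Oui."),  # misaligned
    ("", "Texte sans source."),                      # empty source
]
clean = curate_parallel_corpus(raw)
```

Even filters this simple remove a surprising amount of the “garbage” before it ever reaches a model.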
It is important to take a step back from the on-going hype and recognise that we are just at the beginning of a journey towards a destination that is not yet known to us.
Developments in Generative AI technology are moving at an impressive speed – a speed which not even its developers anticipated.
Only three months ago, in an article in Fortune magazine, one of the three ‘godfathers’ of AI, Yoshua Bengio, said he has regrets about his life’s work and that he feels “lost” because of the unpredictable direction AI is headed in. So, we have a narrowing window of opportunity to guide this technology responsibly.
Like any leap in technology, generative AI creates new unknowns. The key to nurturing and exploiting new technologies is to monitor and guide their development so that we can minimise potential risks and deliver benefits. They could lead to new opportunities, and we should be bold enough to explore but they could also misfire and shoot off in the wrong direction.
The inherent risks in using generative AI include liability, bias, disinformation, data hallucination and accuracy, and most definitely privacy concerns.
The technology in itself can be a multiplier of other risks if it is used without ethical guidance.
We need to identify and understand these opportunities and threats so that as regulators and industry alike, we can mitigate the risks using a range of tools and safeguards. That involves significant crystal-ball gazing and perhaps more conjecture than we would like. But it also involves objective facts, knowledge, and investigation. More important, it involves using the technology responsibly.
We need to ask ourselves what decision-making power we should give AI tools, and how much. When and for what type of content? Questions that we ask ourselves daily are: which documents should we be using AI to translate or analyse? Why? With what checks? And at what risk?
The answers right now are still a matter of subjective opinion, judgement and sustained ongoing ethical reflection. We are only at the very beginning of the new AI era.
But the time to deal with these ethical questions is now.
- The Commission and AI
Before I turn to how we are addressing the specific issue of ethics in AI for language services, let me give you a brief overview of the Commission’s overall policy approach to AI.
Though the AI hype really took off in November last year, the Commission strategy on ‘Artificial Intelligence for Europe’ was adopted 5 years ago in 2018.
The strategy recognised that AI is already part of our lives and is not science fiction. That it can make our lives easier in so many ways and is transforming our society as well as our economy. But it also emphasized that AI brings new challenges for the future of work. And raises complex legal and ethical questions that need to be addressed.
That is why the Commission has fleshed out this strategy into a new package of measures that addresses these legal and ethical questions. It has proposed a robust regulatory framework that both encourages innovation and boosts funding while mitigating risk and ensuring safety, security, and transparency.
Let me outline the three key pieces of draft legislation from the EU AI package that confirm how ethical issues remain central to the EU approach on AI. They are all due to be adopted by Council and Parliament by the end of the year.
First, the proposal for an Artificial Intelligence Act focuses on ‘trustworthy AI’ and lays down a risk methodology to define the nature of obligations linked to developing, importing, and using AI in the EU. I will say more on this in a moment as this is a key piece of legislation.
Second, a more recent proposal that complements the AI Act is the Commission’s proposal for an AI Liability Directive, which aims to adapt private law to the needs of the digital economy.
It is intended to protect people and companies by making it easier to claim damages for harm caused by AI. It will apply to providers, operators and users of AI systems, and broadly covers providers and users whose systems are available or operate within the EU. So, if you are a company or association set up in the EU, keep an eye out for this; it could prove helpful.
Third, a new Defective Products Liability Directive will update the EU product liability framework to better reflect the digital economy and will explicitly include AI products within its scope.
Turning back to the EU Artificial Intelligence Act. This piece of legislation is the world’s first comprehensive legal framework for artificial intelligence and is already a blueprint for the whole world.
Our objective with this Act is to make Europe the global hub for trustworthy artificial intelligence by putting in place rules that govern its safe and ethical use. It is the first in the world to adopt a risk-based approach to AI, which means it sets different rules for different risk levels.
The Act is currently at the second stage of debate in the Parliament and the Council. The key elements of this Act are that:
- It classifies the level of risk AI could pose to public health and safety or fundamental rights into four risk tiers: unacceptable, high, limited, and minimal.
- It bans unacceptable uses of AI technologies, such as social scoring and facial recognition.
- It requires that high-risk AI systems need to fulfil strict obligations before they can be put on the market.
- It sets non-compliance penalties of up to EUR 30 million or 6% of global annual turnover, whichever is higher; and
- It creates a European AI Board that will provide oversight and guidance to national authorities.
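The tiered structure and the penalty ceiling described above can be sketched in code. This is an illustration of the proposal’s structure only, not legal advice; the obligation summaries attached to each tier are shorthand.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in the proposed EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

def max_penalty_eur(global_turnover_eur):
    """Penalty ceiling under the proposal: EUR 30 million or 6% of
    global annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * global_turnover_eur)

# For a company with EUR 1 billion global turnover,
# the 6% branch dominates: a ceiling of EUR 60 million.
ceiling = max_penalty_eur(1_000_000_000)
```

The "whichever is higher" rule means the fixed EUR 30 million floor binds for smaller companies, while the percentage binds for large ones.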
We have also set up a high-level expert group on artificial intelligence, which has produced Ethics Guidelines for Trustworthy AI. These guidelines state that trustworthy AI should be lawful, ethical, and robust.
This expert group has formulated 33 recommendations to guide trustworthy AI towards sustainability, growth, competitiveness, and inclusion.
The Commission also published a Communication on fostering the development of AI. This statement of political intention highlights the need for the Commission to accelerate the development of AI:
- By exploring the potential of AI and other emerging technologies.
- By enabling the Commission to be an early adopter of the new risk-based approach even before the other pieces of law come into force, and
- By investing further in developing the skills of Commission staff.
This approach opens opportunities for joint initiatives that the public and private sectors can develop together and may be something you could consider and get involved with too.
- Three ethical principles guiding our work.
So far, I have portrayed the regulatory framework that the European Commission is putting together to manage the ethical dilemmas in the use of AI.
Let me turn now to the translation profession, to the Commission’s Directorate-General for Translation (DGT) and to the ethical principles we have instilled as a response to this new twist in our digital transformation.
DGT is leading the Commission’s work to guide, explore, facilitate, and control the AI developments in a way that both respects and fosters our ethical standards and fundamental values.
As a dynamic and rapidly evolving field, AI demands imagination and caution in equal measure. We can nevertheless already set out three key ethical principles when using AI in translation, based on the general EU approach:
- to put people first (a human-centric approach)
- to be trustworthy and transparent, and
- to ensure data security and data privacy.
- Key ethical principle #1: putting people first.
Our first key ethical principle is to be human-centric and put people first.
Nothing new here. As I said, the EU’s first strategy for Artificial Intelligence, from 2018, also places people at the centre of the development of AI – the human-centred approach. Nice, reassuring words. But what does this mean in practice?
One thing it means is the essential need to continue investing in the training of our staff. Not only so they can make the most out of AI, but also so they can use it wisely, critically assess the output and maintain a healthy awareness of potential pitfalls.
It is vital to train and develop the skillset of our staff to ensure that no one is left behind. That is why, in addition to the current in-house training courses on digital skills, we are looking at tailored AI training for our staff.
Second, we have issued guidelines for staff on how to use generative AI tools responsibly, safely, and ethically. They protect staff by setting out when they can use AI and when they should not:
- Legal texts (law-making) should never be generated by such tools.
- Confidential or personal data should not be used as a prompt to generate material.
- Any documents and especially communication material we produce should always be critically assessed for bias and inaccuracy; and
- Any text we generate should not breach intellectual property or copyright laws.
Third, we ensure that the AI tools we provide to our staff actually help to relieve them of many routine and repetitive tasks, giving our staff more time to focus on tasks where human expertise can bring the biggest value. We encourage feedback, and useful suggestions feed back into the technology in a constant, iterative process of improvement.
Fourth, we share secure resources and guidance with our sub-contractors, including the use of AI technology.
Fifth, we incorporate ethics into translator-training programmes for future translators who will join our industry.
For instance, the European Master’s in Translation (EMT) competence framework is designed to train students not only with a deep understanding of the translation process, but also with the ability to provide a translation service in line with the highest professional and ethical standards.
This includes the ability to critically assess the relevance of new IT resources and their impact on working practices. Demonstrating ‘data literacy’ is a core skill in this respect and a new element of many translator training curricula. At their meeting next month, the EMT universities will discuss the impact of AI on the way training is provided in higher education and how to assess students in the AI era.
They will also share good practices on the use of AI in the classroom. This is part of our broader work to promote new skills, such as increased AI literacy.
This might include greater familiarity with large language models such as ChatGPT, to understand where the underlying data comes from, and whether there are biases in the algorithms, or other flaws.
Critical thinking in translator training has never been more important.
Although language technology tools are more powerful than ever, language students and professionals need to refine critical thinking, healthy scepticism, and an ever-sharper awareness of nuance of meaning and tone.
- Key ethical principle #2: trustworthy and transparent AI
The second key ethical principle is to build trustworthy and transparent AI.
Trust is vital to the Commission’s ability to serve the European public. But it can never be taken for granted. As public servants, it is a privilege for us to be able to work towards building a peaceful and prosperous future. A future founded on our shared fundamental values and ethics.
We are conscious of the trust placed in us to produce language for new laws and policies that affect people’s daily lives. We must maintain public trust and confidence by taking care with the language we use. Including our use of AI.
So trustworthy AI – again, these are nice, reassuring words. But what do they mean in practice? Well, we work daily to maintain the trust of citizens:
- By exercising care and acute judgement – knowing when to use the new technology and when not to.
- By putting in place supervision, feedback, and correction processes, and
- By gaining experience – not just talking about it but learning by doing.
We are already providing AI-based services with human oversight to ensure they are trustworthy. And we recognise that AI may not be the answer to every question, and the right technology for every document or task.
To build a solid basis for making good judgement calls, we created a Commission-wide AI network, the AI@EC Network. The network is a community bringing together users and technical experts. It focuses on promoting and facilitating the practical application of the latest AI technology within the Commission, taking into account safety and ethical considerations.
DGT, and I personally, were assigned a leading role in this Commission-wide network to explore possibilities and concrete use cases of AI. Over 1,300 colleagues from all the European institutions participate in it.
We have already launched our first pilot projects exploiting and building on our broad and extensive experience in neural technologies and machine translation.
In fact, we recently deployed a new service, branded eSummary. It is an AI-powered service that gives an automated multilingual overview of a document’s content, irrespective of the document’s size.
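A common way to make summarisation work irrespective of document size is a map-reduce pattern: split the document into chunks, summarise each chunk, then summarise the concatenated summaries. The sketch below is hypothetical, not eSummary’s actual design, and it substitutes a trivial extractive stub for a real model so the pattern itself is visible.

```python
def stub_summarize(text, max_sentences=2):
    """Placeholder for a real summarisation model: keep the first
    sentences. Only the surrounding pattern matters here."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def summarize_long_document(document, chunk_size=500, summarize=stub_summarize):
    """Map-reduce summarisation: summarise fixed-size chunks, then
    summarise the concatenation of the chunk summaries."""
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    partial = [summarize(chunk) for chunk in chunks]
    return summarize(" ".join(partial))
```

Because each model call only ever sees a chunk or a list of short summaries, the approach scales to documents far larger than any single model context.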
Next in line is a service we have branded eBriefing, which also uses generative AI technology. We are currently testing how well eBriefing generates first drafts of policy briefings, LTT and other types of documents. And we have a number of other initiatives in the pipeline, like:
- detection of semantically similar documents
- evaluation sampling
- terminology extraction, and
- assisted editing.
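Detecting semantically similar documents, one of the pipeline items above, typically rests on comparing vector representations of documents. As a minimal, self-contained illustration (a real system would use sentence embeddings rather than raw word counts), the sketch below computes cosine similarity over bag-of-words term-frequency vectors.

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between bag-of-words term-frequency vectors.
    A production system would compare learned embeddings instead."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

sim = cosine_similarity("the directive on product liability",
                        "the product liability directive")
```

Two documents sharing most of their vocabulary score near 1.0; documents with no words in common score 0.0. Embeddings extend the same idea to paraphrases that share no surface words.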
Throughout the work we do in AI, we focus not on research but on real use cases in our everyday work; where AI has the potential to help our staff to speed up processes and realise additional productivity gains.
Over the last six months, we have also gained access to the Luxembourg high-performance supercomputer “MeluXina” to develop and train our AI-powered models. This is a big step forward because it gives us the computing power to test LLMs with billions of parameters.
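Why billions of parameters demand a supercomputer follows from simple arithmetic: the weights alone occupy parameter-count × bytes-per-parameter of memory, before any activations, gradients, or optimizer state. A back-of-the-envelope sketch:

```python
def model_memory_gb(num_parameters, bytes_per_parameter=2):
    """Approximate memory needed just to hold the model weights
    (2 bytes per parameter for 16-bit floats). Training requires
    several times more for gradients and optimizer state."""
    return num_parameters * bytes_per_parameter / 1e9

# A 7-billion-parameter model in 16-bit precision needs
# roughly 14 GB of memory for its weights alone.
weights_gb = model_memory_gb(7_000_000_000)
```

Models in the tens or hundreds of billions of parameters therefore exceed a single ordinary machine, which is exactly where facilities like MeluXina come in.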
Just like our eTranslation, these new services are and will be available to European public administrations, local and regional authorities, small and medium-sized businesses in the EU, universities, NGOs, and Digital Europe projects.
- Key ethical principle #3: data security and data privacy
The last of our three key ethical principles when using AI in translation is: ensuring data security and data privacy. This principle runs through all our AI work. Here we can be very concrete.
All of our AI tools run in the same secure eTranslation environment – a fully secure environment in the cloud that we own and manage. We only use data that we own or that is open source.
And we do not collect or store any of this data. All requests and related data are deleted after processing. There is no sharing with third parties, and no access to such data by third parties. Our data management policy complies fully with the Data Protection Regulation that applies to all EU institutions and agencies.
We also have strict ethical data governance thanks to the oversight of data protection officers, a steering board, and the European Data Protection Supervisor.
- Conclusion: great power, potential and shared responsibility
Developments in AI technology are constant, ongoing, and fast. As I said, we cannot treat AI as our traditional IT tools and systems; AI is not traditional IT. We need to manage, explore, and exploit this innovative technology because it opens up great potential and vivid opportunities for many industries, including our own, and for the economy as a whole. We need to join forces and work together to make the most of these opportunities, and to share experiences and identify potential through events like the one today.
I am proud that the EU has shown global leadership by drafting a robust regulatory framework to guide AI for the benefit of all of us.
I would like to share with you a passage from the State of the Union (SOTEU) address of our President, Ursula von der Leyen, last Wednesday: “Europe has now become a leader in supercomputing – with 3 of the 5 most powerful supercomputers in the world… This is why I can announce today a new initiative to open up our high-performance computers to AI start-ups to train their models… We need an open dialogue with those that develop and deploy AI… We will work with AI companies, so that they voluntarily commit to the principles of the AI Act. Now we should bring all of this work together towards minimum global standards for safe and ethical use of AI.”
We must never lose sight that our AI-powered services and products need to be controlled by a moral compass and ethical principles.
- We are only at the very beginning of the AI evolution.
We must be honest with ourselves and acknowledge that we simply do not know where AI will take us in two or three, let alone ten, years’ time.
Using AI in translation is and will continue to be a learning process. But it will be much easier and less risky if we keep ethics front and centre. If we remain ethically vigilant and constantly on our guard.
And if we hardwire into our culture the fact that ethics can never ever be subordinated to short-term advantage.
We also recognise the enormous value in working together and learning from each other as our profession evolves once again.
With all the talent I see in the room today, and all the talent that will be celebrated in the language service awards later, we all have so much to learn from each other.
My wish and my approach to our role as a public-sector language service but also as regulators is that, as we use AI to power this next wave of innovation, we always remember to put people at the centre of everything that we do.