Artificial Intelligence (“AI”) is not a new phenomenon; it has been around for at least 50 years as possibly the grandest of all challenges for computer science. Recent developments have led to AI systems delivering remarkable progress and value in different areas, from robotics in manufacturing and supply chains, to social networks and e-commerce, to systems that underpin society such as health diagnostics. As with any technology, there is an initial period of hype, with excessive expectations, followed by a period of reality and measurable results; we are at the beginning of such a period right now. Our comments below reflect our initial thinking on these issues and concern machine learning systems that are trained with data sets and algorithms, not so-called Artificial General Intelligence.
As with similar technological developments in the past, it is important that the industry is left free to develop, and that the technology evolves over time according to the needs of businesses and consumers. Intervening at such an early stage would have a detrimental impact on the evolution of this technology and should therefore be avoided. Nonetheless, there is a need for a strong ethical approach to how AI should be applied, which should be properly set out at EU level (if not at global level).
For the development of AI technology, we consider it essential that the algorithms used for AI purposes are transparent (in the sense of Recital 71 of the General Data Protection Regulation (“GDPR”)), which means that where there is unintended bias, that bias can be identified and addressed. Moreover, it is appropriate that these systems are subjected to extensive testing on the basis of appropriate data sets, as such systems need to be “trained” to reach equivalence with human decision making.
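As a purely illustrative sketch of how testing on appropriate data sets can surface unintended bias in practice, the short Python example below (all data, names and values are hypothetical and do not describe any particular system) compares a model's positive-prediction rates across a protected group, a simple form of the commonly used “demographic parity” check.

```python
import numpy as np

def demographic_parity_gap(predictions, group_labels):
    """Difference in positive-prediction rates between two groups.

    predictions  : array of 0/1 model outputs
    group_labels : array of 0/1 membership in a protected group
    A large gap suggests unintended bias that merits further investigation.
    """
    predictions = np.asarray(predictions)
    group_labels = np.asarray(group_labels)
    rate_a = predictions[group_labels == 0].mean()
    rate_b = predictions[group_labels == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative example with synthetic outputs from a hypothetical model.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

Such checks are only one element of extensive testing, but they show how transparency about a system's outputs and training data makes bias measurable rather than anecdotal.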
With regard to liability, it is important, among other aspects, to consider the complex supply chain in AI services. The software algorithms and the data sets used to train the software are important elements, but other aspects also need to be considered, such as the purpose of the AI application and the sector-specific norms that are in place. In addition, algorithms should be inspectable at a technical level, so that the reasons for malfunctions can be established.
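To illustrate what “inspection at a technical level” can mean in practice, the following minimal sketch (a simplified version of the widely used permutation importance technique; the model, data and function names are hypothetical) estimates how strongly a model depends on each of its inputs by shuffling them one at a time. A comparable analysis can help establish whether a malfunction traces back to the algorithm, the training data, or a particular input in the supply chain.

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric_fn, n_repeats=10, seed=0):
    """Estimate how much each input feature contributes to model error.

    model_fn  : callable mapping an (n, d) array of inputs to predictions
    metric_fn : callable returning an error score (lower is better)
    Shuffling a feature and observing the increase in error indicates how
    strongly the model relies on it, which helps trace a malfunction to
    specific inputs (e.g. a corrupted data field).
    """
    rng = np.random.default_rng(seed)
    baseline = metric_fn(y, model_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])          # break the link to feature j
            scores.append(metric_fn(y, model_fn(X_perm)))
        importances[j] = np.mean(scores) - baseline
    return importances

# Illustrative use with a hypothetical linear model and synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=200)
model = lambda inputs: 2.0 * inputs[:, 0] + 0.1 * inputs[:, 2]
mse = lambda truth, pred: float(np.mean((truth - pred) ** 2))
print(permutation_importance(model, X, y, mse))
```

This is a sketch rather than a prescription: the appropriate inspection method will depend on the AI application, its purpose and the applicable sector-specific norms.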
Finally, we would also like to refer you to the following academic papers, which do not necessarily reflect our position on the relevant matters, but which we consider useful and interesting background in the context of your ongoing fact-finding study on AI: