Horizon has recently submitted separate written responses to the European Commission’s consultations on its Data Strategy and AI White Paper. As key parts of one of the EU’s most ambitious priority areas for 2019-2024, the outcomes of both consultations will set the tone for the bloc’s digital policies during what will be a critical period for the development of data-driven technologies in the coming decade, if not decades.
As highlighted in both the Strategy and the White Paper, realising the immense economic and social potential of data and AI will depend on the right set of rules: a regulatory environment that promotes technological advances while building public trust. Striking a perfect balance between these two policy goals is not always straightforward, and it certainly requires a combination of both softer and tougher regulatory interventions. One common pitfall in policymaking is over-reliance on market-based solutions to address market failures, which may simply create more failures. As decades of experience in regulating privacy and competition in the digital world have shown, industrial self- or co-regulation does not always prove to be the most effective strategy.
It is therefore crucial that the EU’s approaches to data and AI avoid relying entirely on measures designed to facilitate the operation of a free market. When it comes to the use of personal data, for example, the EU’s Data Strategy clearly favours data interoperability and portability. However, while easy access to and transfer of user data may promote competition, any increase in consumer welfare will depend on respect for the principle of data minimisation, which many online services do not strictly observe at the moment. As for valuable non-personal data, such as energy efficiency data for smart buildings, a different market failure may result in the under-supply of such data. The market-based solution envisaged in the Data Strategy, namely giving full control to data producers, will not fully address the lack of incentives to generate and share such data. In both cases, the aim of involving and motivating individuals by reducing transaction costs may well fall short.
Some of these considerations also apply to the regulation of AI. The White Paper, for example, proposes a voluntary labelling scheme for AI applications deemed low-risk, with a view to enabling users to recognise compliant products and services. Useful as this scheme might be in providing market information to consumers, it may also create a false sense of trust by disengaging consumers from deliberating on the technology’s broader and longer-term impacts. Without robust baseline safeguards, “empowering” consumers to reward desirable commercial practices through market mechanisms could end up shifting enforcement burdens onto consumers while failing to achieve the policy goal. Consumer decisions cannot replace longitudinal impact assessment by policymakers, and, more importantly, “informed” choices made by individuals are not necessarily aligned with the collective or social good, especially when AI may have profound yet unknown implications in such areas as public health, climate change and political debate. In this regard, innovative ways of harnessing both market- and non-market-based measures will be essential for effective regulation.
Our responses to the European Commission’s initiatives have underlined the need for a more accurate understanding of the precise challenges we face and a more open mind towards the full range of regulatory options. As Europe stands at a digital crossroads, preparing to compete at full speed in the global digital economy, finding the right policy pathway forward, though difficult, will be of paramount importance.
Written by Dr Jiahong Chen, Research Fellow in IT Law, Horizon