Throughout this year, Artificial Intelligence (AI) has dominated the science policy agenda in Brussels and far beyond. Numerous studies, committees and public consultations [see footnote] have culminated in the conclusion that we are, in effect, part of an ongoing “social experiment”. Whether these technologies prove responsible and accountable will therefore depend entirely on how they are researched and designed, regulated and deployed.
Standard approaches to risk assessment may not fully capture important ethical implications, many of which will not be quantifiable and some of which may be entirely unobservable. Research and innovation funders and promoters therefore need to make responsibility an explicit expectation in AI programs and projects. How can we ensure that AI research and innovation take democratic values sufficiently into account? How are citizens protected from impacts of AI of which they may not even be aware? What does this mean for applying principles of precaution? To what extent can we trust the research communities and industry to self-regulate when it comes to creating level playing fields?
Shoshana Zuboff, who coined the term ‘surveillance capitalism’, has warned against “marching naked into the digital century without the charters of rights, legal frameworks, regulatory paradigms, and institutions necessary to ensure a digital future that is compatible with democracy”.
How to ‘get dressed’ for the policy challenges described above will be discussed with key actors in the field at our two-hour online symposium, which is open to all upon registration.
The official program and more information can be found here.