Protection of AI/ML end users from ‘bad actors’: Challenges and policy responses
Open Access
Conference Proceedings Article
Authors: Christian Stiefmueller, Christine Leitner, Stephen Kwan
Abstract: There is widespread agreement globally on the potentially huge benefits and risks associated with the adoption of AI/ML. In a previous article, we explored regulatory frameworks for AI/ML in major global jurisdictions through different lenses. We were interested in comparing how major jurisdictions, such as the US, China, and the EU, approached the challenge of reconciling the potentially disruptive impact of adopting this new technology with the responsibility of policymakers to take into account the interests, needs, and rights of their citizens. At present, there is no obvious global consensus on where the balance between the two should be struck. Most recently, an international declaration on the inclusive and sustainable use of AI was signed in Paris by sixty countries, including all EU member states and China, but not the US and the UK. In the absence of a political consensus among major jurisdictions, and faced with often divergent regulatory approaches at the national and regional levels, national and international standard-setting organisations, such as ANSI’s federation of Standard Development Organisations (SDOs), CEN/CENELEC/ETSI, and ISO/IEC/ITU, are bearing the burden of the increasingly important task of establishing common AI standards at the technical level. Some of these organisations have created dedicated work streams that expressly seek to incorporate the perspective – and voice – of the citizen into the technology design, development, and adoption process. This article examines the most important of these initiatives and attempts to assess their possible contribution, as well as the implicit limitations of their mandate and capacities.
Keywords: Artificial Intelligence, Machine Learning, Digital Governance, Regulatory Model, Beneficial AI, Bad Actors
DOI: 10.54941/ahfe1006416