Reimagining Responsible AI Through Public Disputes and Consensus Building

Introduction

The rapid development of artificial intelligence (AI) technologies presents significant opportunities to improve lives but also poses serious risks if deployed irresponsibly. These risks can be mitigated by applying the principles of Responsible AI, such as human centricity, explainability, transparency, accountability, safety, privacy, confidentiality, and auditability, to these transformational technology applications.

Effectively governing AI in ways the public can trust requires bridging divides and reaching consensus among key stakeholders who often have very different viewpoints, experience, and disciplinary expertise. The American Bar Association Dispute Resolution Section's Public Disputes and Consensus Building (PD&CB) Committee and its members can play a vital role in convening conversations to find common ground on AI policies. The following issues must be addressed in this vital consensus-building exercise.

Managing Risks Versus Enabling Innovation

A central tension in AI regulation is how to empower beneficial uses while curtailing harmful ones. Approaches like the EU's proposed AI Act aim to prohibit certain "unacceptable risk" applications, such as social scoring systems, while enabling most other uses. Tech companies worry that too many restrictions will stifle progress and competitiveness, but public interest groups counter that public safety should take priority. Facilitated, constructive dialogue can help stakeholders determine where to draw lines and whether "regulatory sandboxes" could allow controlled testing of higher-risk AI.

Transparency, Explainability, and Accountability

Many believe critical AI systems should be transparent and explainable to build public trust. But some developers caution that mandated explainability risks exposing valuable IP and undermining predictive accuracy. Potential compromises include limiting explainability requirements to higher-stakes uses or requiring only explanations that make sense to the impacted end user. Responsible AI includes clearly documenting and communicating what data is used and how systems make decisions, which enhances explainability without compromising IP. Facilitating these critical conversations will require impartial third-party expertise and involvement.

Privacy, Data Rights and Usage

Debates around data privacy, ownership, and usage are central to AI regulation. People want their data protected but also want AI to benefit fields like healthcare. Developing shared data standards, improved anonymization techniques, and accountable data stewardship models could enable data access for social good while safeguarding privacy rights. Systems that grant individuals more control over, and compensation for, use of their data merit exploration.

Respecting data privacy rights and using data only for agreed-upon purposes are core to responsible AI and help ensure ethical data usage.

Liability and Accountability

Attributing liability when AI systems cause harm can be legally and ethically complex. Competing perspectives exist on whether liability should fall more on developers, deployers, or end users of AI, or whether new liability models are needed. Dispute resolution mechanisms can also help the parties involved reach mutually agreeable solutions. Adhering to responsible AI development practices could give developers some protection from legal liability by demonstrating diligence in avoiding preventable harms.

Bias, Discrimination and Fairness

Public disputes frequently erupt over real or perceived biases in AI algorithms that lead to unfair or unethical outcomes. While most agree that mitigating bias is crucial, there is less consensus on how to define, measure, and regulate fairness in AI systems. Combining input from computer and social scientists with that of those representing impacted groups can help generate thoughtful, culturally aware approaches to developing and auditing AI systems. Debates will persist, however, over how to weigh competing notions of fairness. Applying Responsible AI principles to mitigate bias can be achieved through facilitated sessions with PD&CB professionals.

Labor, Economic and Societal Impacts

The transformative effects of AI and automation on work and jobs also stir intense debate. Employee and labor groups fear displacement, while businesses highlight opportunities for growth, new roles, and reskilling. Policy proposals run the gamut from universal basic income to new workers' rights. Further analysis of where automation will likely hit hardest, along with proposals to support both business dynamism and worker protections, can help identify balanced solutions. Responsible rollout of AI means assessing and mitigating impacts on workers, and retraining programs demonstrate an ethical approach to this real-world dilemma. PD&CB professionals can support the process.

International Governance

Since AI development crosses national borders, many advocate for multilateral agreements on AI principles and regulations. But achieving global consensus can prove challenging. Exploring common ground between widely adopted guidelines like the OECD Principles on AI and civil society declarations like the Toronto Declaration can yield shared foundations. Advisory boards with diverse international representation may also strengthen governance frameworks that balance local values and contexts with the need for global standards around emerging technologies. Global consensus on responsible AI principles could align smoothly with local regulations, while shared values enable cross-border cooperation.

Ethics Training and Professionalization

To prioritize ethical and responsible AI development, many recommend greater training, educational requirements, and certification processes for AI professionals. However, translating broad ethical principles into operational, day-to-day practices remains difficult. Partnerships between ethicists, engineers, computer scientists, and others to co-develop practical methods, case studies, and evaluation criteria for trustworthy AI design should accelerate progress. Emerging roles like AI Ethics Officers and AI Dispute Resolution Officers can further embed ethical thinking within organizations and help resolve AI disputes before they metastasize.

Responsible AI principles can directly guide formal ethics education programs that raise awareness.

Building Inclusive Public Participation and Engagement

For AI policies to be trusted, the public needs more inclusive ways to inform their development. Participatory exercises like surveys, community meetings, and interactive web platforms must account for barriers to engagement such as language, digital access, and time constraints. Policymakers should also be transparent about how public input is incorporated so people feel heard. Investing in information resources like interactive AI simulators and data visualizations can also enrich discussions by making AI less abstract.

Inclusive public discussions and feedback inform the creation of responsible AI principles that represent shared values. PD&CB practitioners can lead these discussions.

Specific Contributions to be Made by PD&CB Professionals

There are many roles for PD&CB practitioners to play in the search for Responsible AI standards and regulatory action. They include:

  • Developing consensus-based standards and best practices for responsible AI development and use. Committee members could facilitate bringing together diverse stakeholders such as tech companies, civil rights groups, and academics to establish shared ethical guidelines.

  • Public engagement and education around AI. The PD&CB Committee could help foster informed public debates, surveys, and consensus-building exercises to gauge public opinion and values regarding AI policies.

  • AI dispute resolution systems. As AI is used more in areas like commerce, insurance and criminal justice, there will be a need for oversight and standards around use of AI in dispute resolution processes.

  • Liability frameworks and arbitration. The committee could study how liability should be attributed when AI systems cause harm and what dispute resolution processes are appropriate when liability is contested.

  • AI biases and disparate impacts. There is great concern about embedded biases and discriminatory impacts from AI. The committee could recommend ways to detect, measure and mitigate algorithmic bias.

  • Labor, economic, and social displacement. The committee could propose and gain feedback on policy options to support workers and communities negatively impacted by AI and automation.

  • International governance. Since AI development crosses borders, the committee could explore and build consensus around global norms, treaties and regulations governing AI.

  • Ethics training and certification. As ethical AI design grows in importance, the committee could devise standards, training programs and certification systems for ethical AI development practices.

Conclusion

The path forward on AI governance remains challenging but not insurmountable. Focusing on the issues at hand rather than ideological differences can help identify balanced, context-specific policies. Venues supporting thoughtful multistakeholder dialogue and public participation will be integral to shaping AI regulations that uphold human values while allowing AI innovation to benefit all of society. Achieving this requires taking time to understand differing views, priorities, and concerns. If done sincerely, workable solutions can emerge.

In summary, the cross-cutting principles and practices of responsible AI, including fairness, accountability, transparency, safety, privacy, capability, and human control, help address many of the key issues and tensions associated with AI governance. They provide guardrails for minimizing harm. Grounding regulations and policies in these principles can make them more balanced, ethical, and publicly acceptable. Responsible AI provides a strong foundation for the challenging work of building consensus around the beneficial governance of AI.

The diverse perspectives on AI regulation make consensus-building and dispute resolution skills invaluable. But productive conversations require gathering the right voices around the table. Leveraging its neutral position and reputation, an organization like the PD&CB Committee could convene policy labs or advisory boards with tech firms, government agencies, advocacy groups, academia, and affected industries. Skilled moderators can foster empathetic listening and the exploration of creative compromises. Sustained engagement on building shared AI principles could then form a basis for tackling specific policy areas.


Larry Bridgesmith, J.D.

Executive Director, Guardrail Technologies, and Associate Professor, Vanderbilt Law School

Larry provides consulting and training on emerging Internet-era technologies such as blockchain, smart contracts, artificial intelligence, cryptocurrency, and interoperable functionality.

LinkedIn Profile

https://www.linkedin.com/in/larrybridgesmith/