The Facebook and Cambridge Analytica scandal that unfolded in 2018 exposed how poorly governments understand and regulate complex AI technologies. Public hearings in the United States revealed a significant lack of technical knowledge among policymakers, raising concerns about their ability to craft effective regulations.
Effective policymaking requires the ability to anticipate AI progress accurately, so that policies remain relevant over time. This predictive ability, however, is currently hindered by the disparity in AI knowledge between big tech companies and government regulators. The knowledge gap could result in suboptimal regulatory outcomes, as regulators grapple with the complexities of AI technologies.
To bridge the knowledge gap between tech companies and AI regulators, several measures can be taken:
Diversify recruitment strategies: Regulatory bodies should broaden their recruitment to include more individuals with backgrounds in AI and in computer science more generally. This would build a talent pool that combines policy and technical expertise.
Competitive compensation: Governments should explore ways to make public sector roles more competitive in terms of compensation, particularly for STEM-degree holders. This could help attract and retain top talent. A recent example in the US is the Office of Personnel Management’s approval of a Special Salary Rate (SSR) for federal employees working in specific IT and cybersecurity positions.
Specific training for civil servants: All AI regulators should complete a dedicated course on AI, culminating in an AI certificate for policymakers. Existing organisations like the Apolitical Foundation already run upskilling programs for policymakers on a diverse range of topics.
Prestigious fellowships and traineeships: These would give technical experts opportunities to gain policy skills and experience. Such programs would elevate the prestige of public sector work and facilitate the transfer of technical knowledge into regulatory bodies. Examples include TechCongress and the Horizon Fellowship in the US and an e-traineeship for IT-specific civil servants in the Netherlands.
Enhance collaboration: Encourage collaboration among the public sector, the private sector, and academia to share knowledge and expertise. This would better equip regulatory bodies to understand and address AI-related challenges. Including civil society and think tank actors with a good understanding of AI in public hearings would also reduce reliance on information from big tech companies alone.
Bridging the talent gap between tech companies and AI regulators is crucial for crafting effective AI regulations. By addressing information asymmetry, refining hiring practices, and implementing the solutions proposed above, governments can help ensure that AI technologies are developed and deployed responsibly, benefiting society as a whole. The stakes are high, and investing in the right talent is essential to navigating the complexities of AI governance for a safer and more equitable future.