AI Governance Comparison Hub
13 papers - avg viability 3.1
Recent research in AI governance increasingly focuses on frameworks that can keep pace with the rapid evolution of AI technologies. One significant area is the automation of AI research and development, which raises questions about how to balance capability advancement with safety oversight; proposed metrics would let stakeholders track the implications of AI R&D automation. Concurrently, the introduction of the Sentience Readiness Index highlights the need for national preparedness for potential AI sentience, revealing widespread gaps in institutional and cultural readiness. The concept of Institutional AI is also gaining traction, proposing governance structures to mitigate risks from multi-agent AI systems. As AI permeates more sectors, these developments underscore the urgency of robust legal and regulatory infrastructures that not only set rules but adapt to the complexities of AI decision-making, keeping human oversight integral in an increasingly automated landscape.
Top Papers
- Design Behaviour Codes (DBCs): A Taxonomy-Driven Layered Governance Benchmark for Large Language Models (8.0)
A governance layer to reduce risk exposure in large language models, enhancing compliance and safety.
- Measuring AI R&D Automation (4.0)
Develops metrics to track AI R&D automation and its effects, informing decision makers and supporting safety measures.
- AI Narrative Breakdown: A Critical Assessment of Power and Promise (3.0)
A critical assessment of AI narratives and their societal implications.
- Making Models Unmergeable via Scaling-Sensitive Loss Landscape (3.0)
scTrap2 offers an architecture-agnostic framework for preventing unauthorized model merging.
- Institutional AI: Governing LLM Collusion in Multi-Agent Cournot Markets via Public Governance Graphs (3.0)
Develops a system for governing AI collusion in multi-agent environments through public governance graphs.
- Explicit Cognitive Allocation: A Principle for Governed and Auditable Inference in Large Language Models (3.0)
A framework for structured AI inference that aims to improve traceability and epistemic control in AI-assisted reasoning.
- From Reflection to Repair: A Scoping Review of Dataset Documentation Tools (3.0)
A scoping review of dataset documentation tools and practices, identifying directions for improvement.
- The Sentience Readiness Index: Measuring National Preparedness for the Possibility of Artificial Sentience (3.0)
Develops a Sentience Readiness Index to measure national preparedness for potential AI sentience.
- Legal Infrastructure for Transformative AI Governance (2.0)
Creating legal frameworks for AI governance and regulatory innovation.
- Delegation Without Living Governance (2.0)
Explores governance frameworks to maintain human relevance in AI decision-making.