Democratizing AI: A Vision for Open, Inclusive, and Transformative Innovation

Overarching Principles:

  1. The inherent air gap between AI-generated outputs and human-executed actions must be recognized as the fundamental kill switch and preserved to ensure human control over AI systems.

  2. AI development and governance must prioritize open access to knowledge, tools, and resources to democratize innovation and prevent the entrenchment of narrow private interests.

  3. AI regulation should foster diversity, inclusivity, and broad participation in the development and oversight of AI systems to promote equitable benefits and mitigate risks.

Proposed Modifications:

  1. Add a new section (e.g., 22603.5) explicitly affirming the principle of the human-AI air gap as the essential kill switch. Suggested language: "No provision of this chapter shall be construed to erode the separation between AI-generated outputs and human-executed actions, or to grant AI systems direct control over high-risk decisions or physical systems."

  2. Amend Section 22602 to include definitions of "open-source AI" and "democratized AI innovation." Suggested language: "'Open-source AI' refers to AI systems whose source code, training data, and model architectures are made freely available for public scrutiny, collaboration, and reuse. 'Democratized AI innovation' refers to the development of AI systems through transparent, accessible, and inclusive practices that enable broad participation and benefit-sharing."

  3. Modify Section 22603 to create a safe harbor for open-source AI development. Suggested language: "Notwithstanding any other provision of this chapter, open-source AI projects that adhere to responsible and transparent development practices, as determined by the Frontier Model Division, shall be exempt from the requirements of Sections 22603(a)-(e)."

  4. Add a new subsection (e.g., 22603(g)) requiring covered AI developers to make their models, training data, and safety documentation openly available, with narrow exceptions. Suggested language: "Covered developers shall make their AI models, training data, and safety documentation openly available for public scrutiny and collaboration, subject only to narrowly tailored exceptions for legitimate trade secrets or individual privacy concerns, as determined by the Frontier Model Division."

  5. Amend Section 22604 to require operators of AI computing clusters to reserve a portion of their resources for open-source, academic, and non-profit AI projects. Suggested language: "Operators of AI computing clusters shall reserve at least [X%] of their computing resources for open-source, academic, and non-profit AI projects at subsidized or no cost. The Frontier Model Division shall determine the appropriate percentage and establish criteria for eligibility."

  6. Modify Section 22605 to mandate open APIs and interoperability standards. Suggested language: "AI developers and computing cluster operators shall provide open APIs, interoperability standards, and other technical interfaces, as determined by the Frontier Model Division, to facilitate third-party innovation, auditing, and collaboration around their AI systems."

  7. Add a new section (e.g., 22610) establishing a "Public AI Innovation Fund." Suggested language: "The Public AI Innovation Fund is hereby established to provide grants, scholarships, and other financial support for open-source AI projects, academic research, and community-driven AI initiatives, with a focus on underrepresented groups and diverse perspectives. The Fund shall be administered by the Frontier Model Division."

  8. Amend Section 11547.6 to require the Frontier Model Division to prioritize open, democratized AI innovation. Suggested language: "The Frontier Model Division shall prioritize the promotion of open, democratized AI innovation in all its activities, including its guidance, best practices, and advisory committee memberships."

  9. Modify Section 11547.6(c)(7) to empower the open-source AI advisory committee. Suggested language: "The open-source AI advisory committee shall play a leading role in shaping the Division's policies and recommendations, with a mandate to advocate for the interests of independent developers, researchers, and the broader public."

  10. Add a new subsection (e.g., 11547.7(d)) requiring the CalCompute initiative to support open-source, academic, and community-driven AI projects. Suggested language: "The CalCompute initiative shall dedicate a significant portion of its resources and expertise, as determined by the Frontier Model Division, to supporting open-source, academic, and community-driven AI projects, with a focus on enabling broad participation and diverse perspectives."

  11. Include a new section (e.g., 22611) encouraging state agencies to prioritize open-source AI solutions and collaborative innovation models. Suggested language: "All state agencies and public entities engaging with AI are encouraged to prioritize open-source solutions, public-private partnerships, and collaborative innovation models over proprietary or closed systems, to the extent practicable."

  12. Add a severability clause as Section 22612. Suggested language: "If any provision of this chapter or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application."

1. Introduction

The rapid advancement of artificial intelligence (AI) technologies presents both unprecedented opportunities and complex challenges for our society. AI has the potential to drive transformative innovation across a wide range of domains, from healthcare and education to transportation and environmental sustainability (Bughin et al., 2018; Chui et al., 2018). However, the development and deployment of AI systems also raise significant ethical, social, and governance challenges, including issues of fairness, transparency, accountability, and safety (IEEE, 2019; OECD, 2019).

As AI systems become increasingly sophisticated and ubiquitous, it is essential that we develop governance frameworks that not only mitigate potential risks but also actively promote the positive, transformative potential of AI. This imperative has been recognized by a growing number of international organizations, governments, and civil society groups. For example, the United Nations Secretary-General has called for a "human-centered approach" to AI governance that prioritizes "inclusivity, transparency, and accountability" (Guterres, 2018). Similarly, the Organisation for Economic Co-operation and Development (OECD) has adopted a set of principles for responsible AI that emphasize the importance of promoting inclusive growth, sustainable development, and well-being (OECD, 2019).

This white paper argues that the key to achieving these goals lies in democratizing AI innovation - creating an ecosystem that is open, inclusive, and collaborative, and that empowers a diverse range of stakeholders to participate in shaping the future of this powerful technology. This argument builds on a growing body of research and advocacy that highlights the importance of open and inclusive approaches to AI development and governance (Benkler, 2019; Fjeld et al., 2020; Whittaker et al., 2018).

For example, a recent report by the AI Now Institute at New York University argues that "the development of AI should be accompanied by the democratization of AI" and calls for "a more inclusive and participatory approach to AI governance" (Whittaker et al., 2018, p. 30). Similarly, a study by the Berkman Klein Center for Internet & Society at Harvard University emphasizes the need for "a more open and collaborative approach to AI development" that "prioritizes public participation, diversity, and inclusion" (Fjeld et al., 2020, p. 5).

The democratization of AI innovation is not only a moral imperative but also a practical necessity. As AI systems become more powerful and pervasive, their impacts will be felt across all segments of society. Ensuring that these impacts are positive and equitable will require the active participation and input of a wide range of stakeholders, including not only AI researchers and developers but also policymakers, civil society organizations, affected communities, and the general public (Kortz & Doshi-Velez, 2019; Musikanski et al., 2020).

Moreover, democratizing AI innovation can help to unlock the full transformative potential of this technology. By enabling a more diverse range of perspectives and experiences to shape the development and deployment of AI systems, we can foster a more vibrant, creative, and resilient innovation ecosystem (Cath et al., 2018; Dutton, 2018). This, in turn, can help to ensure that the benefits of AI are more widely shared and that the technology is developed in ways that align with a broad range of societal values and priorities.

In the following sections, this white paper will outline a set of overarching principles and specific policy proposals for democratizing AI innovation, with a focus on the proposed modifications to California's SB 1047. These proposals are grounded in a growing body of research and best practices from around the world and are intended to serve as a model for other jurisdictions seeking to promote open, inclusive, and transformative AI governance.

2. The Need for a New Approach

Current approaches to AI governance often focus narrowly on risk mitigation, such as preventing bias, ensuring privacy, and maintaining security. While these are indeed critical considerations, they fail to adequately address the broader societal implications of AI, including its potential to exacerbate existing inequalities, concentrate power in the hands of a few, and fundamentally reshape the nature of work and human interaction (Crawford et al., 2019; Zuboff, 2019).

Moreover, many existing AI governance frameworks are reactive rather than proactive, seeking to address problems only after they have emerged rather than steering the development of AI systems towards desired outcomes from the outset (Calo, 2017). This approach is insufficient given the rapid pace of AI development and the potentially irreversible nature of its impacts on society (Bostrom, 2017).

A growing body of research and advocacy argues that a more comprehensive and proactive approach to AI governance is needed - one that recognizes the inherent potential of AI to drive positive change and that seeks to actively promote the development and deployment of AI systems in ways that benefit all of humanity (Floridi et al., 2018; Jobin et al., 2019; Rahwan et al., 2019).

For example, the "AI4People" framework developed by Floridi and colleagues emphasizes the need for "a human-centric approach to AI" that "empowers, benefits and protects humans" (Floridi et al., 2018, p. 694). Similarly, the "Montreal Declaration for Responsible AI" calls for AI systems to be developed in ways that "promote the well-being of all sentient creatures" and "serve the common good" (Montreal Declaration, 2018).

This shift towards a more proactive and human-centric approach to AI governance is also reflected in recent policy initiatives around the world. For example, the European Union's "Ethics Guidelines for Trustworthy AI" emphasize the need for AI systems to be "lawful, ethical, and robust" and to "empower human beings, allowing them to make informed decisions and fostering their fundamental rights" (European Commission, 2019, p. 2). Similarly, Canada's "Directive on Automated Decision-Making" requires government agencies to ensure that AI systems are developed and deployed in ways that are "consistent with core administrative law principles such as transparency, accountability, legality, and procedural fairness" (Government of Canada, 2019).

However, while these initiatives represent important steps towards more comprehensive and proactive AI governance, they often lack specific mechanisms for ensuring that their principles are translated into practice (Mittelstadt, 2019; Reisman et al., 2018). Moreover, they frequently fail to adequately address the need for more inclusive and participatory approaches to AI governance that empower a wider range of stakeholders to shape the development and deployment of AI systems (Kuhlmann et al., 2019; Stilgoe, 2018).

This white paper contends that a truly comprehensive and proactive approach to AI governance must be grounded in a set of clear, overarching principles that prioritize openness, inclusivity, and the preservation of human agency and control over AI systems. These principles must be operationalized through specific policy mechanisms that enable their practical implementation and enforcement. Moreover, they must be developed through inclusive and participatory processes that engage a wide range of stakeholders, including not only AI researchers and developers but also policymakers, civil society organizations, affected communities, and the general public (Acemoglu, 2021; Engstrom et al., 2020).

The following sections outline a set of overarching principles and specific policy proposals for realizing this vision of open, inclusive, and transformative AI governance, with a focus on the proposed modifications to California's SB 1047.

3. Overarching Principles

The proposed modifications to SB 1047 are guided by three overarching principles that should inform all aspects of AI governance: the preservation of human agency and control, the promotion of openness and inclusivity, and the fostering of diversity and broad participation.

3.1 The Human-AI Air Gap

The first principle is the recognition and preservation of the inherent "air gap" between AI-generated outputs and human-executed actions. This air gap serves as the fundamental "kill switch" that ensures human control over AI systems and prevents them from directly causing harm (Hadfield-Menell & Hadfield, 2019).

As AI systems become more sophisticated and autonomous, there is a growing risk that they may be granted direct control over high-stakes decisions or physical systems, potentially leading to unintended and irreversible consequences (Scharre, 2018). Preserving the human-AI air gap is essential to mitigating this risk and ensuring that humans remain the ultimate decision-makers and moral agents in all contexts (Bryson, 2018).

This principle has important implications for AI governance. It suggests that AI systems should be designed and deployed in ways that maintain a clear separation between their outputs and the actions taken by human users, and that prohibit them from directly executing high-stakes decisions or controlling physical systems without human oversight and intervention (Crootof, 2019).

3.2 Openness and Inclusivity

The second principle is the promotion of openness and inclusivity in AI development and governance. This principle recognizes that the benefits and risks of AI are not evenly distributed across society, and that ensuring equitable outcomes requires the active participation and empowerment of a diverse range of stakeholders (Young et al., 2019).

Openness in AI development refers to the use of transparent, accessible, and collaborative practices that enable public scrutiny, accountability, and participation (Whittlestone et al., 2019). This includes the use of open-source software, open data, and open scientific practices, as well as the promotion of knowledge sharing and technology transfer (Benkler, 2019).

Inclusivity in AI governance refers to the active engagement and empowerment of marginalized and underrepresented groups in the development and oversight of AI systems (West et al., 2019). This includes the participation of diverse stakeholders in policy-making processes, the use of participatory and community-driven approaches to AI development, and the promotion of diversity and inclusion in the AI workforce (Buolamwini & Gebru, 2018).

3.3 Diversity and Broad Participation

The third principle is the fostering of diversity and broad participation in AI development and governance. This principle recognizes that the development and deployment of AI systems should be guided by a wide range of perspectives, values, and interests, and that narrowly constituted groups should not be allowed to dominate the process (Pasquale, 2020).

Diversity in AI development refers to the inclusion of individuals from different demographic backgrounds, disciplines, and sectors in the creation and testing of AI systems (West & Allen, 2020). This includes not only diversity in terms of race, gender, and ethnicity but also diversity in terms of age, disability status, socioeconomic background, and other factors.

Broad participation in AI governance refers to the involvement of a wide range of stakeholders in the development and oversight of AI policies and regulations (Wallach & Marchant, 2019). This includes not only AI researchers and industry representatives but also policymakers, civil society organizations, labor unions, consumer advocates, and members of the general public.

Taken together, these three overarching principles provide a foundation for the development of more specific policy proposals aimed at democratizing AI innovation and ensuring that the benefits of this transformative technology are broadly shared. The following section outlines a set of such proposals, with a focus on the specific modifications to SB 1047 that could help to operationalize these principles in practice.

4. Key Proposals

To operationalize the overarching principles outlined above, we propose a set of specific modifications to SB 1047. These modifications aim to create a legal framework that promotes open, inclusive, and participatory AI development while ensuring robust safeguards against potential risks and harms.

4.1 Affirming the Human-AI Air Gap

To enshrine the principle of the human-AI air gap, we propose adding a new section to SB 1047 that explicitly prohibits the deployment of AI systems in ways that would erode the separation between AI-generated outputs and human-executed actions. This section would require AI systems to be designed and deployed in ways that maintain human control and oversight, particularly in high-stakes domains such as criminal justice, healthcare, and national security (Crootof, 2019; Hadfield-Menell & Hadfield, 2019).

Specifically, this new section could require AI systems to be designed with clear "human-in-the-loop" safeguards that prevent them from directly executing high-stakes decisions or actions without human review and approval. It could also mandate the use of "explainable AI" techniques that make the reasoning and decision-making processes of AI systems transparent and understandable to human users (Wachter et al., 2017).
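
To make this safeguard concrete, the following sketch illustrates one possible human-in-the-loop gating pattern. It is illustrative only: the class names, risk categories, and review interface are our own assumptions, not language from SB 1047 or a description of any existing system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class RiskLevel(Enum):
    LOW = auto()
    HIGH = auto()  # e.g., criminal justice, healthcare, national security


@dataclass
class ModelOutput:
    """What the AI system may produce: a recommendation, never an action."""
    recommendation: str
    rationale: str  # explanation surfaced to the human reviewer
    risk_level: RiskLevel


def execute(recommendation: str) -> None:
    """The only code path that acts on the world; the model never calls it."""
    print(f"Executing: {recommendation}")


def gated_execute(output: ModelOutput,
                  human_review: Callable[[ModelOutput], bool]) -> bool:
    """Enforce the air gap: no high-risk output is executed without
    explicit human approval. Returns True if the action was taken."""
    if output.risk_level is RiskLevel.HIGH and not human_review(output):
        return False  # held for review: the "kill switch" is simply not acting
    execute(output.recommendation)
    return True


if __name__ == "__main__":
    out = ModelOutput("deny parole", "risk score 0.92", RiskLevel.HIGH)
    # The approval decision comes from a human, never from the model itself.
    taken = gated_execute(out, human_review=lambda o: False)
    print("Action taken?", taken)  # Action taken? False
```

The essential property here is structural: the model can only produce a ModelOutput describing a recommendation, while the sole code path that acts on the world sits behind a gate that defers to human review for high-stakes outputs.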

4.2 Promoting Open-Source AI Development

To promote openness and transparency in AI development, we propose modifying Section 22603 of SB 1047 to create a safe harbor for open-source AI development. This safe harbor would exempt open-source AI projects from some of the bill's more onerous compliance requirements, provided they adhere to certain basic principles of responsible and transparent development (Benkler, 2019).

These principles could include the use of open-source licenses, the publication of model architectures and training data, and the adherence to ethical guidelines and best practices for responsible AI development (Fjeld et al., 2020). The goal would be to create incentives for AI developers to adopt open and transparent practices while still ensuring a baseline level of safety and accountability.
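
One way a regulator could make such criteria verifiable is to ask projects to publish a machine-readable openness manifest. The sketch below is a hypothetical format of our own devising; the field names, qualifying licenses, and eligibility check are assumptions for illustration, not criteria drawn from the bill.

```python
from dataclasses import dataclass, field

# Assumption for the sketch: a real safe-harbor rule would enumerate
# qualifying licenses itself; this set is illustrative, not authoritative.
OPEN_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause", "GPL-3.0"}


@dataclass
class OpennessManifest:
    """Self-declared openness facts a project might publish for review."""
    license: str
    model_architecture_published: bool
    training_data_published: bool
    ethics_guidelines_followed: list[str] = field(default_factory=list)


def meets_safe_harbor(m: OpennessManifest) -> bool:
    """A toy eligibility check mirroring the criteria named above: an open
    license, published architecture and training data, and adherence to at
    least one stated set of responsible-AI guidelines."""
    return (m.license in OPEN_LICENSES
            and m.model_architecture_published
            and m.training_data_published
            and len(m.ethics_guidelines_followed) > 0)


if __name__ == "__main__":
    manifest = OpennessManifest(
        license="Apache-2.0",
        model_architecture_published=True,
        training_data_published=True,
        ethics_guidelines_followed=["OECD AI Principles"],
    )
    print(meets_safe_harbor(manifest))  # True
```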

4.3 Mandating Transparency and Accountability

To further promote transparency and accountability in AI development, we propose adding a new subsection to Section 22603 that would require covered entities to disclose key information about their AI systems to the public. This could include information about the purpose and intended use of the system, the data used to train it, the key performance metrics and limitations of the system, and any known risks or potential biases (Gebru et al., 2018; Mitchell et al., 2019).

This subsection could also require covered entities to conduct and publish regular audits and impact assessments of their AI systems, to ensure they are operating as intended and not causing unintended harms (Raji et al., 2020). These audits and assessments could be conducted by independent third parties and made available to the public and to relevant regulatory agencies.
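
To suggest the shape such a disclosure might take, the sketch below models the fields named above as a simple record, loosely in the spirit of the model cards and datasheets cited. The structure, field names, and example values are our assumptions, not a format prescribed by the bill or by those papers.

```python
import json
from dataclasses import dataclass, asdict, field


@dataclass
class SystemDisclosure:
    """A public disclosure record covering the fields named above: purpose,
    training data, performance, limitations, known risks, and audits."""
    system_name: str
    purpose: str
    intended_use: str
    training_data_description: str
    performance_metrics: dict[str, float]
    known_limitations: list[str]
    known_risks_and_biases: list[str]
    audit_reports: list[str] = field(default_factory=list)  # links to published audits


if __name__ == "__main__":
    disclosure = SystemDisclosure(
        system_name="ExampleTriageModel",  # hypothetical system
        purpose="Prioritize incoming support tickets",
        intended_use="Advisory ranking only; humans make the final call",
        training_data_description="Anonymized 2019-2023 ticket corpus",
        performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
        known_limitations=["Not evaluated on non-English tickets"],
        known_risks_and_biases=["May under-rank tickets from new customers"],
        audit_reports=["https://example.org/audits/2024-q1.pdf"],
    )
    # Publication could be as simple as serializing the record to JSON.
    print(json.dumps(asdict(disclosure), indent=2))
```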

4.4 Fostering Inclusive and Participatory Governance

To foster inclusive and participatory AI governance, we propose modifying Section 11547.6 of SB 1047 to require the Frontier Model Division to establish a multi-stakeholder advisory council. This council would be composed of representatives from a diverse range of stakeholder groups, including AI researchers, industry representatives, civil society organizations, labor unions, consumer advocates, and members of marginalized and underrepresented communities (Young et al., 2019).

The council would be tasked with providing input and recommendations on the development and implementation of AI policies and regulations, as well as with monitoring the social and ethical implications of AI development more broadly. It would be required to hold regular public meetings and to solicit input from the broader public through various participatory mechanisms, such as public hearings, online consultations, and citizen assemblies (Kuhlmann et al., 2019).

4.5 Establishing a Public AI Research Fund

To support open and inclusive AI research and development, we propose adding a new section to SB 1047 that would establish a Public AI Research Fund. This fund would provide grants and other forms of financial support for open-source AI projects, academic research, and community-driven initiatives aimed at promoting responsible and beneficial AI development (Benkler, 2019; Fjeld et al., 2020).

The fund would prioritize projects that aim to address social and ethical challenges related to AI, such as bias and discrimination, privacy and security, and the future of work. It would also prioritize projects that involve collaboration between different stakeholder groups and that aim to build capacity and expertise in underrepresented communities (Young et al., 2019).

Taken together, these proposals aim to create a legal and institutional framework that promotes open, inclusive, and participatory AI development while ensuring robust safeguards against potential risks and harms. They provide specific mechanisms for operationalizing the overarching principles of human control, openness and inclusivity, and diversity and broad participation in the context of SB 1047.

5. Anticipated Impact

The proposed modifications to SB 1047 have the potential to significantly reshape the landscape of AI development and governance in California and beyond. By creating a legal framework that promotes open, inclusive, and participatory AI innovation while ensuring robust safeguards against potential risks and harms, these modifications could help to steer the development and deployment of AI systems in a more responsible and beneficial direction.

5.1 Promoting Responsible AI Development

The proposed safe harbor for open-source AI development, combined with the mandatory transparency and accountability requirements for covered entities, could create strong incentives for AI developers to adopt more responsible and ethical practices. By requiring developers to disclose key information about their systems and to conduct regular audits and impact assessments, these provisions could help to surface potential risks and biases early in the development process and create opportunities for public scrutiny and accountability (Fjeld et al., 2020; Raji et al., 2020).

At the same time, the proposed Public AI Research Fund could help to support the development of AI systems that are explicitly designed to address social and ethical challenges and to promote the public good. By prioritizing projects that involve collaboration between different stakeholder groups and that aim to build capacity in underrepresented communities, this fund could help to diversify the AI research and development landscape and ensure that a wider range of perspectives and interests are represented (Benkler, 2019; Young et al., 2019).

5.2 Ensuring Human Control and Oversight

The proposed prohibition on AI systems that erode the human-AI air gap, combined with the requirement for clear human-in-the-loop safeguards and explainable AI techniques, could help to ensure that humans remain in control of high-stakes decisions and actions even as AI systems become more sophisticated and autonomous. By maintaining a clear separation between AI-generated outputs and human-executed actions, these provisions could help to prevent unintended and potentially catastrophic consequences, particularly in domains such as criminal justice, healthcare, and national security (Crootof, 2019; Hadfield-Menell & Hadfield, 2019).

At the same time, the requirement for explainable AI techniques could help to make the reasoning and decision-making processes of AI systems more transparent and understandable to human users, thereby facilitating more effective human oversight and accountability. This could be particularly important in contexts where AI systems are being used to make decisions that have significant impacts on individuals' lives, such as in hiring, lending, and criminal sentencing (Wachter et al., 2017).

5.3 Fostering Inclusive and Participatory Governance

The proposed multi-stakeholder advisory council, combined with the requirement for regular public meetings and participatory mechanisms, could help to foster a more inclusive and participatory approach to AI governance. By bringing together representatives from a diverse range of stakeholder groups and soliciting input from the broader public, these provisions could help to ensure that a wider range of perspectives and interests are considered in the development and implementation of AI policies and regulations (Kuhlmann et al., 2019; Young et al., 2019).

This could be particularly important for ensuring that the benefits and risks of AI are distributed equitably across society and that the voices of marginalized and underrepresented communities are heard in the governance process. By creating formal channels for public participation and deliberation, these provisions could help to build public trust and legitimacy in AI governance and ensure that the development and deployment of AI systems are guided by democratic values and the public interest (Stilgoe, 2018).

5.4 Setting a Global Precedent

Finally, the proposed modifications to SB 1047 could help to set a global precedent for responsible and inclusive AI governance. As one of the world's leading hubs of AI research and development, California has the potential to influence the trajectory of AI innovation not only within its own borders but around the world. By adopting a legal framework that prioritizes openness, inclusivity, and human control, California could inspire other jurisdictions to follow suit and contribute to the development of global norms and standards for responsible AI (Calo, 2017; Wallach & Marchant, 2019).

This could be particularly important in the context of growing concerns about the potential risks and harms of AI systems, including their potential to exacerbate social inequalities, undermine democratic institutions, and threaten human rights (Crawford et al., 2019; Zuboff, 2019). By demonstrating that it is possible to promote rapid and transformative AI innovation while still ensuring robust safeguards and democratic accountability, California could help to build a global consensus around the need for responsible and inclusive AI governance (Floridi et al., 2018).

6. SB 1047: A Significant Step

The rapid advancement of artificial intelligence presents both immense opportunities and profound challenges for our society. As we seek to harness the transformative potential of this technology while mitigating its risks and ensuring that its benefits are broadly shared, it is essential that we develop governance frameworks that are open, inclusive, and grounded in democratic values.

The proposed modifications to SB 1047 represent a significant step in this direction. By creating a legal framework that promotes open and participatory AI development, ensures robust safeguards against potential harms, and fosters inclusive and democratic governance, these modifications could help to steer the trajectory of AI innovation in a more responsible and beneficial direction.

While the challenges posed by AI are complex and multifaceted, the principles and proposals outlined in this white paper offer a promising path forward. By embracing openness, inclusivity, and human control as core values in AI governance, we can work to ensure that the development and deployment of AI systems are guided by the public interest and the common good.

Ultimately, realizing the full potential of AI as a tool for social progress and human flourishing will require ongoing collaboration and dialogue between policymakers, researchers, industry leaders, civil society organizations, and the broader public. The proposed modifications to SB 1047 provide a foundation for this collaborative effort and a model for responsible and inclusive AI governance that can inspire similar efforts around the world.

As we move forward in this critical endeavor, let us be guided by a commitment to democracy, transparency, and the inherent dignity and worth of every human being. With these values at the forefront, we can work to create an AI future that is not only innovative and transformative but also just, equitable, and grounded in the public good.

7. Next Steps and Recommendations

While the proposed modifications to SB 1047 represent a significant step towards more responsible and inclusive AI governance, they are only the beginning of what must be an ongoing and multifaceted effort. Realizing the full potential of these proposals will require sustained collaboration and commitment from a wide range of stakeholders, as well as continued research and innovation in the field of AI governance.

To that end, we offer the following recommendations for policymakers, researchers, industry leaders, and civil society organizations looking to build on the ideas and principles outlined in this white paper:

  1. Engage in broad public consultation and dialogue: The development and implementation of AI policies and regulations should be informed by the needs, concerns, and perspectives of a wide range of stakeholders, including those who are most likely to be affected by the deployment of AI systems. Policymakers should engage in broad public consultation and dialogue, using a variety of participatory mechanisms such as public hearings, online consultations, and citizen assemblies, to ensure that diverse voices are heard and considered (Nemitz, 2018).

  2. Support interdisciplinary research on AI governance: Effective AI governance will require ongoing research and innovation across a range of disciplines, including computer science, law, ethics, social science, and public policy. Policymakers should support interdisciplinary research initiatives that bring together experts from these fields to develop new tools, methods, and frameworks for responsible and inclusive AI governance (Calo, 2017).

  3. Foster international cooperation and coordination: The challenges posed by AI are global in scope and will require international cooperation and coordination to address effectively. Policymakers should work to foster collaboration and dialogue across borders, including through multilateral forums such as the United Nations, the Organisation for Economic Co-operation and Development (OECD), and the G20 (Jelinek et al., 2020).

  4. Invest in public education and awareness: Ensuring broad public understanding and engagement with AI governance will be critical to building trust and legitimacy in these efforts. Policymakers should invest in public education and awareness campaigns to help citizens understand the opportunities and challenges posed by AI, as well as the role of governance in shaping its development and deployment (Kuhlmann et al., 2019).

  5. Continuously monitor and assess the impact of AI systems: The rapid pace of AI development means that governance frameworks will need to be flexible and adaptable to keep up with new challenges and opportunities. Policymakers should establish mechanisms for continuously monitoring and assessing the impact of AI systems on society, including through ongoing data collection, analysis, and public reporting (Raji et al., 2020).

  6. Promote diversity and inclusion in AI development and governance: Ensuring that the benefits of AI are broadly shared and that its risks are mitigated will require the active participation and leadership of individuals and communities that have historically been underrepresented in technology development and policymaking. Industry leaders and policymakers should take proactive steps to promote diversity and inclusion in AI development and governance, including by supporting initiatives that aim to build capacity and expertise in underrepresented communities (West et al., 2019).

  7. Prioritize the public interest and common good: Ultimately, the goal of AI governance should be to ensure that the development and deployment of AI systems are guided by the public interest and the common good, rather than narrow commercial or political interests. Policymakers and industry leaders should prioritize approaches to AI governance that place the needs and well-being of citizens and society at the center, and that are grounded in democratic values such as transparency, accountability, and inclusivity (Floridi et al., 2018).

By taking these steps and building on the principles and proposals outlined in this white paper, we can work towards an AI future that is not only transformative and innovative but also responsible, inclusive, and grounded in the public good. The road ahead will not be easy, but with sustained collaboration, commitment, and leadership from all stakeholders, we can rise to the challenge and ensure that the benefits of this powerful technology are broadly shared and its risks are effectively mitigated.

8. Conclusion

The rapid advancement of artificial intelligence presents us with a unique opportunity to shape the future of this transformative technology in ways that promote the public good and ensure that its benefits are broadly shared. As we have seen throughout this white paper, realizing this vision will require a commitment to open, inclusive, and participatory approaches to AI development and governance, as well as a willingness to prioritize the needs and well-being of citizens and society over narrow commercial or political interests.

The proposed modifications to SB 1047 offer a promising model for what this kind of responsible and inclusive AI governance could look like in practice. By creating a legal framework that promotes transparency, accountability, and human control over AI systems, while also fostering diversity and inclusion in AI development and governance, these modifications represent a significant step towards a more democratic and equitable AI future.

However, as we have also seen, the challenges posed by AI are complex and multifaceted, and will require ongoing collaboration, research, and innovation to address effectively. Policymakers, industry leaders, researchers, and civil society organizations all have important roles to play in this effort, and must work together to develop governance frameworks that are flexible, adaptable, and responsive to the rapidly evolving landscape of AI development and deployment.

Ultimately, the success of these efforts will depend on our ability to keep the public interest and common good at the center of our decision-making, and to ensure that the voices and perspectives of all stakeholders - especially those who have historically been marginalized or underrepresented in technology development and policymaking - are heard and considered.

If we can rise to this challenge, however, the potential benefits are immense. By harnessing the power of AI in ways that are responsible, inclusive, and grounded in democratic values, we can unlock new possibilities for social progress, scientific discovery, and human flourishing, while also mitigating the risks and challenges posed by this powerful technology.

The road ahead will not be easy, but with a commitment to collaboration, transparency, and the inherent dignity and worth of every human being, we can work towards an AI future that is not only innovative and transformative but also just, equitable, and grounded in the public good. Let us seize this opportunity, and work together to build a brighter, more inclusive, and more hopeful future for all.

Appendices

Proposed Amendments to SB 1047

  1. Section 22603.5 - Human-AI Air Gap

    a. Add a new section explicitly affirming the principle of the human-AI air gap as the essential kill switch.

    • Alternative: Include this principle in the legislative findings and declarations section of the bill.

    b. Prohibit provisions that erode the separation between AI-generated outputs and human-executed actions or grant AI systems direct control over high-risk decisions or physical systems.

    • Variation: Specify a list of high-risk domains (e.g., criminal justice, healthcare, finance) where the human-AI air gap must be maintained.

    • Alternative: Require a mandatory human review and approval process for AI-generated outputs in high-risk domains.

  2. Section 22602 - Definitions

    a. Include definitions of "open-source AI" and "democratized AI innovation" that emphasize transparency, accessibility, and inclusive practices.

    • Variation: Provide more detailed criteria for what constitutes open-source AI (e.g., permissive licensing, public documentation, open data).

    • Alternative: Include a definition of "closed-source AI" and specify restrictions or requirements for such systems.

  3. Section 22603 - Safe Harbor for Open-Source AI

    a. Create a safe harbor for open-source AI development, exempting projects from certain compliance requirements if they adhere to responsible development practices.

    • Variation: Specify a list of responsible development practices (e.g., transparency, accountability, privacy protection) that qualify for the safe harbor.

    • Alternative: Establish a tiered safe harbor with different levels of exemption based on the degree of openness and responsible practices.

  4. Section 22603(g) - Mandatory Disclosure of AI Models and Data

    a. Require covered AI developers to make their models, training data, and safety documentation openly available for public scrutiny and collaboration, with narrow exceptions for trade secrets or privacy concerns.

    • Variation: Specify a process for developers to request exceptions and for the Frontier Model Division to review and approve them.

    • Alternative: Require disclosure only to a designated regulatory body or independent auditor, rather than to the general public.

  5. Section 22604 - Equitable Access to AI Computing Resources

    a. Require operators of AI computing clusters to reserve a portion of their resources for open-source, academic, and non-profit AI projects at subsidized or no cost.

    • Variation: Specify a minimum percentage (e.g., 10%, 20%, 30%) of computing resources that must be reserved for such projects.

    • Alternative: Establish a public-private partnership to create a dedicated computing cluster for open-source and non-profit AI projects.

  6. Section 22605 - Open APIs and Interoperability Standards

    a. Mandate that AI developers and computing cluster operators provide open APIs, interoperability standards, and other technical interfaces to facilitate third-party innovation, auditing, and collaboration.

    • Variation: Specify a set of minimum standards or protocols that must be supported (e.g., REST APIs, containerization, federated learning).

    • Alternative: Require AI developers to participate in industry-wide efforts to develop and adopt open standards and interfaces.

  7. Section 22610 - Public AI Innovation Fund

    a. Establish a Public AI Innovation Fund to provide grants, scholarships, and other financial support for open-source AI projects, academic research, and community-driven initiatives, with a focus on underrepresented groups and diverse perspectives.

    • Variation: Specify a minimum annual budget allocation for the fund (e.g., $10 million, $50 million, $100 million).

    • Alternative: Create tax incentives or other financial benefits for private sector entities that contribute to the fund or support open-source AI projects.

  8. Section 11547.6 - Frontier Model Division Responsibilities

    a. Require the Frontier Model Division to prioritize the promotion of open, democratized AI innovation in all its activities, including guidance, best practices, and advisory committee memberships.

    • Variation: Specify a set of metrics or indicators that the division must use to measure and report on its progress in promoting open AI innovation.

    • Alternative: Establish a separate office or unit within the division focused specifically on open-source AI and democratized innovation.

  9. Section 11547.6(c)(7) - Open-Source AI Advisory Committee

    a. Give the open-source AI advisory committee a leading role in shaping the division's policies and recommendations, with a mandate to advocate for the interests of independent developers, researchers, and the broader public.

    • Variation: Specify a minimum number or percentage of committee members who must represent open-source AI communities or public interest groups.

    • Alternative: Require the committee to hold regular public hearings and solicit input from a broad range of stakeholders.

  10. Section 11547.7 - CalCompute Initiative

    a. Require the CalCompute initiative to dedicate a significant portion of its resources and expertise to supporting open-source, academic, and community-driven AI projects, with a focus on enabling broad participation and diverse perspectives.

    • Variation: Specify a minimum percentage (e.g., 25%, 50%, 75%) of CalCompute resources that must be allocated to open-source and non-profit AI projects.

    • Alternative: Establish a separate grant program within CalCompute specifically for open-source AI projects and research.

  11. Section 22611 - State Agencies and Open-Source AI

    a. Encourage all state agencies and public entities engaging with AI to prioritize open-source solutions, public-private partnerships, and collaborative innovation models over proprietary or closed systems, to the extent practicable.

    • Variation: Require state agencies to provide written justification for using closed-source AI systems and to obtain approval from the Frontier Model Division.

    • Alternative: Establish a preference or bonus points in state procurement processes for AI vendors that use open-source software or contribute to open-source projects.

  12. Section 22612 - Severability Clause

    a. Include a severability clause stating that if any provision of the bill is held invalid, the remainder of the bill shall remain in effect to the fullest extent possible.

    • Alternative: Include a more specific severability clause tailored to the open-source AI and democratized innovation provisions of the bill.

Additional proposed amendments and variations to consider for SB 1047:

  1. Section 22613 - AI Transparency Reporting

    a. Require covered AI developers to publish annual transparency reports detailing their AI systems' performance, safety record, and any incidents or unintended consequences.

    • Variation: Specify a standardized format or template for transparency reports to ensure consistency and comparability across companies.

    • Alternative: Require transparency reports to be audited or certified by an independent third party before publication.

  2. Section 22614 - AI Ethics Training and Education

    a. Mandate that all employees and contractors of covered AI developers who work on AI systems undergo regular training and education on AI ethics, safety, and responsible development practices.

    • Variation: Specify a minimum number of hours or frequency of training (e.g., 4 hours per year, every 6 months).

    • Alternative: Require AI developers to collaborate with academic institutions or non-profit organizations to develop and deliver AI ethics training programs.

  3. Section 22615 - Public Participation in AI Governance

    a. Establish mechanisms for public participation and input in the development and implementation of AI governance policies, such as public hearings, online consultations, and citizen advisory boards.

    • Variation: Require the Frontier Model Division to respond in writing to all public comments and explain how they were considered in the final policy decisions.

    • Alternative: Create a dedicated online platform for public engagement and collaboration on AI governance issues, with tools for discussion, feedback, and idea generation.

  4. Section 22616 - AI Liability and Accountability

    a. Clarify the legal liability and accountability framework for AI-related harms or accidents, including provisions for compensation, remediation, and penalties for negligence or willful misconduct.

    • Variation: Establish a strict liability standard for AI developers in cases of serious harm or injury caused by their systems.

    • Alternative: Create a mandatory insurance or compensation fund for AI-related harms, funded by contributions from AI developers and operators.

  5. Section 22617 - AI Environmental Impact Assessment

    a. Require covered AI developers to conduct and publish environmental impact assessments of their AI systems, including energy consumption, carbon footprint, and potential effects on climate change.

    • Variation: Specify a standardized methodology or criteria for conducting environmental impact assessments of AI systems.

    • Alternative: Offer tax incentives or other benefits for AI developers that demonstrate significant reductions in their environmental impact or use renewable energy sources.

  6. Section 22618 - AI Workforce Transition and Support

    a. Establish programs and funding to support workers who may be displaced or negatively affected by the adoption of AI systems, such as job training, career counseling, and income support.

    • Variation: Require AI developers to contribute a portion of their revenue or profits to a workforce transition fund.

    • Alternative: Offer tax credits or other incentives for companies that invest in retraining and upskilling their workers to work alongside AI systems.

  7. Section 22619 - AI Research and Development Tax Credit

    a. Establish a tax credit for qualified research and development expenses related to open-source, responsible, and inclusive AI development.

    • Variation: Specify a percentage or maximum amount of the tax credit (e.g., 20% of qualified expenses, up to $1 million per year).

    • Alternative: Make the tax credit refundable or transferable to benefit smaller startups and non-profit organizations.

  8. Section 22620 - AI Procurement Preferences

    a. Establish procurement preferences or set-asides for state agencies and public entities to purchase or use AI systems that are open-source, transparent, and aligned with responsible development principles.

    • Variation: Specify a minimum percentage of AI procurement budgets that must be allocated to open-source or responsible AI systems.

    • Alternative: Require state agencies to justify in writing any decisions to purchase or use closed-source or proprietary AI systems, with approval from the Frontier Model Division.

These additional proposed amendments and variations aim to further strengthen SB 1047's provisions for transparency, accountability, public participation, and responsible AI development. By offering a range of options and alternatives, this appendix aims to provide legislators with a flexible and adaptable framework for refining and implementing the bill's vision of open, inclusive, and democratized AI innovation.

References

Acemoglu, D. (2021). Harms of AI. National Bureau of Economic Research.

Benkler, Y. (2019). Don't let industry write the rules for AI. Nature, 569(7755), 161.

Bostrom, N. (2017). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Bryson, J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15-26.

Bughin, J., Seong, J., Manyika, J., Chui, M., & Joshi, R. (2018). Notes from the AI frontier: Modeling the impact of AI on the world economy. McKinsey Global Institute.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1-15.

Calo, R. (2017). Artificial intelligence policy: A primer and roadmap. UC Davis Law Review, 51, 399-435.

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the 'good society': The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505-528.

Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2018). Notes from the AI frontier: Applications and value of deep learning. McKinsey Global Institute.

Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., ... & Whittaker, M. (2019). AI Now 2019 report. AI Now Institute at New York University.

Crootof, R. (2019). Jurisprudential space junk: Treaties and new technologies. In Risk and the regulation of uncertainty in international law (pp. 63-82). Oxford University Press.

Dutton, T. (2018). An overview of national AI strategies. Politics + AI.

Engstrom, D. F., Ho, D. E., Sharkey, C. M., & Cuéllar, M. F. (2020). Government by algorithm: Artificial intelligence in federal administrative agencies. Administrative Conference of the United States.

European Commission. (2019). Ethics guidelines for trustworthy AI. European Commission.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. arXiv preprint arXiv:1803.09010.

Government of Canada. (2019). Directive on automated decision-making. Government of Canada.

Guterres, A. (2018). Address to the General Assembly on the Secretary-General's priorities for 2019. United Nations Secretary-General.

Hadfield-Menell, D., & Hadfield, G. K. (2019). Incomplete contracting and AI alignment. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 417-422.

IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Jelinek, T., Wallach, W., & Kerimi, D. (2020). Policy brief: The creation of a G20 coordinating committee for the governance of artificial intelligence. AI Ethics, 1-10.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

Kortz, M., & Doshi-Velez, F. (2019). Accountability of AI under the law: The role of explanation. Berkman Klein Center for Internet & Society.

Kuhlmann, S., Stegmaier, P., & Konrad, K. (2019). The tentative governance of emerging science and technology—A conceptual introduction. Research Policy, 48(5), 1091-1097.

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220-229.

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507.

Montreal Declaration. (2018). Montreal declaration for responsible AI. Université de Montréal.

Musikanski, L., Rakova, B., Bradbury, J., Phillips, R., & Manson, M. (2020). Artificial intelligence and community well-being: A proposal for an emerging area of research. International Journal of Community Well-Being, 3(1), 39-55.

Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.

OECD. (2019). Recommendation of the Council on Artificial Intelligence. Organisation for Economic Co-operation and Development.

Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Harvard University Press.

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., ... & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486.

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., ... & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33-44.

Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. AI Now Institute.

Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. W. W. Norton & Company.

Stilgoe, J. (2018). Machine learning, social learning and the governance of self-driving cars. Social Studies of Science, 48(1), 25-56.

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.

Wallach, W., & Marchant, G. E. (2019). Toward the agile and comprehensive international governance of AI and robotics. Proceedings of the IEEE, 107(3), 505-508.

West, D. M., & Allen, J. R. (2020). Turning point: Policymaking in the era of artificial intelligence. Brookings Institution Press.

West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute.

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., ... & Schwartz, O. (2018). AI Now report 2018. AI Now Institute at New York University.

Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 195-200.

Young, M., Magassa, L., & Friedman, B. (2019). Toward inclusive tech policy design: A method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology, 21(2), 89-103.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

About Q8 Empowering Change

Q8 Empowering Change is a 501(c)(3) nonprofit social good AI research organization dedicated to empowering change with information for the benefit of all humanity. The organization's mission is rooted in a deep belief in the transformative power of open and decentralized access to information, and in the potential of AI and other emerging technologies to create a more equitable, inclusive, and sustainable future for all.

Q8 Empowering Change was founded by Rowan Brad Gudzinas, a leading expert in AI with over 20 years of experience in systems intelligence who has been at the forefront of efforts to shape the future of technology with some of the largest organizations in the world, including the US Federal Government, AARP, and Deloitte.

At the heart of Q8's work is a commitment to making knowledge and resources freely and readily accessible to every human everywhere. This commitment is reflected in initiatives like AI 411, which aims to provide clear, concise, and actionable information about AI to the general public, and in research reports like "AI Frontier," which explores the cutting-edge developments and long-term implications of AI technology.

Q8 Empowering Change recognizes that the rapid advancement of AI and other emerging technologies presents both immense opportunities and complex challenges for our society. To ensure that these technologies are developed and deployed in ways that benefit all of humanity, the organization advocates for a set of core values and principles, including:

  1. Open and decentralized access to information and resources, to ensure that the benefits of AI and other emerging technologies are broadly shared and not concentrated in the hands of a few powerful actors.

  2. Inclusive and participatory governance mechanisms, to ensure that the development and deployment of AI and other emerging technologies are guided by the needs and values of diverse communities and stakeholders.

  3. Responsible and ethical innovation practices, to ensure that the risks and potential negative impacts of AI and other emerging technologies are carefully considered and mitigated.

  4. Collaboration and knowledge-sharing across disciplines and sectors, to foster a vibrant and dynamic ecosystem of research, development, and deployment that is responsive to the needs and aspirations of all.

Through its research, advocacy, and public engagement efforts, Q8 Empowering Change seeks to advance these values and principles, and to help shape a future in which the transformative power of AI and other emerging technologies is harnessed for the benefit of all humanity.

Central to this work is a recognition of the vital importance of access to information and knowledge in driving positive social change. By making AI and other emerging technologies more accessible and understandable to the general public, and by fostering a culture of openness, transparency, and collaboration in their development and deployment, Q8 Empowering Change believes that we can unlock the vast potential of these technologies to address the most pressing challenges facing our world today.

For more information about Q8 Empowering Change and its mission and values, please visit our website at www.Q08.org. There, you will find a wealth of research, analysis, and resources on AI ethics and governance, as well as information on our ongoing projects and initiatives, including:

  • AI 411, a platform for providing clear, concise, and actionable information about AI to the general public (http://www.q08.org/ai-411)

  • AI Frontier, a series exploring the cutting-edge developments and long-term implications of AI technology (http://www.q08.org/ai-frontier)

  • A comprehensive framework for responsible and inclusive AI governance, emphasizing openness, transparency, and democratic participation

  • Tools and resources for individuals, communities, and organizations looking to engage with and shape the development and deployment of AI and other emerging technologies

At Q8 Empowering Change, we believe that by working together to advance the values of openness, inclusivity, responsibility, and collaboration, we can create a future in which the transformative power of AI and other emerging technologies is harnessed for the benefit of all humanity. We invite you to join us in this critical endeavor, and to help shape a brighter, more equitable, and more sustainable future for all.