AI Governance in an Anarchical World: Lessons from Open-Source and Proprietary Software
Introduction
On June 27th, 2017, Ukraine was hit by a cyber attack that not only disrupted the everyday functions of civilian life but, more importantly, opened the government’s eyes to the destructive force a cyber attack could inflict on the country. By exploiting social engineering and vulnerabilities in widely used Ukrainian accounting software, “Sandworm” (a Russian nation-state hacker group) compromised the Windows machines of a majority of the country’s businesses (CISA, 2018). The prevalence of such attacks is only increasing, and setting rules now is a step toward mitigating them. However, the differing ways in which the United States and other nations approach this issue have made setting those rules more difficult, and the sophistication of these disruptive, dual-use technologies adds to the complexity.
Just as cybersecurity threats exposed the risks of unmanaged software infrastructure, AI’s global development poses similar governance risks. Current AI development follows different models of international cooperation, ranging from open-source collaboration to tightly controlled proprietary systems. These models echo past software development patterns but also introduce new challenges, such as global governance, military usage, and ensuring that technological advancements do not exacerbate inequality. My research asks: what governance challenges arise from the differing forms of international cooperation in AI development, and how do they compare to past open-source and proprietary software conflicts? For academics, this offers a look into how a decentralized community of loosely affiliated volunteers can collaborate to produce outcomes better than those of structured organizations. For companies currently developing AI models, it can serve as an argument to open-source their work and take a more transparent approach to development. I believe that if AI governance prioritizes collaboration and open standards rather than national security concerns, it can accelerate innovation while balancing security risks.
Background
Technology governance can be defined as the systematic management and allocation of resources related to technology development and use across various sectors. In the case of AI, the key resources are compute (chiefly graphics processing units) and data, since access to both largely determines how quickly models can be trained. For powerful dual-use technologies, which leverage public assets for innovation and private assets for security and user control, proper governance is essential to reduce risks and to ensure that both public and private technologies are managed responsibly.
AI governance creates oversight mechanisms that help society balance innovation with safety, preventing misuse (such as autonomous weapons or mass surveillance) while enabling positive applications like medical diagnostics and climate modeling. Through regulatory frameworks, ethical guidelines, and multistakeholder involvement, AI governance builds public trust and addresses unintended consequences before they cause widespread harm (European Commission, 2020). It also promotes equitable access to AI benefits and establishes standards for global coordination, which is essential when AI systems operate across national boundaries. Without robust governance systems, powerful AI technologies may develop in ways that amplify risks like privacy violations, algorithmic bias, or labor market disruption, concentrating benefits among technology companies and leaving vulnerable populations to bear disproportionate harm.
A collaborative framework in which a community of developers works to create better technology offers an alternative to the proprietary model. As Singh, Verma, and Kumar (2017) note, open-source software is typically more secure than its proprietary counterparts, has fewer bugs, and is repaired faster, because its users are also developers who can test all of its functionality. The historical relationship between open-source and proprietary software development is long and revolves around what it means to own something. According to Weber (2005), ownership in open source centers on the right to distribute a work, while proprietary ownership centers on the right to exclude. Freedom is the key factor: the source code is open, public, and nonproprietary, allowing anyone to adapt it to their needs or study how it works.
In open-source technology development, a project begins with a single individual or a small group who create software and share the source code for others to work on (Midha & Palvia, 2012). Through sites such as GitHub, developers open their work to others, who can make a copy of a project (called forking) and suggest changes back to it (through a pull request). Contributors can report bugs, ask questions, or answer others’ questions, and GitHub makes it easy to track changes, collaborate, and test code; a minimal sketch of this workflow appears below. Trust without a central authority, as seen in open-source communities like GitHub, is both a strength and a vulnerability. This same openness creates opportunities for bad actors to infiltrate and exploit the system, as shown in the XZ Utils case.
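To make the fork-and-pull-request workflow concrete, here is a minimal sketch that drives it through GitHub’s public REST API, which exposes endpoints for creating forks and opening pull requests. The repository name, branch names, and access token below are hypothetical placeholders, and the `requests` library is assumed:

```python
# Minimal sketch of the fork-and-pull-request workflow via GitHub's REST API.
# Repository names, branches, and the token are illustrative placeholders.
import requests

TOKEN = "ghp_your_token_here"  # hypothetical personal access token
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}
API = "https://api.github.com"

# 1. Fork an upstream repository into your own account.
upstream = "example-org/example-project"  # placeholder repository
fork = requests.post(f"{API}/repos/{upstream}/forks", headers=HEADERS)
fork.raise_for_status()
print("Created fork:", fork.json()["full_name"])

# 2. After pushing a fix to a branch on the fork, propose the change
#    back to the upstream project as a pull request for maintainer review.
pr = requests.post(
    f"{API}/repos/{upstream}/pulls",
    headers=HEADERS,
    json={
        "title": "Fix: handle empty input",       # illustrative change
        "head": "your-username:fix-empty-input",  # branch on the fork
        "base": "main",                           # upstream target branch
        "body": "Describes the bug and the proposed fix.",
    },
)
pr.raise_for_status()
print("Opened pull request:", pr.json()["html_url"])
```

The maintainers of the upstream project then review, discuss, and either merge or reject the proposed change, which is exactly where the trust relationships described above come into play.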
Much critical infrastructure relies on this model of development; the XZ Utils repository is one example. Such repositories and the maintainers who officially keep them going form a community bound by trust and email. XZ Utils is a tool used on Linux computers to compress and decompress files, and those computers run quietly behind the scenes, powering most servers. As Goodin (2024) reported, in early 2024, malicious code was discovered in XZ Utils that allowed attackers to execute remote code on affected systems. The backdoor was introduced by an individual using the alias “Jia Tan,” who had been contributing to the XZ project since 2021. Over time, Jia Tan earned the community’s trust, eventually gained maintainer access, and committed the malicious code to the project in early 2024 (Lyden, 2024).
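The reach of such a compromise is easier to appreciate once one sees how routinely liblzma, the library at the heart of XZ Utils, is invoked. Python’s standard-library lzma module, for instance, wraps liblzma to read and write data in the .xz format; the short sketch below round-trips data through that code path, the same one that countless servers, build systems, and package managers exercise daily:

```python
# Everyday use of liblzma, the library compromised in the XZ Utils incident:
# Python's standard-library lzma module wraps it to read and write .xz data.
import lzma

payload = b"Routine server log data, compressed before archival." * 1000

# Compress to the .xz container format used by the xz command-line tool.
compressed = lzma.compress(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes")

# Decompress and verify the round trip.
assert lzma.decompress(compressed) == payload
```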
Currently, the landscape around AI cooperation is divided, with various international frameworks emerging to guide AI development and governance. Based on current international initiatives and policy directions, we can anticipate the formation of multi-stakeholder agreements similar to the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which would represent a global declaration by numerous countries and international stakeholders aimed at guiding the development and deployment of AI in a way that benefits humanity and the environment (European Commission, 2020). Such frameworks would likely emphasize the importance of ensuring that AI is ethical, human-centric, inclusive, safe, and transparent, while also supporting sustainable development goals.
For the United States, there has been a concerted effort to remove barriers to dominance in the field of AI. Following a hegemonic doctrine of leadership in the field, the U.S. government has consistently prioritized AI development as a strategic national interest. Current policy directions indicate a focus on ensuring that AI development in the U.S. remains free from ideological influence and maximizes innovation, economic growth, national security, and global leadership (CISA, n.d.).
Recent actions by the U.S. executive and legislative branches reflect a growing recognition of the critical role artificial intelligence plays in both national security and innovation. These initiatives, which guide the Cybersecurity and Infrastructure Security Agency’s (CISA) AI roadmap, demonstrate a coordinated effort to foster responsible AI development while protecting federal systems and critical infrastructure (CISA, n.d.). For example, the Department of Homeland Security’s Policy Statement 139-06, issued in August 2023, underscores that AI must be acquired and used in a manner consistent with constitutional and legal standards, highlighting the government’s emphasis on ethical and lawful AI use.
On the innovation front, a $140 million investment by the National Science Foundation in May 2023 established new National AI Research Institutes, aimed at strengthening the country’s AI R&D infrastructure and cultivating a diverse AI workforce (CISA, n.d.). Meanwhile, the NIST AI Risk Management Framework, released in January 2023, provides voluntary guidance for public and private organizations to identify and mitigate AI-specific risks, ensuring that trust, transparency, and accountability are embedded into AI systems. Earlier foundations for this national effort include the AI in Government Act of 2020, which created the AI Center of Excellence and directed agencies to follow consistent, risk-aware practices when adopting AI technologies. Together, these measures shape a national AI strategy that balances innovation with safeguards, supports cross-sector collaboration, and ensures that AI technologies used by the federal government are secure, ethical, and effective in protecting the nation’s critical assets.
The United States holds a dominant position in the global artificial intelligence (AI) market, a hold that stems largely from substantial investments by domestic corporations and private equity firms. In 2023, U.S. companies invested over $67 billion in AI, representing more than 70% of global private funding for AI that year (Pymnts, 2023). Key players include companies like Nvidia, which dominates the AI chip market, controlling 70% to 95% of global sales of AI chips (American Action Forum, 2023).
Moreover, industry analysts suggest that major AI companies are bolstering their infrastructure by expanding data center capacities, aimed at reducing reliance on external cloud services (Investment Council, 2023). Since 2020, private equity firms have invested over $200 billion in AI-related infrastructure, such as data centers and semiconductor manufacturing, further solidifying the U.S.’s leadership in AI.
Additionally, the U.S. government has enacted supportive policies to further advance AI development, and American institutions remain central to AI innovation. As O’Brien (2024) reports, the U.S. is producing nearly 40% of the world’s most-cited AI research papers, with top universities like MIT, Stanford, and Carnegie Mellon contributing a continuous supply of highly skilled professionals to the industry. This strong foundation, combined with significant private sector investments, has allowed the U.S. to maintain its global dominance in AI research and development.
However, the rise of new players on the global stage, such as DeepSeek, is beginning to challenge this leadership. DeepSeek, a Chinese AI startup founded in 2023, has disrupted the AI market with its open-source model, DeepSeek-R1. While precise figures are debated, McDermott (2025) discusses DeepSeek’s approach to cost-efficient AI development compared to U.S. competitors. Industry analysts suggest that models like DeepSeek-R1 may require significantly less investment than established competitors (O’Brien, 2024), positioning it as a compelling alternative to proprietary models like OpenAI’s GPT series.
The story of DeepSeek and ChatGPT closely follows the trend of software development from disruption to a more conservative and private response. Historically, open-source and proprietary software have coexisted, with open-source solutions offering adaptability and innovation, while proprietary systems often lead to commercial deployment due to ease of use and integration (Weber, 2005). This tension is amplified in AI development, where concerns about safety, regulation, and ethical use are increasingly at the forefront. As AI systems become embedded in critical infrastructure and national defense, their governance becomes not just a technological issue, but a geopolitical one.
In this context, international collaboration is more important than ever. AI has emerged as both a driver of global progress and a potential flashpoint in international relations. According to the Carnegie Endowment for International Peace (2024), countries like the United States have partnered with defense contractors to develop AI-powered surveillance and targeting systems, raising urgent concerns about the militarization of AI. Yet, the world currently lacks cohesive international norms regulating these developments. The United Nations Security Council has acknowledged the potential dangers of unchecked military AI, holding debates on the issue and advocating for a global governance framework (UN Security Council, 2024). For instance, discussions surrounding U.S. sanctions on China for its AI-related activities have highlighted the fragmented and reactive nature of current international policy (Cath, 2021).
This landscape underscores the urgent need for policymakers to establish collaborative frameworks that promote responsible AI development. A unified set of international rules or ethical standards could help ensure that innovation proceeds without sacrificing human rights, global security, or economic equity. Current debates increasingly focus on mitigating algorithmic bias, safeguarding human autonomy, and preventing the misuse of AI for authoritarian or violent ends (Lind & Park, 2023; Cowls et al., 2024).
Furthermore, the societal impact of AI cannot be ignored. As automation and intelligent systems reshape global labor markets, governments must act to prevent rising inequality and support workers displaced by technological change. The International Monetary Fund warns that up to 40% of jobs may be affected globally, especially in advanced economies (International Monetary Fund, 2024), while the Organisation for Economic Co-operation and Development notes that AI adoption may also contribute to wage suppression and inequality (OECD, 2024). Thoughtful policy, proactive education initiatives, and international cooperation will be essential in navigating this transformative era. The future of AI is not merely a technological question; it is a matter of global values, leadership, and shared responsibility.
Analysis
To explore governance challenges in international AI cooperation, this paper adopts a comparative historical approach, examining past and present models of software development. It analyzes three key case studies: GitHub’s evolution and regional responses to it, the Android ecosystem, and the development of cloud infrastructure. The analysis rests on five variables: openness, governance approach, security concerns, cross-border collaboration, and market impact.
Openness has long been a defining characteristic of GitHub and other open-source ecosystems. It facilitates decentralized innovation, transparent development, and inclusive participation. However, this openness is not absolute. As Cath (2021) discusses, the 2013 block of GitHub in China illustrated how governments can selectively restrict access in the name of political stability and cybersecurity. In response, China encouraged the development of Gitee, a domestically governed alternative. Gitee maintains a level of openness but within a framework that aligns with national content regulation and data sovereignty. This strategic containment of openness challenges the idealistic view of open source as universally accessible and uncensored.
GitHub’s pivotal role in anti-censorship efforts has further complicated its status in China. The platform has been influential in hosting tools that bypass censorship (Feng, 2024), making it a recurring target. It was briefly blocked in 2013 and suffered a massive distributed denial-of-service (DDoS) attack in 2015, later traced to servers associated with Chinese state telecom operators (Tan, 2024). More recently, a temporary block in 2024 was reportedly linked to government crackdowns on train ticket purchasing plugins hosted via GitHub (Tan, 2024). Although technically adept users are often able to circumvent blocks through VPNs and other tools, such disruptions illustrate the state’s broader campaign to curtail open digital spaces while maintaining access to crucial development resources (Feng, 2024).
The governance approach further distinguishes GitHub’s global role. As a U.S.-based platform, GitHub is bound by American law, including export controls and trade sanctions. This legal framework has led to service restrictions in sanctioned countries like Iran and Syria, where developers have often used GitHub for apolitical, educational, or entrepreneurial purposes, as noted by CISA (n.d.) in their global technology governance analyses. GitHub has sought to strike a balance by providing partial access through public repositories while restricting premium services. However, this hybrid governance model has limitations, especially when it fails to meet the needs of developers in politically sensitive regions.
Security concerns add another dimension. As reported by the Cybersecurity and Infrastructure Security Agency (2018) and highlighted in recent reports on backdoors in popular utilities (Goodin, 2024), open-source platforms are both a strength and a vulnerability. GitHub must continuously monitor for malicious code, ensure repository integrity, and manage potential exploitation. Furthermore, its widespread use by AI researchers and startups places additional pressure on maintaining cybersecurity as a core operational priority.
Cross-border collaboration, once the hallmark of GitHub’s success, has come under strain. Restrictions, national forks like Gitee, and diverging legal frameworks are fragmenting what was once a unified global development community. Still, GitHub remains central to international collaboration, especially in AI and software infrastructure. The European Commission (2020) has emphasized the need for platforms that enable ethical, secure, and inclusive technological development. GitHub’s challenge is to remain an enabler of this collaboration amid rising digital sovereignty movements and strategic decoupling.
Lastly, the market impact of GitHub’s evolving role is significant. Open-source platforms lower barriers to entry, drive innovation, and underpin much of the modern AI economy. The United States’ dominance in generative AI, as highlighted by Pymnts (2023), is in part due to the robust open-source ecosystem GitHub supports. However, geopolitical tensions and regulatory fragmentation threaten this momentum by isolating regional markets and encouraging redundancy in infrastructure (United Nations University, 2024).
In sum, GitHub’s experience illustrates the tension between openness and control, the complexity of global platform governance, and the fragile nature of cross-border collaboration. It also reveals how security and market dynamics are inextricably linked to geopolitical alignments. As governments assert greater control over digital platforms, GitHub and its alternatives will continue to play a pivotal role in shaping the future of global innovation ecosystems.
The evolution of the Android ecosystem offers a powerful case study of how a dominant proprietary platform can splinter into regional variants under the pressure of geopolitical, legal, and economic forces. At its core, Android is open-source: Google’s Android Open Source Project (AOSP) forms the foundation, but the openness is highly conditional. While AOSP is accessible, the most valuable layers, Google Mobile Services (GMS), including Gmail and Google Maps, are proprietary. When U.S. sanctions in 2019 restricted Huawei’s access to these services, it catalyzed the creation of HarmonyOS, a distinct mobile operating system developed to reduce dependency on American technologies (U.S. Department of State, 2020). However, HarmonyOS, despite its ambitions, has struggled to gain international traction due to compatibility challenges and a lack of global developer adoption, showing that even with access to the Android base, full platform independence remains difficult to achieve.
Security concerns further complicate the ecosystem. Android’s openness makes it attractive for modification, but also creates vulnerabilities. As the Cybersecurity and Infrastructure Security Agency (2018) notes in their broader cybersecurity analyses, forked versions, whether from Huawei, Xiaomi, or the AOSP community, must rely on alternative app stores and security infrastructures, which vary in quality and scrutiny. The fragmentation makes patching security issues and standardizing updates more difficult, increasing the surface area for threats, as highlighted by Goodin (2024) in discussions of security vulnerabilities in widely-used software.
While Android was originally envisioned as a globally unified mobile platform, geopolitical friction, such as the U.S.-China tech decoupling, has eroded that vision. Chinese OEMs are increasingly investing in localized app stores and services to maintain sovereignty over their mobile ecosystems, while also pushing for app development that aligns with regional regulatory standards. This splintering has created technical and regulatory silos that undermine international compatibility and developer interoperability. Although Android remains the world’s most-used mobile OS, the emergence of regional forks such as HarmonyOS and custom Android ROMs reflects broader efforts to challenge American tech hegemony. These shifts mirror efforts in the AI space, where strategic autonomy is becoming a policy priority (Carnegie Endowment for International Peace, 2024).
In conclusion, the Android ecosystem illustrates how a dominant yet semi-open platform can be both a vehicle for global standardization and a canvas for regional divergence. Security tensions and geopolitical rivalries have reshaped Android into a multi-regional system, demonstrating that even the most entrenched digital infrastructures are vulnerable to fragmentation under the weight of national interests and regulatory pressures.
The development of global cloud infrastructure underscores the increasing regionalization of digital technologies, where data sovereignty, technological autonomy, and geopolitical considerations shape the design and deployment of cloud ecosystems. The competition among AWS (U.S.), Alibaba Cloud (China), and GAIA-X (Europe) demonstrates how regional priorities and constraints influence the evolution of cloud platforms. Cloud infrastructure is inherently less open than traditional software ecosystems, given its reliance on proprietary hardware and managed services. AWS, as the global leader, offers powerful APIs and services, but within a tightly controlled proprietary ecosystem, as evidenced in analyses from the American Action Forum (2023) on market dominance. Alibaba Cloud follows a similar model, although it is deeply integrated with China’s regulatory and economic frameworks, reflecting a blend of commercial and state interests, as discussed in regional technology analyses by Feng (2024) and Tan (2024).
GAIA-X, in contrast, emerged as a European response rooted in the value of openness and interoperability. While still in development, it aims to create a federated, standards-based cloud ecosystem that promotes data portability, transparency, and reversibility. As detailed by the European Commission (2020), it positions itself as a more open alternative to hyperscaler lock-in, prioritizing European values like user control, privacy, and fair access. The governance structures of cloud ecosystems reflect the strategic goals of their regions. AWS and Alibaba Cloud are centralized platforms governed by private corporations with strong links to their respective national economies and policies. U.S. cloud providers operate under frameworks like the CLOUD Act, which allows American authorities access to data stored by U.S. companies even if hosted abroad, raising global concerns about surveillance and compliance (U.S. Department of Justice, n.d.).
In contrast, GAIA-X’s governance is consortium-based, involving multiple stakeholders (governments, companies, and research institutions) who co-develop its technical and ethical standards. This decentralized governance model is an attempt to embed digital sovereignty and democratic oversight into the cloud infrastructure itself, making it a tool of policy as much as technology. The rise of regionally distinct cloud ecosystems has complicated cross-border digital collaboration, a challenge highlighted in analyses by the United Nations University (2024) on global digital governance. Businesses operating internationally must often architect their systems to comply with fragmented legal and operational requirements, such as data localization laws in China, the GDPR in Europe, or U.S. export control policies. Developers are increasingly forced to maintain multiple versions of the same service or to select cloud providers based on geography, not just technical merit. GAIA-X explicitly seeks to enable cross-border services within Europe, but even this collaboration is constrained by the diverse regulatory and national interests of member states. Meanwhile, cooperation between Chinese and Western cloud providers is increasingly rare, reflecting broader technological decoupling trends noted by the Carnegie Endowment for International Peace (2024).
The fragmentation of cloud markets has significant implications for innovation. On one hand, it fosters regional competition, standards development, and resilience. On the other hand, it limits global interoperability and raises costs for startups and SMEs, who must navigate a patchwork of technical and legal systems to scale internationally. Cloud infrastructure development offers a clear example of how global platforms are being fractured by regional imperatives. The contest between AWS, Alibaba Cloud, and GAIA-X highlights the interplay between technological leadership, national interests, and normative values.
Policy implications and recommendations
To bridge the gap between the challenges identified and actionable policies, I propose several recommendations. First, regarding the tension between openness and control, particularly on platforms like GitHub, it is essential to develop hybrid open-source models that balance transparency with national security needs. Governments and organizations should consider establishing national open-source repositories that comply with local laws while still enabling global collaboration. Such a model would mitigate concerns about vulnerabilities in open-source platforms, allowing countries to safeguard sensitive information while still benefiting from collective problem-solving. This approach would also ensure that the benefits of open-source innovation are preserved without compromising national security interests (Cath, 2021; Goodin, 2024).
Second, in addressing the fragmentation of ecosystems like Android, one key recommendation is to establish global interoperability standards for AI datasets to allow for scrutiny. The Android ecosystem has been deeply impacted by geopolitical tensions and regional forks, resulting in technical and regulatory barriers that limit cross-border collaboration. By creating universal data privacy standards that can be adopted across regions, the AI industry would be able to facilitate seamless collaboration while respecting local laws and governance frameworks. A certification body could ensure compliance with these standards, enabling both local market requirements and international compatibility. Such a mechanism would ease the burden on developers and ensure more efficient and secure cross-border collaboration (American Action Forum, 2023).
International agreements on data standards are crucial to facilitate these secure and compliant data transfers across borders. These standards would ensure that businesses can operate internationally while adhering to local regulations, striking a balance between global cooperation and regional privacy protections. At the same time, national regulations could continue to safeguard data privacy and security, promoting confidence in cross-border cloud services without undermining the integrity of regional laws (European Commission, 2020; United Nations Security Council, 2024).
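To suggest what machine-checkable compliance with such standards could look like, the sketch below validates a dataset manifest against a required schema. The field names and schema are hypothetical, invented purely for illustration rather than drawn from any existing certification scheme; a real certification body would publish its own requirements.

```python
# Hypothetical sketch: validating a dataset manifest against a shared
# interoperability standard. Field names are illustrative only, not an
# existing certification scheme.
REQUIRED_FIELDS = {
    "dataset_name": str,
    "provenance": str,      # where and how the data was collected
    "license": str,         # terms under which it may be reused
    "jurisdictions": list,  # regions whose privacy law applies
    "pii_removed": bool,    # whether personal data was scrubbed
}

def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of compliance problems; an empty list means it passes."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in manifest:
            problems.append(f"missing required field: {field}")
        elif not isinstance(manifest[field], expected_type):
            problems.append(f"{field} should be of type {expected_type.__name__}")
    return problems

example = {
    "dataset_name": "street-scenes-v2",
    "provenance": "public dashcam footage, 2021-2023",
    "license": "CC-BY-4.0",
    "jurisdictions": ["EU", "US"],
    "pii_removed": True,
}
print(validate_manifest(example) or "manifest passes the checks")
```

Automating checks of this kind is what would let a certification body scale its review across regions while leaving the substance of each jurisdiction’s privacy rules intact.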
Finally, a networked governance approach should be adopted to ensure that global platforms are developed inclusively and ethically. This would involve a collaborative decision-making process that includes governments, businesses, civil society, and technological experts. A council, for instance, could be tasked with regulating digital platforms in a way that respects both global collaboration and regional legal frameworks. This body would help create governance standards that address ethical concerns, security risks, and market impacts, ensuring that emerging technologies like AI evolve in a way that benefits global innovation while respecting regional priorities (Cowls et al., 2024; O’Brien, 2024).
In sum, these recommendations directly respond to the challenges identified in the case studies, fostering international collaboration, market diversity, and security, while respecting the growing demands for digital sovereignty. By adopting these policies, governments and stakeholders can help manage the complexities of AI governance, balancing innovation with accountability (Pymnts, 2023; Lyden, 2024).
Potential critiques of open-source AI models often center on concerns over national security, as the increased transparency of these systems could expose vulnerabilities that might be exploited by malicious actors. However, this research argues that open-source collaboration can enhance security by leveraging collective problem-solving to identify and address weaknesses, rather than creating new risks. Open-source communities, through their diverse participation and iterative development, can detect vulnerabilities earlier and more effectively than closed, proprietary systems. Another common critique of open-source models is that proprietary systems provide businesses with greater reliability and control, offering more predictability in performance and product delivery. While this might hold in certain contexts, historical evidence suggests that open-source models often outperform proprietary alternatives in adaptability, long-term development, and innovation (Weber, 2005; Midha & Palvia, 2012). For example, open-source projects like Linux and Apache have continually evolved and driven technological progress in ways that proprietary systems could not match due to their more rigid and centralized nature.
Although this study primarily focuses on AI, its implications extend to emerging technologies such as robotics, quantum computing, and cybersecurity. As these fields advance, the increasing sophistication of cyberattacks underscores the urgent need for robust governance frameworks to manage both security risks and ethical concerns (Carnegie Endowment for International Peace, 2024). Effective governance in these contexts should take a networked approach that controls the fundamental facets of technological deployment, including ensuring ethical use and security standards. For AI, data privacy laws like the General Data Protection Regulation (GDPR) provide important areas for policy intervention. Such regulations limit the data available for training models, not to stifle innovation, but to ensure that sensitive data cannot be repurposed for harmful purposes, such as surveillance or exploitation (European Commission, 2020). In this way, governance frameworks can protect against misuse while promoting continued progress in AI development, balancing innovation with safety and responsibility.
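As a purely illustrative sketch of how such a rule translates into engineering practice, the snippet below filters a corpus down to records whose subjects consented to training use before anything reaches a model. The record structure and consent flag are hypothetical, not a real compliance API; the point is that data minimization becomes an explicit, auditable step in the pipeline.

```python
# Hypothetical sketch: data minimization before model training, in the
# spirit of GDPR-style purpose limitation. Record fields and the consent
# flag are illustrative, not a real compliance API.
records = [
    {"text": "support ticket #1 ...", "consented_to_training": True},
    {"text": "support ticket #2 ...", "consented_to_training": False},
    {"text": "support ticket #3 ...", "consented_to_training": True},
]

# Keep only records whose subjects agreed to this specific purpose,
# dropping everything else before it can reach the training pipeline.
training_corpus = [r["text"] for r in records if r["consented_to_training"]]

print(f"{len(training_corpus)} of {len(records)} records eligible for training")
```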
Bibliography
American Action Forum. (2023). The DOJ and Nvidia: AI market dominance and antitrust concerns. https://www.americanactionforum.org/insight/the-doj-and-nvidia-ai-market-dominance-and-antitrust-concerns/
Carnegie Endowment for International Peace. (2024). Governing military AI amid a geopolitical minefield. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield/
Cath, C. (2021). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. SAGE Open, 11(3). https://doi.org/10.1177/20539517211039493
Cowls, J., King, T. C., Taddeo, M., & Floridi, L. (2024). Addressing algorithmic bias: A framework for ethical AI development. Ethics and Information Technology, 26(1), 45–62. https://doi.org/10.1007/s10676-024-09746-w
Cybersecurity and Infrastructure Security Agency. (2018, February). Petya ransomware. https://www.cisa.gov/news-events/alerts/2017/07/01/petya-ransomware
Cybersecurity and Infrastructure Security Agency. (n.d.). Recent U.S. efforts on AI policy. Retrieved April 22, 2025, from https://www.cisa.gov/ai/recent-efforts
European Commission. (2020). White paper on artificial intelligence: A European approach to excellence and trust (COM(2020) 65 final). https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
Feng, E. (2024, April 17). GitHub has become a haven for China’s censored internet users. LAist. https://laist.com/news/npr-news/github-has-become-a-haven-for-chinas-censored-internet-users
Goodin, D. (2024, March 29). Backdoor found in widely used Linux utility breaks encrypted SSH connections. Ars Technica. https://arstechnica.com/security/2024/03/backdoor-found-in-widely-used-linux-utility-breaks-encrypted-ssh-connections/
International Monetary Fund. (2024). AI and the future of work: Navigating automation and displacement. https://www.businessinsider.com/we-asked-gpt4-summarize-imf-profound-concerns-about-gen-ai-2024-6
Investment Council. (2023). Private equity propels America to the front of the AI race. https://www.investmentcouncil.org/private-equity-propels-america-to-the-front-of-the-ai-race/
Lind, A., & Park, Y. J. (2023). Egocentric bias and perceptions of AI fairness. International Journal of Communication, 17, 2045–2062. https://ijoc.org/index.php/ijoc/article/view/20806
Lyden, J. (2024, April 11). One engineer may have saved the world from a massive cyberattack. NPR. https://www.npr.org/2024/04/11/1244174104/one-engineer-may-have-saved-the-world-from-a-massive-cyber-attack
McDermott, J. (2025, February 6). DeepSeek says it built its chatbot cheap. What does that mean for AI’s energy needs and the climate? AP News. https://apnews.com/article/deepseek-ai-china-climate-fossil-fuels-00c594310b22afbf150559d08b43d3a5
Metz, C., & Tobin, M. (2025, January 27). What is DeepSeek? And how is it upending A.I.? The New York Times. https://www.nytimes.com/2025/01/27/technology/what-is-deepseek-china-ai.html
Midha, V., & Palvia, P. (2012). Factors affecting the success of open source software. Journal of Systems and Software, 85(4), 895–905. https://doi.org/10.1016/j.jss.2011.11.030
O’Brien, M. (2024, April 9). U.S. ahead in AI innovation, easily surpassing China in Stanford’s new ranking. Associated Press. https://apnews.com/article/us-china-ai-standings-stanford-report-2024
Organisation for Economic Co-operation and Development. (2024). Artificial intelligence and wage inequality. https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/04/artificial-intelligence-and-wage-inequality_563908cc/bf98a45c-en.pdf
Pymnts. (2023). United States leads the world in generative AI investment. https://www.pymnts.com/news/artificial-intelligence/2023/united-states-leads-world-generative-ai-investment-innovation-implementation/
Singh, S., Verma, M., & Kumar, N. (2017). Open source software vs proprietary software. International Journal of Scientific & Engineering Research, 8(12), 735–741. http://www.ijser.org/researchpaper/Open-Source-Software-vs-Proprietary-Software.pdf
Tan, C. (2024, April 15). GitHub is getting blocked in parts of China. Tech in Asia. https://www.techinasia.com/github-blocked-china
Techloy. (2024, April 19). Huawei set to push HarmonyOS globally. Techloy. https://www.techloy.com/huawei-set-to-push-harmonyos-globally/
The White House. (2025, January). Removing barriers to American leadership in artificial intelligence. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
U.S. Department of Justice. (n.d.). Cloud Act resources. U.S. Department of Justice. https://www.justice.gov/criminal/cloud-act-resources
U.S. Department of State. (2020, August 17). The United States further restricts Huawei access to U.S. technology. U.S. Department of State Archive (2017–2021). https://2017-2021.state.gov/the-united-states-further-restricts-huawei-access-to-u-s-technology/
United Nations Security Council. (2024). Security Council debates AI use in conflict zones. https://press.un.org/en/2024/sc15946.doc.htm
United Nations University. (2024). The militarization of AI has severe implications for global security. https://unu.edu/article/militarization-ai-has-severe-implications-global-security-and-warfare
Weber, S. (2005). The success of open source. Harvard University Press.