September 4, 2024

How New Jersey Could Become a Safe Haven for Open Source AI

As California’s restrictive AI laws threaten innovation, New Jersey has a unique opportunity to become a safe haven for open source AI

AI

Contributed by Justin Trugman - Cofounder & Head of Technology

AI innovation in the United States is at a critical juncture, with open source AI facing unprecedented regulatory challenges. While there’s broad consensus on the need for AI regulation, many current and proposed policies are poorly crafted and threaten to stifle innovation rather than guide it. This mishandling could cost the Western world its edge in AI, especially as China rapidly advances. Although US models still retain a decisive lead on user-feedback benchmarks like the LMSYS Chatbot Arena, China is making significant headway in other areas, such as the GAIA benchmark (as of this article’s writing, 9/4/2024), and is contributing cutting-edge research to platforms like HuggingFace.

As the US risks falling behind, the question isn’t whether we should regulate AI, but how to regulate AI without crippling progress. New Jersey, with its emerging AI Hub based out of Princeton University, has a unique opportunity to become a safe haven for open source AI—fostering innovation, protecting developers, and setting a standard for sensible AI regulation that balances risk and technological advancement.

To understand the challenges open source AI is facing, let's examine three major pieces of regulation currently threatening the industry: 

  • Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from the Biden-Harris administration, which has been in effect since October 30, 2023.
  • California’s SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which recently passed the state assembly and is now awaiting Governor Newsom’s decision to either sign or veto.
  • California’s AB 3211, the California Digital Content Provenance Standards, which has fortunately been “ordered to inactive file” after recent amendments. Even so, the bill set a troubling precedent for misguided AI regulation, paving the way for potentially disastrous future policies.

Executive Order: Over-Regulating the Basics

The government has now decided to regulate math. According to the Executive Order from the Biden-Harris administration, AI models trained with a “quantity of computing power greater than 10^26 integer or floating-point operations”—essentially performing more than 10^26 math operations in total over the course of training—will need to jump through a series of bureaucratic hoops.
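
For a sense of how large, and how arbitrary, that number is, here is a rough back-of-the-envelope calculation. The per-GPU throughput and utilization figures below are illustrative assumptions, not anything specified in the order:

```python
# Rough scale of the 10^26 FLOP threshold, under assumed figures:
# one modern GPU at ~1e15 FLOP/s peak and ~40% sustained utilization.
THRESHOLD_FLOPS = 1e26
PEAK_FLOP_PER_SEC = 1e15   # assumed per-GPU peak throughput
UTILIZATION = 0.40         # assumed fraction of peak actually sustained

effective_flop_per_sec = PEAK_FLOP_PER_SEC * UTILIZATION
gpu_seconds = THRESHOLD_FLOPS / effective_flop_per_sec
gpu_years = gpu_seconds / (3600 * 24 * 365)

print(f"~{gpu_years:,.0f} GPU-years on a single GPU")                         # ~7,927
print(f"~{gpu_seconds / 10_000 / 86_400:,.0f} days on a 10,000-GPU cluster")  # ~289
```

Today only the largest labs cross that line, but hardware improves every year, so a threshold pinned to a fixed number of operations only gets easier to hit.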

It’s almost as if we’re reliving the days when PlayStation 2 consoles got caught up in outdated export regulations: not because of any specific threat they posed, but because their processors fell within rules designed to prevent “high-speed” processors from being repurposed for missile guidance systems by adversaries like Saddam Hussein. The PlayStation 2’s processor seems laughably slow by today’s standards, which makes it a clear example of how computational limits pegged to current technology quickly become irrelevant and stifle future innovation.

By imposing these computational limits, the order doesn’t just create unnecessary hurdles; it inadvertently punishes the smaller developers and open source communities that lack the resources to comply with these complex requirements. Instead of fostering a supportive environment for AI innovation, this approach risks pushing talent and breakthroughs out of the US, ceding our competitive advantage to countries with fewer regulatory burdens.

SB 1047: Misplaced Liability and Its Threat to Open Source

California’s SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is another example of misguided regulation that poses a serious threat to the AI landscape, particularly for open source projects. A major flaw in the bill is that it holds AI developers liable for how their models are used, even when the models’ safety measures are bypassed. By analogy: if a terrorist modified a Toyota Mirai, bypassing its safety systems to turn its hydrogen fuel tank into a makeshift bomb (similar to vehicle modifications seen in Ukraine), Toyota wouldn’t be held responsible. Under SB 1047, however, if someone bypasses the safety mechanisms of an AI model to perform illegal activities, the original developers would still be liable. This kind of liability is not only unprecedented but also fundamentally misunderstands the nature of open source software, where control over how code is used or modified after release is inherently limited.

The bill also extends the Biden-Harris administration’s regulation of AI models that exceed certain computational thresholds by adding a vague clause covering the amount of money spent training the model. In other words, even if you develop an efficient AI model that falls below the federal compute limits, you could still be dragged into compliance simply because your model cost a certain amount to train. This punishes efficiency and puts open source contributors under a constant threat of arbitrary regulation. Additionally, calculating the amount of money spent training a model is non-trivial. Should the figure include synthetic data generated for training? Do post-training and fine-tuning count? What about the costs of experimentation and failed training runs? The bill does not make clear what should be included, or how it should be weighted, to compute the “cost of training,” which will cause mass confusion among companies training AI models.
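
To make the ambiguity concrete, here is a sketch with entirely hypothetical dollar figures (both the line items and the $100 million threshold are illustrative assumptions, not numbers taken from the bill):

```python
# Hypothetical line items for one frontier training project.
# SB 1047 does not specify which of these count toward the "cost of training."
line_items = {
    "final_training_run":        70_000_000,  # the run that produced the released model
    "failed_experimental_runs":  25_000_000,  # experiments that never shipped
    "synthetic_data_generation": 12_000_000,  # compute spent generating training data
    "post_training_finetuning":   8_000_000,  # fine-tuning after the main run
}

THRESHOLD = 100_000_000  # illustrative dollar threshold

narrow = line_items["final_training_run"]  # narrow reading: only the final run counts
broad = sum(line_items.values())           # broad reading: everything counts

print(f"Narrow reading: ${narrow:,} -> covered: {narrow > THRESHOLD}")  # $70,000,000 -> False
print(f"Broad reading:  ${broad:,} -> covered: {broad > THRESHOLD}")    # $115,000,000 -> True
```

The same project is regulated or unregulated depending purely on an accounting choice the bill never makes.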

In its attempt to safeguard AI, SB 1047 instead creates a highly restrictive environment in which innovation will be stifled by legal uncertainty. Developers, particularly in the open source community, would be left to navigate a legal minefield that seems designed more to discourage AI advancement than to support it.

AB 3211: Near-Impossible Standards and the Threat to Innovation

AB 3211 seems to be a reaction to the use of AI in disgusting applications like non-consensual deepfake sexual content. While well-intentioned, the bill falls flat in execution.

The bill aims to increase transparency around generative AI outputs by requiring developers to maintain a public database of digital fingerprints for any content that could be mistaken for human-created. This standard is practically infeasible, especially for open source projects, where models can be freely downloaded and modified.
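
One way to see the problem: if the required “digital fingerprint” behaves like an exact content hash, any trivial modification defeats the database lookup. A minimal sketch, using a plain SHA-256 hash as a stand-in, since the bill does not pin down a fingerprinting scheme:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Stand-in fingerprint; AB 3211 does not specify an actual scheme.
    return hashlib.sha256(content).hexdigest()

original = b"An AI-generated paragraph of text."
modified = b"An AI-generated paragraph of text. "  # one trailing space added

print(fingerprint(original)[:16])                      # one digest
print(fingerprint(modified)[:16])                      # a completely different digest
print(fingerprint(original) == fingerprint(modified))  # False
```

Fuzzier perceptual hashes and watermarks survive trivial edits better, but once model weights are openly distributed, anyone can modify the model or its sampling code to strip the watermark entirely, which is exactly why the requirement is unworkable for open source.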

Instead of imposing near-impossible standards on developers, the focus should be on regulating applications of AI technology that harm society, as the US Senate has already done with the DEFIANCE Act, which allows victims of deepfake sexual abuse to pursue legal action against their perpetrators.

This is the trend that numerous AI leaders like Yann LeCun, Marc Andreessen, Ben Horowitz, Andrew Ng, and many more have been advocating for—regulating the harmful uses of AI rather than the technology itself. By targeting specific harmful applications, as the DEFIANCE Act does, we can protect innovation and ensure that AI continues to develop in ways that benefit society.

AB 3211’s overly stringent requirements risk setting a precedent for regulation that stifles innovation by making compliance so burdensome that only the largest players can keep up, sidelining the open source contributors and small startups that drive much of today’s AI advancement.

How New Jersey Should Position Itself to Take Advantage of the Regulatory Havoc

The chaos resulting from these misguided regulations presents an opening for New Jersey to set itself apart with sensible AI policies that foster innovation and support the open source community. By crafting regulations that focus on the harmful applications of AI rather than stifling the technology itself, New Jersey can establish itself as a safe haven for AI innovation.

Much as France vehemently defends Mistral against restrictive EU regulations, New Jersey should strongly oppose the nonsensical rules imposed by the federal government and proposed by California that affect the entire AI industry. In doing so, it can create a bastion of safety for AI innovation within the Garden State, anchored by the recently established Princeton AI Hub.

The AI Hub should position itself to welcome these regulation-weary open source AI leaders by offering tax credits to significant contributors to frontier open source AI projects, enhancing the NJ EDA Investor Match program to support cutting-edge AI startups, and investing in nuclear infrastructure co-located with AI datacenters to make the energy costs of running AI servers more affordable.

By positioning itself as a state that values innovation and transparency, New Jersey can put in place practical regulation with favorable laws for training and developing AI models, paving the way to become the default choice of venue in AI model licenses, ironically mirroring how Delaware took the title of preferred incorporation venue from New Jersey.

This is New Jersey’s moment to lead by example, ensuring that AI continues to evolve in a way that benefits society and maintains the United States’ competitive edge on the global stage. By establishing a future where AI development is both safe and unrestricted by outdated fears, New Jersey can secure its place as the leading hub for AI innovation in the country.

Many thanks to Aaron Price, Mark Yackanich, and Dean Ball for reviewing this piece and providing valuable feedback.