
There are a lot of ideas floating around in Augusta about how to regulate large language models, commonly referred to as artificial intelligence. You’ve all heard the brand names — ChatGPT, Claude, Grok. It’s the hot-button issue of the year, all over the country, at the state and federal level.

State legislators want to study its effects on public education, protect kids and enact a whole slew of regulations. They want to prevent AI from being used to deny health insurance claims, keep landlords from using it to set rents and stop candidates from using it in campaign ads.

They also want to restrict how the state uses artificial intelligence to make policy decisions. Some of these bills have been killed, some have been carried over and some are still being considered — but together they establish a pattern.

The pattern, here in Maine, is that Democrats want to restrict or overregulate artificial intelligence. For instance, preventing landlords from using AI to establish rents? That doesn’t make much sense. A landlord who wishes to find rents in the area can simply search online, then set the rent wherever he or she wants, above or below the market.

This is an important point, by the way: artificial intelligence is just a tool.

It can be used for good or ill, however you define that. The rent example illustrates this perfectly. One could use the tool to set the highest possible rent, or the lowest. One could use it to maximize profits, or to charge just enough to stay afloat.


That gets us to one particular bill that would have banned Maine landlords from using AI tools to set rents at all, LD 1552, “An Act to Prohibit Landlords from Setting Rents Through the Use of Artificial Intelligence.”

Note the language. It would prohibit landlords from using artificial intelligence for that purpose regardless of intent or scope. Fortunately, that particularly ill-conceived bill was killed in committee, but it is hardly the only example. Legislators want to ban the use of artificial intelligence in all sorts of fields, from health insurance claims to mental health treatment to the very structure of state government itself.

It’s important to note here that regulation of artificial intelligence is not a bad thing. Every industry needs regulation in a democratic country, and artificial intelligence is no exception. The point with artificial intelligence, as with any other industry, is that we can put in place reasonable legal guardrails without completely killing it. Unfortunately, the bills currently under consideration in Maine don’t meet that standard. Instead, they try to overregulate or kill the industry within our state.

A big part of the problem, in both Augusta, Maine, and Washington, D.C., is that many of the legislators proposing this legislation probably have not used artificial intelligence much themselves. Not everyone needs to use artificial intelligence all the time, but state and federal legislators proposing brand-new regulations on the technology should, at least, understand it.

Ideally, any state legislator (or member of Congress) who proposes any new regulation on artificial intelligence should be thoroughly familiar with the paid versions of all three major U.S. services: Grok, ChatGPT and Claude. The important thing to point out here is that the paid versions can be dramatically different from the free versions. If you only occasionally use the free versions, you don’t really understand the technology and what it can do.

So, yes, artificial intelligence can and should be regulated by government, but it should only be regulated by people who actually understand how it works. We cannot trust the government to regulate anything it does not fully understand. That’s often the problem with emerging technologies: the legislators trying to write the laws don’t really understand what they’re attempting to regulate.

In many ways, the risk of overregulating artificial intelligence, and crippling our abilities in the process, is worse than the risk of letting our relationship with it continue to develop. That’s why the state and federal governments should move cautiously with any regulations. We don’t want to rob ourselves of future prosperity. Rather, we want to set ourselves up for future success. This pivotal moment will determine where we go.
