Cities Across The US Are Embracing AI Guidelines For Local Government Workers

Mayor Michelle Wu signs the Technology Modernization Executive Order on Aug. 18 to accelerate improvements to the City of Boston’s tech infrastructure and digital processes, making it easier to harness innovation and keep projects moving forward. (Courtesy of the Boston Department of Innovation & Technology)
While some states and the federal government take their time in considering how artificial intelligence can and should be used, municipalities across the U.S. have been forging their own way in making AI policies for their government employees.
“AI is generally useful,” Boston’s Chief Innovation Officer Santiago Garces said. “But it is a set of technologies that also carries unique risks that need to be considered. And I think that our employees are generally concerned about accuracy, privacy, security and intellectual property.”
Boston was among the first cities in the U.S. to create a set of guidelines for its employees, rolling out a document in May 2023 that outlined the purpose of generative AI in government work, sample use cases and a set of guiding principles.
Garces and his team watched the rollout and rapid growth of ChatGPT in 2022 and believed AI tools would quickly see widespread adoption across most industries. The use of AI felt inevitable for many of the tedious or repetitive tasks government employees handle, and Garces said his team wanted to work with employees to figure out the ethical use of AI instead of resisting it.
“The notion behind the guidelines was enabling this city to be able to get into this period of responsible experimentation, so that we could learn,” Garces said. “Instead of just waiting to see what happened, we would look at managing the risk in a way that was proactive, and engage with all of our workforce as partners in learning.”
What Do The Guidelines Say?
Boston is far from alone in enacting its own AI policy. Many other cities and counties across the U.S. have developed similar policies in recent years, usually in the form of “guidelines” that steer how a government employee may evaluate AI’s accuracy or efficiency with specific tasks.
Guidelines can mirror some state legislation, and dictate when not to use the technology, like with confidential information or in making life-altering decisions such as hiring. The guidelines are meant to stay open-ended and to be flexible with changing state regulations, several city tech officials said.
In Lebanon, New Hampshire, the city’s AI policy is deliberately designed to be reshaped as state or city laws change, said Melanie McDonough, the city’s chief innovation and AI officer.
“We’re trying to build a policy that’s robust, that can withstand the pace at which AI is changing,” she said. “Policy is harder to change. Guidelines can be updated more frequently just to say, ‘oh, be aware, we’re actually not allowing the use of this particular feature internally because it doesn’t have enough protection.’”
The city’s policy, first released in December 2023, drew heavily from the Biden administration’s 2022 Blueprint for an AI Bill of Rights. It centers on how city workers may use AI operationally, how they can prioritize privacy and protection in their use, and how they may navigate new AI tools as the technology becomes more pervasive in everyday life.
Boston’s guidelines outline the purpose of generative AI and call it a tool: “We are responsible for the outcomes of our tools,” the policy says. The guidelines list several principles, including empowerment, inclusion, respect, transparency, accountability, innovation, risk management, privacy, security and public purpose, and include a list of “dos” and “do nots” for upholding those principles while using AI.
“We were thinking about how we capture the risk and opportunity specific to the technology in a way that does not create conflicting or additional things that might conflict with existing regulations,” Garces said of Boston’s policy.
Tempe, Arizona, released a similar policy for its city workers just a month after Boston in 2023. Its principles also cover the purpose and scope of the technology, human-centered approaches to using AI, and human responsibility for AI outcomes.
One of its creators, Stephanie Deitrick, Tempe’s chief data and analytics officer, said she began thinking about an AI framework about a year and a half before the city released it, as she was researching data, bias and inequity in machine learning algorithms. When ChatGPT was released, Deitrick said, she realized that generative AI chatbots would soon be in the hands of everyday people, and she felt the city needed safeguards.
All new AI tools are reviewed by a governance committee, Tempe’s Director of Information Technology Jared Morris said, and state and federal legislation is reviewed and incorporated as needed. Though Tempe’s policy specifically talks about AI use, Deitrick said it’s broad enough to apply to any technology city workers use.
“These are our values, and we are going to make sure that whatever governance we have aligns with these values,” she said of Tempe’s policy. “And then it lays out the responsibility to the city, IT, the departments and the users that they have to participate in governance, and they are active users who are actively responsible for what they’re doing.”
AI Uses In Local Government
Garces’ team is looking to update its 2023 guidelines, and surveyed its workforce this spring about how they currently use AI.
Of those surveyed, 60% of employees said that they use AI in some form at least once a week, and 78% said that they were interested in learning more about generative AI. Most of the current uses are for drafting memos, proofreading emails, and some data analysis or code generation, Garces said.
A few employees use multimodal models that can help generate images or videos. Garces said one of the city’s departments recently used Google’s Veo 3 to create a 20-second video about best practices for trash disposal. A preliminary quote for producing the educational video with traditional filmmaking was around $20,000, but using AI cost the department about $30 in credits through Google, he said.
“You start seeing the potential impact in helping us do things that were either out of our reach or being able to do them faster or being able to do them for less money,” Garces said.
In Tempe, city employees have about 150 different applications of AI in their work, Morris said, with use peaking at about 100,000 uses in a given month. Many of these uses involve “off the shelf” models like ChatGPT that can assist with writing or research tasks. But others are paid tools, like a partnership with AI company Axon, whose real-time object recognition Tempe uses for a “whole of city” approach, Morris said.
For example, if someone calls into the emergency department about a person in distress in a blue car, the object recognition system can alert officials to blue cars out on the road, and get police or medical staff to them.
The city’s guidelines are careful to outline the potential harms that decision-making and generative AI tools can contribute to, Morris said. Though the city uses object recognition, it isn’t using facial recognition technology, he said.
“We’re really careful, trying to be very, very careful on anything that could possibly deprive anyone of liberty or job opportunities,” Morris said.
Why Strike Out On Their Own?
McDonough said her team found it more difficult to operate without an AI policy than with one as she realized how important the technology was going to be. Developing a policy for city government is different from doing so at the state or federal level, she said: “we go and answer to a city council and we answer to the public.”
“In some ways, if people aren’t buying into what they’re hearing at the federal level, we don’t want them to get lost in the weeds,” she said. “Like, okay, here’s where we are. This is our policy, right here. You live in Lebanon, New Hampshire, and this is what we’re talking about.”
In Tempe, Deitrick said she felt that AI was so powerful that there was real potential for something to go wrong if the city didn’t outline how it expected its employees to use the technology.
“I think without policy and strong governance, not that mistakes won’t happen — but it just opens the door to very loose intentioned use of technology and data,” Deitrick said. “I think it makes us more intentional in what we’re doing and requires people to have the conversation.”
Garces echoed McDonough, saying that city workers may feel a pull to be more connected and responsive to their constituents than those at higher levels of government, and may be able to act faster on societal developments, like AI, that have the power to drastically change the lives of people in their city.
“We think that there’s a duty in trying to make sure that our constituents are informed and that they participate in these things,” Garces said.
Garces said he’d be happy for state or federal government officials to take inspiration from the policies that Boston and other cities have developed.
“My hope would be that state and federal regulators are working together with cities and not working against them,” Garces said. “Because I think that we have a lot of information and knowledge about how some of these things are starting to occur.”
This story was published by Nebraska Examiner, an editorially independent newsroom providing a hard-hitting, daily flow of news. Read the original article: https://nebraskaexaminer.com/2025/08/21/repub/cities-across-the-us-are-e...