The Louisiana-based company 365 Labs, which develops software for law enforcement agencies, and PoliceReports.ai, founded by a former Florida police officer, are pitching technology that can write police reports within a matter of seconds. The two products work differently, but both follow the same basic premise: police share details of an incident with a chatbot and get back a legally admissible document written by AI.
Police are attracted to the same promises that have made generative AI attractive in other industries: the technology offers a quick way to spit out clean copy, sparing officers one of the big nuisances of their jobs. A 2019 survey found that police spend up to three hours a shift on paperwork.
But government agencies and academics are already raising alarms, pointing out that the idea steers straight into two weaknesses of the technology: its propensity to make mistakes, even fabricate facts, and the risk of leaking sensitive information.
Large language models such as ChatGPT are known for inaccuracies, often described as “hallucinations.” Some of the higher-profile examples have resulted in allegations of defamation and perpetuating racist theories. These mistakes — though harmless in many contexts — could have life-altering consequences when used in law enforcement, Chris Gilliard, a privacy researcher and a Just Tech fellow at the Social Science Research Council, said.
“If you want to use it to tell your kid a bedtime story about a ninja in the style of a rap by Eminem, sure,” Gilliard said. “But when it’s a life-or-death thing, or it has the potential for long-term effects, it doesn’t seem like a good application.”
In its marketing material, 365 Labs claims that AI-written police reports could actually improve accuracy compared with reports written by officers, because the software is less prone to misspellings and grammatical errors. The software automatically puts details such as names and locations into templates, according to the company. PoliceReports.ai’s website says its AI produces accurate reports, but also notes that it’s essential to review and validate all outputs.
365 Labs and PoliceReports.ai didn’t respond to requests for comment.
While there are no federal regulations on how government agencies should use AI, many proposed guidelines for AI in the public sector stress the need for human review. New Jersey’s guidelines for government use of generative AI warn that AI-written content shouldn’t be used without careful editing, and that it shouldn’t be used for any sensitive topics.
An Interpol report on ChatGPT’s effects on law enforcement also highlighted that AI does not meet the standards of accuracy and impartiality needed for police reports.
“The final responsibility for the accuracy and quality of police reports shall always remain with police officers who have the necessary training and expertise,” the report stated.
The requirement for human review raises the question of how much time the software would actually save, if checking and correcting AI-generated drafts simply becomes a new task for officers to handle on top of their existing duties.
Jonathan Parham, a former police director in Rahway, New Jersey, said he doesn’t support a police report mostly written by AI, calling the concept “troubling.” He raised concerns that AI-written reports would fail to include nuanced details that only a human could describe, and that even a slight error or omission could invalidate the entire report.
But he’s not against using AI to save time. Instead, he has proposed using generative AI more as an aid than a writer, creating a ChatGPT bot he called the “Police Report-Writer assistant.” He points out that many officers struggle with spelling and grammar, making writing mistakes that could jeopardize an entire case.
His ChatGPT bot, he said, asks officers a set of questions related to the incident, and then organizes the details while cleaning up grammar and spelling errors, but never fully writes the reports.
“The AI should never replace the officer — it should enhance their operational competency,” Parham said.
He said his chatbot has gotten mixed feedback from police who’ve tried it out, with younger officers touting its benefits and veteran officers resisting the technology.
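For a sense of what an “organize, don’t author” assistant could look like under the hood, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and question set are illustrative assumptions, not Parham’s actual bot.

```python
# A sketch of an "organize, don't author" report assistant in the spirit of
# the approach Parham describes. The model name, prompt wording, and question
# set are illustrative assumptions, not his actual bot.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = """\
You are a report-writing assistant for police officers.
Walk the officer through a fixed set of questions (who, what, when, where,
what actions were taken). Then return the officer's own answers organized
under those headings, with spelling and grammar corrected.
Do not add, infer, or embellish facts, and do not write the narrative report."""

def organize_notes(officer_answers: str) -> str:
    """Return the officer's raw answers cleaned up and grouped by heading."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any chat-capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": officer_answers},
        ],
    )
    return response.choices[0].message.content
```

The key design choice in this kind of tool is in the prompt, not the model: the assistant is told to reorganize and correct only what the officer supplied, leaving the narrative, and the responsibility for it, with the human.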
Chips part two
Secretary of Commerce Gina Raimondo says the U.S. might need a sequel to the CHIPS and Science Act to achieve dominance in the semiconductor industry.
As POLITICO’s Brendan Bordelon reported yesterday for Pro subscribers, Raimondo told Intel CEO Pat Gelsinger at an event that she’s “out of breath running as fast as I can to implement CHIPS one,” but “All of that being said, I suspect there will have to be — whether you call it CHIPS Two or something else — continued investment… if we want to lead the world — look, we fell pretty far. We took our eye off the ball.”
The U.S. gave the domestic semiconductor industry a $53 billion subsidy as part of the CHIPS Act passed in 2022. Intel is a recipient of grant money from that law, which will help fund a massive microchip complex outside Columbus, Ohio; Gelsinger said the details would be announced “very soon.”
State of the LLMs
An international team of AI researchers found that while large language models have made a great leap over the past few years, there are still both technical and ethical hurdles the field desperately needs to clear.
In a pre-print that looks at the three leading LLM families — OpenAI’s GPT, Meta’s LLaMA, and Google’s PaLM — the researchers led by Snap machine learning lead Shervin Minaee evaluate the technology’s current state and make some recommendations for developers going forward. They conclude that while “the pace of innovation is increasing rather than slowing down” in the field, there’s still much to do.
Take context, for example: in one scenario the researchers describe, an LLM needs a lot of data about a user in order to efficiently recommend a good movie. But attention-based models, the dominant form of LLM at the moment, “are highly inefficient for longer contexts,” which might drive more research into different AI architectures.
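To make the “highly inefficient for longer contexts” point concrete, here is a minimal sketch (an illustration, not taken from the paper) of why self-attention cost grows quadratically with context length: every token attends to every other token, so the score matrix alone scales with the square of the number of tokens.

```python
import numpy as np

def attention_scores(q: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention scores: an (n, n) matrix for n tokens."""
    return q @ k.T / np.sqrt(q.shape[-1])

# Tiny runnable demo: 8 tokens with a hypothetical hidden size of 64.
n, d_model = 8, 64
q = k = np.random.randn(n, d_model)
print(attention_scores(q, k).shape)  # (8, 8): every token scores every other

# The same (n, n) matrix is the bottleneck at scale: doubling the context
# quadruples its size. Sizes below assume float32 scores, one head, no batch.
for tokens in (1_024, 8_192, 65_536):
    gib = tokens * tokens * 4 / 2**30
    print(f"{tokens:>6} tokens -> {gib:7.2f} GiB of attention scores")
```

That quadratic blow-up in memory and compute is what pushes researchers toward alternative architectures and more efficient attention schemes for long-context work.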
They also emphasize that “As LLMs are increasingly deployed in real world applications, they need to be protected from potential threats, to prevent them being used to manipulate people or spread mis-information,” something they find the field is currently working on.
Source: Politico