OpenAI says its latest GPT-4o model is ‘medium’ risk

OpenAI has released its GPT-4o System Card, a research document that outlines the safety measures and risk evaluations the startup conducted before releasing its latest model.

GPT-4o was launched publicly in May of this year. Before its debut, OpenAI used an external group of red teamers, or security experts trying to find weaknesses in a system, to identify key risks in the model (which is a fairly standard practice). They examined risks such as GPT-4o creating unauthorized clones of someone's voice, generating erotic or violent content, or reproducing chunks of copyrighted audio. Now, the results are being released.

According to OpenAI’s own framework, the researchers found GPT-4o to be of “medium” risk. The overall risk level was taken from the highest risk rating across four categories: cybersecurity, biological threats, persuasion, and model autonomy. All of these were deemed low risk except persuasion, where the researchers found that some writing samples from GPT-4o could be better at swaying readers’ opinions than human-written text, although the model’s samples weren’t more persuasive overall.

An OpenAI spokesperson, Lindsay McCallum Rémy, told The Verge that the system card includes preparedness evaluations created by an internal team, alongside external testers listed on OpenAI’s website as Model Evaluation and Threat Research (METR) and Apollo Research, both of which build evaluations for AI systems.

This isn’t the first system card OpenAI has released; GPT-4, GPT-4 with vision, and DALL-E 3 were similarly tested, and that research was released. But OpenAI is releasing this system card at a pivotal time. The company has been fielding nonstop criticism of its safety standards, from its own employees to state senators. Only minutes before the release of GPT-4o’s system card, The Verge exclusively reported on an open letter from Sen. Elizabeth Warren (D-MA) and Rep. Lori Trahan (D-MA) that called for answers about how OpenAI handles whistleblowers and safety reviews. That letter outlines the many safety issues that have been called out publicly, including CEO Sam Altman’s brief ousting from the company in 2023 as a result of the board’s concerns and the departure of a safety executive, who claimed that “safety culture and processes have taken a backseat to shiny products.”

Moreover, the company is releasing a highly capable multimodal model just ahead of a US presidential election. There’s a clear potential risk of the model accidentally spreading misinformation or getting hijacked by malicious actors — even if OpenAI is hoping to highlight that the company is testing real-world scenarios to prevent misuse.

There have been plenty of calls for OpenAI to be more transparent, not just with the model’s training data (is it trained on YouTube?) but with its safety testing. In California, where OpenAI and many other leading AI labs are based, state Sen. Scott Wiener is working to pass a bill to regulate large language models, including restrictions that would hold companies legally accountable if their AI is used in harmful ways. If that bill is passed, OpenAI would have to subject its frontier models to state-mandated risk assessments before making them available for public use. But the biggest takeaway from the GPT-4o System Card is that, despite the group of external red teamers and testers, a lot of this relies on OpenAI to evaluate itself.