OpenAI rolls out ‘Advanced’ security mode for vulnerable accounts


For anyone fearing that their ChatGPT or Codex account could be targeted by attackers, OpenAI announced Thursday that it is adding a new, optional layer of account protection. The feature, called Advanced Account Security, enforces strict access controls designed to make account takeover attacks extremely difficult.

Such measures are not new in account security. Google, for example, introduced its Advanced Protection Program nearly a decade ago. But with mainstream AI services spreading rapidly around the world, there is an urgent need for a comparable set of baseline protections. OpenAI says the launch is part of the broader cybersecurity strategy it announced earlier this month.

“People are turning to AI for deeply personal questions and increasingly high-stakes business,” the company said Thursday in a blog post. “Over time, a ChatGPT account can carry sensitive personal and professional context, sitting at the center of connected tools and workflows. For some people, such as journalists, elected officials, political dissidents, researchers, and those who are particularly security-conscious, the risks are higher.”

Users who enable Advanced Account Security can no longer log in with a regular password. Instead, they must register two physical security keys or passkeys, which significantly reduces the risk of successful phishing attacks. The feature also disables email, SMS, and other conventional account recovery methods; users must instead rely on recovery keys, backup passkeys, or physical security keys. OpenAI says it has partnered with Yubico to offer discounted YubiKey packages to Advanced Account Security users.

[Image courtesy of OpenAI]

Most importantly, once a user turns on Advanced Account Security, they can no longer ask OpenAI’s support team to recover the account, because support no longer has access to or control over any of the recovery options. That means attackers can’t break into accounts by targeting support channels with social engineering attacks.

Advanced Account Security also enforces shorter login windows and session lifetimes, so users must re-authenticate on their devices more often. It issues an alert any time someone logs into a protected account and points users to the control panel where they can review active ChatGPT and Codex sessions. Additionally, while OpenAI lets any user opt out of having their ChatGPT conversations used for model training, that opt-out is enabled by default for Advanced Account Security users.

Members of OpenAI’s Trusted Access for Cyber program, which gives cybersecurity professionals, researchers, and others early access to new models, will be required to enable Advanced Account Security starting June 1, or to certify instead that they enforce phishing-resistant authentication through their organization’s single sign-on mechanism.
