OpenAI job listing reveals growing fears over AI misuse in cybersecurity and mental health

OpenAI is hiring a Head of Preparedness with a $555,000 salary as CEO Sam Altman warns that AI models are beginning to uncover critical security vulnerabilities. The role will focus on managing high-risk AI capabilities, including cybersecurity threats, biosecurity, and self-improving systems.

Altman acknowledged growing concerns over AI’s mental health impact, marking a shift in OpenAI’s public stance on safety risks.
Updated on: Dec 29, 2025 | 11:33 AM

New Delhi: OpenAI has announced that it is hiring a new Head of Preparedness, an executive role created amid growing concern over the dangers of advanced AI systems. CEO Sam Altman announced the move on X, saying the company's models can now find critical vulnerabilities in computer security systems. The position pays $555,000 plus equity, a sign of how seriously OpenAI is taking the problem.

Altman explained that while AI systems are delivering enormous benefits, they are also creating real problems that can no longer be ignored. His remarks signal a visible shift in OpenAI's public tone, with the company now more willing to acknowledge risks around cybersecurity, self-improving AI, and mental health. The new hire will arrive just as OpenAI re-evaluates how it handles high-impact AI threats.

AI safety moves to the forefront

According to the OpenAI job listing, the Head of Preparedness will own the company's Preparedness Framework for frontier AI capabilities. The role centres on identifying and mitigating risks that could cause serious harm if models are misused or poorly managed, which involves building capability evaluations, threat models, and mitigation measures.

The role covers several sensitive areas: cybersecurity, biosecurity, and AI systems capable of improving themselves. According to OpenAI, the goal is to stay ahead of emerging threats as its models grow more powerful and are deployed more widely.

Rising fears over AI-driven cyberattacks

The news comes amid rising concern about AI being used as a cyber weapon. In November, Anthropic revealed that Chinese state-linked hackers had used its Claude Code tool to breach approximately 30 organisations worldwide, with targets reportedly including technology firms, banks, and government agencies.

The attacks required minimal human involvement, raising alarm across the industry about how easily sophisticated AI tools can be turned to large-scale cyber operations. Part of the new role's mission at OpenAI is to prevent similar abuse of its own models.

OpenAI acknowledges mental health risks

Mental health was another issue Altman raised. He said OpenAI had seen a preview of AI's psychological effects in 2025, one of the first and clearest admissions on the subject from the company's leadership.

The remarks follow lawsuits and reports alleging that AI chatbots such as ChatGPT can aggravate mental health problems, in some cases pushing teenagers and vulnerable users towards dangerous thoughts, delusions, or conspiracy theories.

A high-pressure role at a critical moment

Altman described the job as demanding and stressful. The new hire, he said, will need to help defenders use sophisticated AI tools while keeping attackers from leveraging the same capabilities. It is a stressful job, he added, and the person will be in the deep end almost immediately.

The position has been vacant following significant turnover in OpenAI's safety teams during 2024 and 2025, including the departure of the previous Head of Preparedness, Aleksander Madry, from the role. As AI risks intensify, OpenAI appears keen to rebuild and strengthen its safety leadership quickly.
