- OpenAI-affiliated X accounts have faced multiple phishing attacks since 2023.
- The latest incident happened yesterday, when malicious actors promoted a fake $OPENAI token.
- The company’s internal forum was also hacked last year, but the incident was not reported until July 2024.
OpenAI’s news account on X was hacked yesterday to promote phishing links for a fake $OPENAI token. This is the fourth time malicious actors have targeted the company’s X accounts since January 2023.
Let’s analyze what happened and how OpenAI responded to it.
Fake $OPENAI token claims
Around 22:26 UTC on September 23, followers of OpenAI Newsroom began reporting suspicious activity.
The account claimed that ChatGPT users were eligible to receive shares of a $OPENAI token that aims to “bridge the gap between artificial intelligence and blockchain technology.”

Users noticed that their browsers flagged the link as “Suspected Phishing.” However, they were unable to warn others in the comments because the hackers had disabled replies.
Neither OpenAI Newsroom nor the company’s CEO Sam Altman has addressed the incident on their X accounts, but the malicious post has been deleted.
OpenAI in trouble
Yesterday’s attack was the fourth time cybercriminals have targeted OpenAI-related accounts to promote a $OPENAI token.
The first case occurred in June 2023, when malicious actors hacked into the X account of OpenAI CTO Mira Murati. A year later, the company’s chief scientist Jakub Pachocki suffered a similar incident. The most recent case before yesterday was reported by OpenAI researcher Jason Wei on September 22.
In addition to the X account hacks, attackers breached OpenAI’s internal forum in 2023. They gained access to confidential information, including employee data and communications, but failed to obtain the company’s code.

Because the breach did not affect source code or customer data, the company did not disclose it until July 2024.
Despite this, the lack of communication is worrisome, considering the ethical implications of such a breach and the sensitivity of the information accessed.

Dr. Tanishq Mathew Abraham and others have criticized OpenAI for the poor security measures that have led to repeated incidents of this nature.
ChatGPT reveals secrets
Although OpenAI is frequently targeted by cyberattacks, ChatGPT can also leak internal information without any external intervention.

Earlier this year, a Reddit user reported that the chatbot revealed its internal system directives in response to a simple greeting.

ChatGPT’s response included guidance on selecting sources and instructions for prioritizing diversity. It also included prohibitions on depicting public figures or copyrighted characters.
Call for transparency
The frequent cyberattacks on OpenAI raise systemic questions that require more attention from the company’s leadership.

In addition, the lack of timely communication about these incidents raises concerns about OpenAI’s commitment to transparency and user safety.
Will OpenAI take steps to prevent future violations? We’ll have to wait and see.
Disclaimer: The opinions expressed in this article do not constitute financial advice. We encourage readers to conduct their own research and determine their own risk tolerance before making any financial decisions. Cryptocurrency is a highly volatile, high-risk asset class.