OpenAI and Google DeepMind employees issue AI risk warnings
Current and former employees of OpenAI and Google DeepMind have voiced concerns about the safety of AI technology.
Recently, current and former employees of Microsoft-backed (MSFT) OpenAI and of Alphabet's (GOOGL) Google DeepMind have expressed concerns about the risks associated with AI.
According to reports, the group, comprising 11 current and former OpenAI employees and one current Google DeepMind employee, has published an open letter warning that the financial motives of AI companies hinder effective oversight, stating that "customized corporate governance arrangements cannot solve this problem."
The letter also warns that unregulated AI poses dangers ranging from the spread of misinformation and the entrenchment of existing inequalities to the loss of control of autonomous AI systems, which could ultimately result in "human extinction."
Researchers have found that image generators from companies including OpenAI and Microsoft have produced photos containing election-related disinformation, despite policies prohibiting such content. The letter adds that AI companies face "limited accountability" in sharing information with governments about the capabilities and limitations of their systems, and cannot be expected to share such information voluntarily.
The letter reflects the signatories' safety concerns about generative AI technology, which can rapidly and cheaply produce human-like text, images, and audio. The group calls on AI companies to establish processes that allow current and former employees to raise risk-related concerns, rather than enforcing non-disclosure agreements that prohibit criticism.