AGI Fully Superhuman in the Next Decade?How to regulate?Listen to the governance of "AI politician" Altman
On June 10, OpenAI CEO Sam Altman (Sam Altman) connected with the audience at the 2023 Zhiyuan Artificial Intelligence Conference, and had a one-on-one Q & A with Zhang Hongjiang, the chairman of Zhiyuan Research Institute。
On June 10, OpenAI CEO Sam Altman (Sam Altman) connected with the audience at the 2023 Zhiyuan Artificial Intelligence Conference, and had a one-on-one Q & A with Zhang Hongjiang, the chairman of Zhiyuan Research Institute。In the video link, Altman discusses the alignment of AGI (General Artificial Intelligence) technology and the recent high-profile AI regulatory issues, and reveals that OpenAI is investing in using AI systems to help humans assist in overseeing other AI systems.。
Two development paths for investing in AGI security: establishing a governance charter + expanding governance channels
In his speech, Altman first outlined a vision for AGI technology。He said AGI is likely to surpass human expertise in almost all areas over the next decade, and may eventually surpass the collective productivity of humanity's largest companies.。He further noted that the AI revolution will create shared wealth and make it possible to raise the standard of living for everyone, address common challenges such as climate change and global health security, and improve social well-being in countless other ways.。
However, Altman also pointed out that in order to realize and enjoy this vision, all parties need to work together to invest in AGI security and manage risks, and to this end, he proposed two possible development paths.。
One is to establish an international AGI governance charter as soon as possible.。Countries need to establish equal and uniform international norms and standards and develop protective measures for the use of AGI through an inclusive process。In addition, countries need to build global confidence in the safe development of AI systems through international cooperation and provide verifiable measures to promote mechanisms for increased transparency and knowledge sharing.。
The second is to fully clarify the research orientation of AGI and expand the channels of AGI governance.。In OpenAI's conception, the AI system is positioned as "a useful and safe assistant," suggesting that the company needs to work on training ChatGPT so that it does not issue violent threats or assist users in harmful activities.。In response, Altman expressed his own concerns, saying that as we get closer to AGI, the potential impact and extent of any non-compliance will grow exponentially。To address these challenges ahead of time, we strive to minimize the risk of future catastrophic outcomes.。
To this end, OpenAI has even invested in a number of new and complementary research directions, including the field of scalable supervision and interpretability.。The former mainly explores the possibility of using AI systems to assist humans in supervising other AI systems, such as training a model to help human supervisors find defects in the output of other models, while the latter mainly assists the former, using machine learning theory to make comprehensible explanations of the internal actions of the model.。
Of these two paths, the first focuses more on acts of international cooperation, which is what Altman continues to work for.。Known as an "AI politician," he has visited nearly 20 countries in the past two months to meet with local students, developers and heads of state to discuss the development of artificial intelligence technology.。In Altman's own words, it hopes to meet more developers through this "round-the-world trip."。At the same time, "I also want to talk about policy makers."。
In his speech, Altman also specifically quoted from Lao Tzu's Tao Te Ching: A journey of a thousand miles begins with a single step.。He believes that the most constructive step that can be taken now is to work together among the international scientific and technological community, especially to promote the establishment of mechanisms to improve transparency and knowledge sharing in the advancement of AI security technology.。
At home, Altman has never stopped coordinating the parties.。In early May, at a time of growing concern in the U.S. Congress about the misuse of artificial intelligence technology, it has been holding private meetings with key figures in the U.S. Congress to actively explore possible regulatory frameworks at the government and corporate levels.。
In addition, he testified before the U.S. Senate Judiciary Committee and made three recommendations for AI regulation: first, the establishment of a new government agency to authorize large AI models and revoke the relevant licenses of companies that do not meet government standards;。
The second path focuses more on self-development and iterative thinking about AGI research.。For OpenAI, according to its latest research published on May 9, the company has tried to use the GPT-4 model to automatically explain the meaning of each neuron in GPT-2, which represents the beginning of the company's attempt to use the language model to explain the principles of the language model itself, using the model to understand the model。In addition, OpenAI is also advancing another study aimed at using "process supervision" to mitigate the ChatGPT "illusion" and achieve better "alignment" of AGI technology.。
For Altman, he is also exploring the potential of GPT-4 in the field of interpretability research, aiming to use OpenAI's research results in this field to help humans better regulate artificial intelligence technology.。
It is worth noting that, according to a paper published by OpenAI, when using GPT-4 to explain the neurons of the GPT-2 model, the company found that GPT-4 was more explanatory on models with smaller parameter sizes, and that as the parameter size gradually increased, the score of GPT-4 interpretation gradually decreased。This means that the larger the model, the more difficult it is to explain and understand.。
After discovering the problem, Altman, in his speech over the weekend, also boldly articulated the need to tackle the "model regulatory model" technique.。Targeting remains an open question for more advanced systems, which we believe will require new technological approaches, as well as more governance and oversight.。Imagine a futuristic AGI system coming up with 100,000 lines of binary code, and it's unlikely that human supervisors will detect whether such a model is doing something nefarious.。
Altman: "open source everything" is not necessarily the best path model wind control problem detection system is the key
In the following one-on-one question and answer, Altman boldly predicted the arrival of artificial intelligence, he said that in the next decade, we may have a very powerful artificial intelligence system, and for the outside world's attention to the open source of OpenAI, he pointed out that some of OpenAI's models are open source, some are not, but over time, OpenAI may open source more models, but there is no specific timetable。
However, he also pointed out that although the open source model does have advantages when it comes to AI security, "open source everything" is not necessarily the best path.。
During the Q & A, Zhang Hongjiang also posed an interesting question, saying: "If there were only three models in this world, would it be safer??In response, Altman hit the nail on the head, saying that the key to model safety is not the number of models, but whether humans have systems that can perform qualified safety tests on models.。
He said: "I think there are different views on whether it is safer to have a few models in the world or a majority model.。I think the more important thing is whether we have a system that allows any powerful model to be fully security tested?Do we have a framework that allows anyone who creates a model that is strong enough to have both the resources and the responsibility to make sure that what they create is safe and aligned?"
In the Q & A, Altman also fully affirmed China's talent pool in the field of artificial intelligence research and called on these talents to increase research investment in the field.。"Because it involves understanding user preferences in different countries in very different contexts, [AI research] requires a lot of different inputs to achieve this, and China has some of the best AI talent in the world," he said.。Fundamentally, considering that solving the difficulty of aligning advanced AI systems requires the best minds from all over the world, I really hope that Chinese AI researchers will make a huge contribution here。"
·Original
Disclaimer: The views in this article are from the original Creator and do not represent the views or position of Hawk Insight. The content of the article is for reference, communication and learning only, and does not constitute investment advice. If it involves copyright issues, please contact us for deletion.