
OpenAI Launches AI Toolkit for Largest Election Year in History

On January 15, local time, OpenAI launched an "AI toolkit" and rolled out targeted upgrades to ChatGPT in an effort to reduce the impact of AI on elections.

In 2024, the world will see the largest "election year" in history.

According to available statistics, more than 50 countries and regions plan to hold leadership elections this year, covering nearly half of the world's population, making this the year with the most elections and the widest coverage in history.

Moreover, these countries include major powers such as the United States and Russia, so this year's election results are bound to have a significant impact on the global political and economic landscape.

Against this backdrop, many voters are concerned about whether artificial intelligence will become deeply involved in, and ultimately influence, these elections. Even the leadership of OpenAI shares this concern: CEO Sam Altman testified before the U.S. Congress in May last year that he was nervous about generative AI's ability to undermine election fairness through false information.

In response, on January 15, local time, OpenAI launched its "AI toolkit" and targeted upgrades to ChatGPT, with the aim of minimizing the impact of AI on these elections.


OpenAI said its approach to the 2024 "election year" is to "continue our platform security efforts by improving accurate voting information, enforcing prudent policies, and increasing transparency." The company added that for election-related work it brings together expertise from its safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.

Preventing abuse

Since ChatGPT's launch, OpenAI has worked to prevent the tool from being abused. According to the company, it has iterated on its tools to improve factual accuracy, reduce bias, and decline certain requests. For example, DALL·E has safeguards in place that reject requests to generate images of real people, including candidates.

Ahead of the coming elections, OpenAI will continue to improve ChatGPT's abuse prevention. Specifically:

● Until OpenAI has more information, people will not be allowed to build applications for political campaigning and lobbying.

● Developers will not be allowed to create chatbots that impersonate real people (such as candidates) or institutions (such as local governments).

● OpenAI will not allow developers to create applications that deter people from participating in the voting process. This includes misrepresenting the voting process and eligibility (for example, when, where, or who is eligible to vote) or obstructing voting (for example, claiming that a vote is invalid).

In addition, with the latest GPT models, users can report any potential violations they find to OpenAI at any time.

Increasing transparency

To prevent fake images from misleading voters, OpenAI will increase transparency about where images come from, including rolling out features that identify which tools were used to generate them.

OpenAI revealed that it is carrying out a number of provenance efforts. Images generated by DALL·E 3 will carry "content credentials" from the Coalition for Content Provenance and Authenticity (C2PA).

C2PA is a Joint Development Foundation project backed by a consortium of companies including Adobe, Arm, Intel, Microsoft, and Truepic. It has already launched official content credentials: these credentials use cryptography to encode details about a piece of content's origin, and are designed to signal trustworthy digital content to creators, marketers, and consumers around the world.

The icon for this "content voucher" is a minimalist image containing the letters "CR," which can be etched into media such as images and videos。


It is worth noting that OpenAI is not the first AI company to adopt digital watermarking. Last month, Google said its AI lab DeepMind had developed a tool, SynthID, that can add digital watermarks to images and audio generated by its AI products. Meta has also said that its AI products can add invisible watermarks to generated content.
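DeepMind has not disclosed how SynthID works, so the following is only a rough illustration of the general idea of an invisible watermark, using a naive least-significant-bit scheme in Python with NumPy. The function names and the eight-bit marker are made up for the example; production watermarks are designed to survive cropping, compression, and other edits, which this toy version does not.

```python
import numpy as np


def embed_bit_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of the first len(bits) pixel values."""
    flat = pixels.flatten().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite the lowest bit
    return flat.reshape(pixels.shape)


def read_bit_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden bits."""
    return pixels.flatten()[:n_bits] & 1


# Usage: hide the marker 10110010 in a fake 8x8 grayscale image.
image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
watermarked = embed_bit_watermark(image, mark)
assert np.array_equal(read_bit_watermark(watermarked, 8), mark)
```

Because only the lowest bit of each pixel changes, the watermark is invisible to the eye, which is the property such systems rely on.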

OpenAI also revealed that it is experimenting with a new internal tool: a provenance classifier that can detect images generated by DALL·E. Internal tests have shown promising early results, with the tool able to detect images even after common types of modification. According to OpenAI's plan, it will soon be made available to a first group of external testers, including journalists, platforms, and researchers, for feedback.

In addition, ChatGPT is becoming increasingly integrated with existing information sources. For example, users can access real-time news reporting from around the world on ChatGPT, complete with attributions and links. Transparency about sources and a balance of news outlets can help voters better evaluate information and decide for themselves what to trust.

Improving access to authoritative voting information

In the United States, OpenAI is working with the National Association of Secretaries of State (NASS), the country's oldest nonpartisan professional organization of public officials. NASS provides voters with information on topics including voter turnout, voting procedures, business services, e-government, and government records.

When users ask ChatGPT certain procedural election-related questions, it will direct them to CanIVote.org, the authoritative U.S. voting information site. OpenAI will draw on this experience to make similar improvements to ChatGPT in other countries and regions.

OpenAI says ChatGPT will continue to improve in the coming months. "We look forward to continuing to work with and learn from our partners to anticipate and prevent potential misuse of our tools ahead of this year's global elections."

