
Amazon is training a large model code-named "Olympus" with twice the parameters of GPT-4!

On November 8, according to media reports, Amazon is spending millions of dollars to train its own large language model (LLM), internally codenamed "Olympus," in an effort to compete with top models from OpenAI and Alphabet.

The model's internal code name is "Olympus," the source said. The team training Olympus is led by Rohit Prasad and reports directly to CEO Andy Jassy. Prasad previously headed Amazon's AI assistant Alexa and serves as chief scientist for Amazon's artificial general intelligence (AGI) effort.

To train Olympus, Prasad brought together researchers who had previously worked on Alexa AI with members of the Amazon science team, combining AI efforts across the company behind dedicated resources.

Olympus could launch as early as the end of December, according to people familiar with the matter.

                  

AI competition

It is worth mentioning that before ChatGPT emerged last year, Amazon Web Services (AWS) was already developing artificial intelligence software with similar functionality. A person familiar with the company's plans said AWS had hoped to launch the product, internally dubbed "Bedrock," at its annual customer conference at the end of November, but had to delay the release because of technical hurdles.

A few days after the conference began, ChatGPT launched and immediately attracted global attention. Amazon executives were soon glad they had not released their product: Bedrock and ChatGPT were not on the same level, and putting it up against OpenAI's model would likely have made it a laughing stock.

After continued improvement, Bedrock officially launched on September 28 this year. Bedrock is a cloud service that provides access to generative AI. Technically, it is a library of foundation models, all offering broadly similar content-generation capabilities. Because the models are hosted by AWS, existing customers can reach them through the channels they already use. The available models come from AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon itself.
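As an illustration of that access path, here is a minimal sketch of calling a Bedrock-hosted model through the standard boto3 SDK; the model ID and request-body schema shown are assumptions for the example and vary by provider.

```python
import json
import boto3

# Bedrock models are invoked through the "bedrock-runtime" client,
# using the same credentials and SDK that existing AWS customers already have.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Example: calling an Anthropic Claude model hosted on Bedrock.
# The model ID and the request-body fields are provider-specific assumptions here.
request_body = {
    "prompt": "\n\nHuman: Summarize what Amazon Bedrock is in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
}

response = client.invoke_model(
    modelId="anthropic.claude-v2",   # illustrative choice from the Bedrock catalog
    contentType="application/json",
    accept="application/json",
    body=json.dumps(request_body),
)

# The response body is a streaming object; read and decode the JSON payload.
result = json.loads(response["body"].read())
print(result.get("completion", result))
```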

Before Olympus, Amazon had trained smaller models such as Titan.

Titan is planned to comprise two AI models: one that generates text and one that improves search and personalization. Beyond generating new text such as blog posts or emails, Titan can group items into categories, hold open-ended conversations, and extract specific information from blocks of text. On September 28, AWS announced the first available Titan model, Titan Embeddings, which enhances search by returning results that are more relevant and better matched to context.
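To make the search use case concrete, the sketch below assumes a Titan text-embedding model exposed through Bedrock (the model ID, request format, and helper functions are illustrative) and ranks a few documents against a query by cosine similarity rather than keyword overlap.

```python
import json
import math
import boto3

# Assumed setup: a Titan embedding model reachable via the Bedrock runtime API.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    """Return an embedding vector for the given text (request format is an assumption)."""
    response = client.invoke_model(
        modelId="amazon.titan-embed-text-v1",   # assumed Titan embedding model ID
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Rank a small document set against a query by semantic similarity.
docs = ["return policy for electronics", "holiday shipping deadlines", "Prime membership pricing"]
query_vec = embed("when do I need to ship gifts?")
ranked = sorted(docs, key=lambda d: cosine(embed(d), query_vec), reverse=True)
print(ranked)
```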

In addition, Amazon has partnered with AI model startups such as Anthropic and AI21 Labs to make their models available to AWS users.

Even so, measured against OpenAI's large models, Amazon's own offerings still come up somewhat short.

In an effort to catch up with OpenAI, the Olympus model Amazon is training will have as many as two trillion parameters, people familiar with the matter said, which would make it one of the largest models currently in training. By comparison, OpenAI's GPT-4 is reported to have around one trillion parameters and is already among the best models available. More parameters generally give a model more capacity to be tuned for its tasks, so performance across the board is expected to reach a higher level. Clearly, Amazon is going big this time.

Better performance and higher cost

However, Olympus's enormous parameter count demands very high computational power, and the cost of training it will be correspondingly huge. That is a big challenge for Amazon.

Microsoft spent nearly $10 billion before it got a ChatGPT. Olympus has twice as many parameters as GPT-4, so the cost of the required hardware, such as GPUs, is bound to rise accordingly, to say nothing of other expenses such as labor.

In addition, even once the model is ready, running it carries its own cost. Analysts estimate that OpenAI spends nearly $700,000 a day to run ChatGPT.

Amazon has previously said it expects capital investment to exceed $50 billion in 2023, with AWS accounting for a large portion. How much of that money will flow to Olympus is not yet known.

Without strong earnings growth, Amazon has had to free up money elsewhere in order to increase investment in LLMs and generative AI. The company has pushed ahead with cost-cutting plans this year, including reducing labor costs, cutting sales and marketing expenses, and overhauling its transportation operations to lower shipping costs.

Amazon is investing heavily in Olympus because it believes a self-developed model will make its products more attractive: a model on par with GPT-4 would greatly strengthen the company's competitiveness, and enterprise customers want the best-performing models available on AWS.

Jeff Pearson, a managing director at technology consultancy Slalom, said tech companies must compete in generative AI or lose relevance and market share. But the equipment involved, such as servers and data centers, requires significant capital expenditure, and that is a reality they have to face.
