There really is a campaign to encircle and suppress Google!
Just as Google was making a series of major releases at its Cloud Next conference last night, everyone else rushed in to steal the spotlight: first OpenAI updated GPT-4 Turbo, then Mistral open sourced a massive 8x22B model. Google's inner monologue, in Du Fu's words: "the children of the southern village bully me for being old and feeble."
The second-largest open source model: Mixtral 8x22B
In January this year, Mistral AI published the technical details of Mixtral 8x7B and released the Mixtral 8x7B – Instruct chat model, whose performance significantly outperforms GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and the Llama 2 70B chat model on human evaluation benchmarks.
Just three months later, Mistral AI has open sourced the Mixtral 8x22B model, bringing another high-performing large model to the open source community.
Those who have dug into the Mixtral 8x22B release report that the model files total approximately 262 GB.
That makes Mixtral 8x22B the second-largest open source model to date, behind only Grok-1, which xAI released earlier with 314 billion parameters.
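A rough sanity check on that ranking, assuming the published weights are stored at 16-bit precision (2 bytes per parameter):

\[
\frac{262 \times 10^{9}\ \text{bytes}}{2\ \text{bytes/parameter}} \approx 131 \times 10^{9}\ \text{parameters}
\]

That puts the model on the order of 130+ billion parameters, which is indeed well below Grok-1's 314 billion.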
Some exclaimed that another "heavyweight" has joined the MoE circle. MoE stands for mixture of experts: rather than one dense network, the model routes each token to a small subset of expert sub-networks. Grok-1, mentioned above, is also a MoE model.
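To make "mixture of experts" concrete, here is a minimal, hypothetical top-2 routing layer in PyTorch. It illustrates the general technique only, not Mistral's actual implementation; names like MoELayer are made up for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Minimal sparse mixture-of-experts layer: a router picks the
    top-k expert MLPs per token and mixes their outputs."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Score every expert, keep only the top-k per token.
        scores = self.router(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize the mixing weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

# Usage: 8 experts, 2 active per token -- the "8x" in names like 8x7B / 8x22B.
layer = MoELayer(dim=64)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

The point of this design is that only the selected experts run for each token, so total parameter count can grow far faster than per-token compute.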
GPT-4 Turbo's vision capability upgraded
Meanwhile, OpenAI announced that GPT-4 Turbo with Vision is now generally available through the API, and that vision requests can now also use JSON mode and function calling.
The details are as given on OpenAI's official website.
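For reference, here is a minimal sketch of a vision request combined with function calling, using the openai Python SDK (v1.x). The image URL and the log_detected_objects tool are placeholders for illustration; the model alias assumes the generally available gpt-4-turbo release.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool the model may call after inspecting the image.
tools = [{
    "type": "function",
    "function": {
        "name": "log_detected_objects",
        "description": "Record the objects detected in an image.",
        "parameters": {
            "type": "object",
            "properties": {
                "objects": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["objects"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "List the objects in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
        ],
    }],
    tools=tools,
)
print(response.choices[0].message)
```

Before this update, vision requests could not be combined with function calling, so image-understanding results had to be parsed out of free-form text.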
Despite this, netizens from all walks of life seem unmoved by OpenAI's incremental tweaks.
Reference: https://platform.openai.com/docs/models/continuous-model-upgrades