Google AI announces Gemini 1.5 Pro and Gemma 2 for developers
Google AI has begun giving developers access to extended context windows and cost-saving features, starting with the Gemini 1.5 Pro large language model (LLM). Previously gated behind a waitlist, the full 2-million-token context window is now open to all developers. Google is also introducing context caching, which reduces costs for tasks that reuse similar information across multiple prompts.
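The idea behind context caching is that a long, shared context (say, a large document queried repeatedly) is processed once and reused, rather than resent and re-billed with every prompt. The sketch below illustrates that caching pattern generically; it is not Google's API or implementation, and the `ContextCache` class and its methods are invented for illustration.

```python
# Conceptual sketch of context caching: process a shared context once,
# then serve it from a cache keyed by its content hash.
# Hypothetical illustration only -- not the Gemini API.
import hashlib


class ContextCache:
    def __init__(self):
        self._store = {}

    def _key(self, context: str) -> str:
        return hashlib.sha256(context.encode()).hexdigest()

    def get_or_process(self, context: str):
        key = self._key(context)
        if key not in self._store:
            # Stand-in for the expensive step (e.g. tokenizing a long context).
            self._store[key] = context.split()
        return self._store[key]


cache = ContextCache()
doc = "a very long shared document " * 3
first = cache.get_or_process(doc)   # processed on first use
second = cache.get_or_process(doc)  # served from the cache
assert first is second              # same cached object, no reprocessing
```

In the real service the cached artifact lives server-side and the savings show up as reduced token billing, but the cache-by-content pattern is the same.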
With the new update, the Gemini 1.5 Pro and Flash models can dynamically generate and execute Python code, letting them apply data analysis and mathematics to reasoning problems. According to Google, the code execution environment is secure and has no internet access, and developers are billed only for the final output generated by the model. The lightweight Gemma 2 model is now also available in Google AI Studio, where it can be enabled under "Advanced Settings".
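The pattern here is that the model emits Python, a restricted runtime executes it, and the result feeds back into the answer. The sketch below shows the sandboxing idea in miniature by running generated code in a namespace stripped of dangerous builtins; it is a conceptual illustration only, with `run_sandboxed` being an invented helper, and Google's actual sandbox is a separate, secured environment.

```python
# Conceptual sketch of model-driven code execution: run generated Python in a
# restricted namespace. Without __import__, open, etc., the code cannot reach
# the filesystem or network. Illustration only -- not Google's sandbox.
def run_sandboxed(code: str) -> dict:
    safe_builtins = {"abs": abs, "min": min, "max": max, "sum": sum,
                     "range": range, "len": len}
    namespace = {"__builtins__": safe_builtins}
    exec(code, namespace)  # 'import' fails here because __import__ is absent
    namespace.pop("__builtins__")
    return namespace  # variables the generated code defined


# A model might emit code like this to answer a math question:
result = run_sandboxed("answer = sum(i * i for i in range(1, 11))")
print(result["answer"])  # sum of squares 1..10 -> 385
```

A denylist-free, allowlist-based namespace like this is the usual starting point for such sandboxes, though production systems add process isolation and resource limits on top.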
Several companies are already using Gemini 1.5 Flash. Apps leveraging the technology include Envision, which aids visually impaired users, Plural's policy analysis, Zapier's video processing, and Dot's personalized AI memory system. Google is also rolling out what its announcement calls "text tuning" to developers, with full access to the feature expected by mid-July.