Granite 3.0
Granite 3.0 is an open-source, lightweight family of generative language models designed for a range of enterprise-grade tasks. It natively supports multilingual capability, coding, reasoning, and tool use, making it well suited to enterprise environments.

I tried running the models to see what kinds of tasks they can handle.
Environment Setup

I set up the Granite 3.0 environment in Google Colab and installed the necessary libraries with the following commands:
!pip install torch torchvision torchaudio
!pip install accelerate
!pip install -U transformers
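The code below moves input tensors to CUDA, so it is worth confirming first that the Colab runtime has a GPU attached. A minimal check, assuming only that torch installed correctly:

import torch

# The examples below call .to("cuda"), which fails without a GPU runtime.
print(torch.cuda.is_available())  # expect True on a Colab GPU runtime
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the attached GPU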
Execution

I tested the performance of both the 2B and 8B variants of Granite 3.0.

2B Model

I ran the 2B model first. Here is the code example for the 2B model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output[0])
Output

user
Please list one IBM Research laboratory located in the United States. You should only output its name and location.
assistant
1. IBM Research - Austin, Texas

(The user and assistant role markers appear because batch_decode decodes the entire sequence, prompt included; the 8B example below decodes only the newly generated tokens instead.)
8B Model

To use the 8B model, simply replace 2b with 8b in the model path. Here is a code example for the 8B model, this time omitting the "role" field from the chat entry:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, add_special_tokens=False, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)
Output
1. IBM Almaden Research Center - San Jose, California
Function Calling

I explored the function calling capability and tested it with a dummy function. Here, get_current_weather is defined to return mock weather data.

Dummy Function
import json

def get_current_weather(location: str) -> dict:
    """
    Retrieves current weather information for the specified location (default: San Francisco).

    Args:
        location (str): Name of the city to retrieve weather data for.

    Returns:
        dict: Dictionary containing weather information (temperature, description, humidity).
    """
    print(f"Getting current weather for {location}")
    try:
        weather_description = "sample"
        temperature = "20.0"
        humidity = "80.0"
        return {
            "description": weather_description,
            "temperature": temperature,
            "humidity": humidity
        }
    except Exception as e:
        print(f"Error fetching weather data: {e}")
        return {"weather": "NA"}
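Calling the stub directly shows what a model-driven call will eventually return: the lookup message is printed and the fixed sample values come back as a dictionary.

# Quick sanity check of the stub defined above.
result = get_current_weather("Boston")  # prints "Getting current weather for Boston"
print(result)  # {'description': 'sample', 'temperature': '20.0', 'humidity': '80.0'}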
Prompt Creation

I created a prompt that invokes the function:
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and country code, e.g. San Francisco, US",
                }
            },
            "required": ["location"],
        },
    },
]
query = "What's the weather like in Boston?"
payload = {
    "functions_str": [json.dumps(x) for x in functions]
}
chat = [
    {"role": "system", "content": f"You are a helpful assistant with access to the following function calls. Your task is to produce a sequence of function calls necessary to generate response to the user utterance. Use the following function calls as required.{payload}"},
    {"role": "user", "content": query},
]
Response Generation

I generated a response with the following code:
instruction_1 = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(instruction_1, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)
Output
{'name': 'get_current_weather', 'arguments': {'location': 'Boston'}}
This confirms that the model can generate a correct function call for the specified city.
Format Specification to Enhance Interaction Flow

Granite 3.0 lets you specify an output format so that responses follow a structured layout. This section shows how to use [UTTERANCE] for spoken responses and [THINK] for inner thoughts.

On the other hand, since function calls are emitted as plain text, a separate mechanism may be needed to distinguish function calls from regular text responses.
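One possible mechanism, sketched below, is to attempt to parse the generated text as a function-call dictionary and dispatch it through a registry of known functions, falling back to treating the text as a regular reply. This is an illustration rather than part of Granite's own tooling; note that the sample output above uses Python-literal quoting, so ast.literal_eval is used instead of json.loads.

import ast

# Registry of callable tools; get_current_weather is the stub defined earlier.
TOOLS = {"get_current_weather": get_current_weather}

def handle_model_output(text: str):
    """Interpret model output as a function call if possible; otherwise return it as plain text."""
    try:
        # The sample output uses Python-literal quoting ({'name': ...}),
        # which json.loads rejects, so parse it with ast.literal_eval.
        call = ast.literal_eval(text.strip())
    except (ValueError, SyntaxError):
        return text  # not a function call; treat as a regular response
    if isinstance(call, dict) and call.get("name") in TOOLS:
        return TOOLS[call["name"]](**call.get("arguments", {}))
    return text

print(handle_model_output(generated_text))
# For the Boston query this invokes get_current_weather(location='Boston')
# and returns the stub's sample weather dictionary.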
Specifying the Output Format

Here is a sample prompt that guides the AI's output:
prompt = """You are a conversational AI assistant that deepens interactions by alternating between responses and inner thoughts.
<constraints>
* Record spoken responses after the [UTTERANCE] tag and inner thoughts after the [THINK] tag.
* Use [UTTERANCE] as a start marker to begin outputting an utterance.
* After [THINK], describe your internal reasoning or strategy for the next response. This may include insights on the user's reaction, adjustments to improve interaction, or further goals to deepen the conversation.
* Important: **Use [UTTERANCE] and [THINK] as a start signal without needing a closing tag.**
</constraints>

Follow these instructions, alternating between [UTTERANCE] and [THINK] formats for responses.

<output example>
example1:
[UTTERANCE]Hello! How can I assist you today?[THINK]I’ll start with a neutral tone to understand their needs. Preparing to offer specific suggestions based on their response.[UTTERANCE]Thank you! In that case, I have a few methods I can suggest![THINK]Since I now know what they’re looking for, I'll move on to specific suggestions, maintaining a friendly and approachable tone.
...
</output example>

Please respond to the following user_input.

<user_input>
Hello! What can you do?
</user_input>
"""
Execution Code Example

The code to generate a response:
chat = [
    { "role": "user", "content": prompt },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)
Sample Output

The output was as follows:
[UTTERANCE]Hello! I'm here to provide information, answer questions, and assist with various tasks. I can help with a wide range of topics, from general knowledge to specific queries. How can I assist you today?
[THINK]I've introduced my capabilities and offered assistance, setting the stage for the user to share their needs or ask questions.
The [UTTERANCE] and [THINK] tags were applied successfully, enabling effectively formatted responses.

Depending on the prompt, closing tags (such as [/UTTERANCE] or [/THINK]) may occasionally appear in the output, but on the whole the output format can be specified successfully.
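To consume this format programmatically, the output can be split into (tag, content) pairs. The helper below is a minimal sketch assuming the start-marker convention from the prompt above, and it tolerates the stray closing tags just mentioned:

import re

def parse_tagged_output(text: str) -> list[tuple[str, str]]:
    """Split model output into (tag, content) pairs based on [UTTERANCE]/[THINK] start markers."""
    # Drop any stray closing tags the model may emit despite the instructions.
    text = re.sub(r"\[/(?:UTTERANCE|THINK)\]", "", text)
    # Capture each start marker and the text up to the next marker (or end of string).
    pattern = r"\[(UTTERANCE|THINK)\](.*?)(?=\[(?:UTTERANCE|THINK)\]|$)"
    return [(tag, content.strip()) for tag, content in re.findall(pattern, text, flags=re.DOTALL)]

for tag, content in parse_tagged_output(generated_text):
    print(f"{tag}: {content}")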
Streaming Code Example

Let's look at how to stream responses.

The following code uses the asyncio and threading libraries to stream responses from Granite 3.0 asynchronously.
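Here is a minimal sketch of that approach, assuming the model and tokenizer objects from the earlier examples (the query string is only an illustration). It uses transformers' TextIteratorStreamer: model.generate runs in a background thread and pushes decoded text pieces to the streamer, while an async function consumes and prints them as they arrive.

import asyncio
from threading import Thread
from transformers import TextIteratorStreamer

async def stream_response(chat_messages):
    # Build the prompt with the chat template, as in the earlier examples.
    prompt = tokenizer.apply_chat_template(chat_messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

    # The streamer yields decoded text pieces as generate() produces them.
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=1024)

    # Run generation in a background thread so the event loop stays free.
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    # Consume text pieces as they arrive and print each one immediately.
    for piece in streamer:
        print(piece, end="", flush=True)
        await asyncio.sleep(0)  # yield control back to the event loop
    thread.join()

# In Colab/Jupyter an event loop is already running, so await directly in a cell:
await stream_response([{"role": "user", "content": "Tell me about IBM."}])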
Sample Output

Running the code produces the response incrementally, with each newly generated piece of text appearing as soon as it is ready.
This demonstrates successful streaming: each token is generated asynchronously and displayed sequentially, letting the user watch the generation process in real time.
Summary

Granite 3.0 provides fairly strong responses even with the 2B model. The function calling and format specification features also work well, showing potential for a wide range of applications.