Exploring Zephyr-7B: A Powerful Open-Source LLM
The Open LLM Leaderboard is buzzing with new open-source models aiming to compete with GPT-4, and Zephyr-7B is a standout contender. This tutorial from WebPilot.ai explores this cutting-edge language model, demonstrating how to use it with the Transformers pipeline and how to fine-tune it on the Agent-Instruct dataset. New to AI? The AI Fundamentals skill track is a great place to start.
Zephyr-7B is trained to act as a helpful assistant. Its strengths lie in generating coherent text, translating languages, summarizing information, sentiment analysis, and context-aware question answering.
Zephyr-7B-β: A Fine-Tuned Marvel
Image from Zephyr Chat
Accessing Zephyr-7B with Hugging Face Transformers
This tutorial uses Hugging Face Transformers for easy access. (If you run into loading issues, consult the Inference Kaggle Notebook.) Install the libraries:
Make sure you have the latest versions:

```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes
```
Import the libraries:

```python
import torch
from transformers import pipeline
```

Create the text-generation pipeline. `device_map="auto"` spreads the model across multiple GPUs for faster generation, and `torch.bfloat16` reduces memory use while keeping computation fast:

```python
model_name = "HuggingFaceH4/zephyr-7b-beta"
pipe = pipeline(
    "text-generation",
    model=model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

Generate text:

```python
prompt = "Write a Python function that can clean the HTML tags from the file:"

outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```
System prompt

Customize responses with a Zephyr-7B-style system prompt. Build a list of chat messages and format it with the tokenizer's chat template:

```python
messages = [
    {
        "role": "system",
        "content": "You are a skilled software engineer who consistently produces high-quality Python code.",
    },
    {
        "role": "user",
        "content": "Write a Python code to display text in a star pattern.",
    },
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```
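For reference, you can print the formatted prompt to see what the chat template produces. The exact special tokens come from the model's tokenizer configuration; the comment below is only an approximate illustration:

```python
# Inspect the string that apply_chat_template built from the messages list.
print(prompt)
# Roughly (exact layout may differ by tokenizer version):
# <|system|>
# You are a skilled software engineer ...</s>
# <|user|>
# Write a Python code to display text in a star pattern.</s>
# <|assistant|>
```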
Loading and Preparing the Model

The remainder of the tutorial fine-tunes Zephyr-7B-beta on the Agent-Instruct dataset, starting with the training environment.
```python
%%capture
%pip install -U bitsandbytes
%pip install -U transformers
%pip install -U peft
%pip install -U accelerate
%pip install -U trl
```
# ... (Import statements as in original tutorial) ...
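The exact import list lives in the original tutorial; as a rough sketch, a QLoRA-style fine-tuning setup with these libraries typically imports something like the following (adjust to match the tutorial):

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, PeftModel, prepare_model_for_kbit_training
from trl import SFTTrainer
```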
Kaggle Secrets (for Kaggle notebooks): retrieve your Hugging Face and Weights & Biases API keys, then log in to both services:

```python
!huggingface-cli login --token $secret_hf
# ... (wandb login as in original tutorial) ...
```
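On Kaggle, `secret_hf` and the Weights & Biases key can be pulled from Kaggle Secrets. This is a minimal sketch; the secret labels (`HUGGINGFACE_TOKEN`, `WANDB_API_KEY`) are hypothetical placeholders, so use whatever labels you stored the keys under:

```python
from kaggle_secrets import UserSecretsClient
import wandb

user_secrets = UserSecretsClient()
secret_hf = user_secrets.get_secret("HUGGINGFACE_TOKEN")  # hypothetical secret label
secret_wandb = user_secrets.get_secret("WANDB_API_KEY")   # hypothetical secret label

# Authenticate with Weights & Biases for experiment tracking.
wandb.login(key=secret_wandb)
```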
Define the base model, the dataset, and the name of the fine-tuned model:

```python
base_model = "HuggingFaceH4/zephyr-7b-beta"
dataset_name = "THUDM/AgentInstruct"
new_model = "zephyr-7b-beta-Agent-Instruct"
```
The `format_prompt` function adapts the dataset to Zephyr-7B's prompt style:

# ... (format_prompt function and dataset loading as in original tutorial) ...
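As an illustration only: assuming the AgentInstruct records expose a `conversations` list of `{"from", "value"}` turns (check the dataset card, since the exact schema and split names may differ), a `format_prompt`-style helper could map those turns onto Zephyr's chat template like this:

```python
tokenizer = AutoTokenizer.from_pretrained(base_model)

def format_prompt(example):
    # Map the dataset's "from"/"value" turns onto role/content chat messages.
    role_map = {"human": "user", "gpt": "assistant"}
    messages = [
        {"role": role_map.get(turn["from"], "user"), "content": turn["value"]}
        for turn in example["conversations"]
    ]
    # Render the conversation with Zephyr's chat template into a single training string.
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = load_dataset(dataset_name, split="os")  # "os" is one AgentInstruct task split; adjust as needed
dataset = dataset.map(format_prompt)
```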
Set up the quantization configuration (bnb_config) and load the base model:

# ... (bnb_config and model loading as in original tutorial) ...
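A minimal sketch of what this step usually looks like with bitsandbytes; the specific settings (4-bit NF4 quantization, bfloat16 compute) are common QLoRA defaults, not necessarily the exact values used in the original tutorial:

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute in bfloat16
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model.config.use_cache = False  # caching conflicts with gradient checkpointing during training
```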
Load and configure the tokenizer:

# ... (tokenizer loading and configuration as in original tutorial) ...
Attach the PEFT configuration, prepare the model, and start the training:

# ... (peft_config and model preparation as in original tutorial) ...
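As a sketch of this stage: a LoRA adapter via peft plus trl's SFTTrainer is the usual recipe with these libraries. The hyperparameters below (rank, alpha, learning rate, and so on) are illustrative placeholders, and the SFTTrainer arguments vary between trl versions:

```python
peft_config = LoraConfig(
    r=16,                    # adapter rank (illustrative value)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

training_arguments = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    logging_steps=10,
    report_to="wandb",       # log metrics to Weights & Biases
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # the column produced by format_prompt
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_arguments,
)
trainer.train()
```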
Saving and Deploying the Fine-Tuned Model

After training, save the fine-tuned model so it can be deployed; the original tutorial walks through the exact steps.
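The saving and deployment code is elided here; a common pattern (a hedged sketch, not necessarily the tutorial's exact steps) is to save the LoRA adapter, merge it back into a full-precision copy of the base model, and push the result to the Hugging Face Hub:

```python
# Save the trained LoRA adapter weights.
trainer.model.save_pretrained(new_model)

# Reload the base model in full precision and merge the adapter into it.
base = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, new_model).merge_and_unload()

# Optionally push the merged model and tokenizer to the Hugging Face Hub.
merged.push_to_hub(new_model)
tokenizer.push_to_hub(new_model)
```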
Testing the Fine-Tuned Model
Test the model's performance with a variety of prompts. Example prompts and outputs are provided in the original tutorial.
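A minimal sketch of such a test, reusing the Transformers pipeline from earlier on the merged fine-tuned model (the agent-style prompt is just an illustrative example):

```python
pipe = pipeline("text-generation", model=merged, tokenizer=tokenizer)

messages = [
    {"role": "system", "content": "You are an agent that can operate a Linux shell."},
    {"role": "user", "content": "List the five largest files in the current directory."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```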
Zephyr-7B-beta demonstrates impressive capabilities. This tutorial provides a comprehensive guide to using and fine-tuning this powerful LLM, even on resource-constrained GPUs. For deeper LLM knowledge, consider the Large Language Models (LLMs) Concepts course.