Maison > Article > développement back-end > Construire un système multimodèle rentable : GPT-GPT-Guide de mise en œuvre
Dans des scénarios commerciaux réels, nous sommes souvent confrontés à ces défis :
La solution idéale consiste à sélectionner dynamiquement les modèles appropriés en fonction de la complexité des tâches, garantissant ainsi les performances tout en contrôlant les coûts.
from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain from langchain.prompts import ChatPromptTemplate from langchain.callbacks import get_openai_callback from typing import Dict, List, Optional import json # Initialize models class ModelPool: def __init__(self): self.gpt4 = ChatOpenAI( model_name="gpt-4", temperature=0.7, max_tokens=1000 ) self.gpt35 = ChatOpenAI( model_name="gpt-3.5-turbo", temperature=0.7, max_tokens=1000 )
class ComplexityAnalyzer: def __init__(self): self.complexity_prompt = ChatPromptTemplate.from_template( "Analyze the complexity of the following task, return a score from 1-10:\n{task}" ) self.analyzer_chain = LLMChain( llm=ChatOpenAI(model_name="gpt-3.5-turbo"), prompt=self.complexity_prompt ) async def analyze(self, task: str) -> int: result = await self.analyzer_chain.arun(task=task) return int(result.strip())
class ModelRouter: def __init__(self, complexity_threshold: int = 7): self.complexity_threshold = complexity_threshold self.model_pool = ModelPool() self.analyzer = ComplexityAnalyzer() async def route(self, task: str) -> ChatOpenAI: complexity = await self.analyzer.analyze(task) if complexity >= self.complexity_threshold: return self.model_pool.gpt4 return self.model_pool.gpt35
class CostController: def __init__(self, budget_limit: float): self.budget_limit = budget_limit self.total_cost = 0.0 def track_cost(self, callback_data): cost = callback_data.total_cost self.total_cost += cost if self.total_cost > self.budget_limit: raise Exception("Budget exceeded") return cost
class MultiModelSystem: def __init__(self, budget_limit: float = 10.0): self.router = ModelRouter() self.cost_controller = CostController(budget_limit) async def process(self, task: str) -> Dict: model = await self.router.route(task) with get_openai_callback() as cb: response = await model.agenerate([[task]]) cost = self.cost_controller.track_cost(cb) return { "result": response.generations[0][0].text, "model": model.model_name, "cost": cost }
Démontrons le système à travers un exemple de service client :
async def customer_service_demo(): system = MultiModelSystem(budget_limit=1.0) # Simple query - should route to GPT-3.5 simple_query = "What are your business hours?" simple_result = await system.process(simple_query) # Complex query - should route to GPT-4 complex_query = """ I'd like to understand your return policy. Specifically: 1. If the product has quality issues but has been used for a while 2. If it's a limited item but the packaging has been opened 3. If it's a cross-border purchase How should these situations be handled? What costs are involved? """ complex_result = await system.process(complex_query) return simple_result, complex_result
Lors de tests réels, nous avons comparé différentes stratégies :
Strategy | Avg Response Time | Avg Cost/Query | Accuracy |
---|---|---|---|
GPT-4 Only | 2.5s | .06 | 95% |
GPT-3.5 Only | 1.0s | .004 | 85% |
Hybrid Strategy | 1.5s | .015 | 92% |
Les systèmes de collaboration multimodèles peuvent réduire considérablement les coûts opérationnels tout en maintenant une qualité de service élevée. La clé est de :
Ce qui précède est le contenu détaillé de. pour plus d'informations, suivez d'autres articles connexes sur le site Web de PHP en chinois!