With audio content consumption on the rise, the ability to turn documents and written material into realistic audio has recently become a trend. While Google's NotebookLM has drawn attention in this space, I wanted to explore building a similar system with modern cloud services. In this article, I'll walk you through building a scalable, cloud-native system that converts documents into high-quality podcasts using FastAPI, Firebase, Google Cloud Pub/Sub, and Azure's Text-to-Speech service.
Here's a showcase of what the system produces: MyPodify Showcase
Converting a document into a podcast isn't as simple as feeding text through a text-to-speech engine. It requires careful processing, natural-language understanding, and the ability to handle a variety of document formats, all while maintaining a smooth user experience. Let's break down the key components and see how they work together:
FastAPI serves as our backend framework, chosen for several compelling reasons. Here's a detailed look at our upload endpoint:
```python
import uuid
from datetime import datetime
from typing import Annotated, List, Optional

from fastapi import Depends, File, UploadFile

@app.post('/upload')
async def upload_files(
    token: Annotated[ParsedToken, Depends(verify_firebase_token)],
    project_name: str,
    description: str,
    website_link: str,
    host_count: int,
    files: Optional[List[UploadFile]] = File(None)
):
    # Validate token
    user_id = token['uid']

    # Generate unique identifiers
    project_id = str(uuid.uuid4())
    podcast_id = str(uuid.uuid4())

    # Process and store files
    file_urls = await process_uploads(files, user_id, project_id)

    # Create Firestore document
    await create_project_document(user_id, project_id, {
        'status': 'pending',
        'created_at': datetime.now(),
        'project_name': project_name,
        'description': description,
        'file_urls': file_urls
    })

    # Trigger async processing
    await publish_to_pubsub(user_id, project_id, podcast_id, file_urls)

    return {'project_id': project_id, 'status': 'processing'}
```
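The `verify_firebase_token` dependency used above isn't shown in this post. As a rough sketch, the token-extraction half can be a plain function (the Firebase verification step in the comment uses the Admin SDK's `verify_id_token`; the function name here is an assumption, not the production code):

```python
def extract_bearer_token(authorization: str) -> str:
    """Pull the raw JWT out of an 'Authorization: Bearer <token>' header."""
    scheme, _, token = authorization.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("Expected 'Bearer <token>'")
    return token

# A FastAPI dependency would wrap this and verify the token with the
# Firebase Admin SDK, roughly:
#   import firebase_admin.auth
#   async def verify_firebase_token(authorization: str = Header(...)):
#       try:
#           return firebase_admin.auth.verify_id_token(
#               extract_bearer_token(authorization))
#       except Exception:
#           raise HTTPException(status_code=401, detail="Invalid token")
```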
Firebase provides two key services for our application: Firestore for real-time project state and Cloud Storage for uploaded files. Here's how we implement real-time status updates:
```python
from datetime import datetime

async def update_status(user_id: str, project_id: str, status: str, metadata: dict = None):
    doc_ref = db.collection('projects').document(f'{user_id}/{project_id}')

    update_data = {
        'status': status,
        'updated_at': datetime.now()
    }

    if metadata:
        update_data.update(metadata)

    await doc_ref.update(update_data)
```
Pub/Sub serves as our messaging backbone, letting the API hand heavy processing off to workers asynchronously.
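The upload endpoint hands work off through `publish_to_pubsub`, whose body isn't shown in this post. A minimal sketch of the serialization half (the topic name and function name are assumptions; actual publishing uses the official `google-cloud-pubsub` client as in the comment):

```python
import json

def build_job_message(user_id, project_id, podcast_id, file_urls,
                      action="CREATE_PROJECT"):
    """Serialize a job payload for Pub/Sub (message bodies must be bytes)."""
    return json.dumps({
        'user_id': user_id,
        'project_id': project_id,
        'podcast_id': podcast_id,
        'file_urls': file_urls,
        'action': action,
    }).encode("utf-8")

# Publishing itself would use the official client, e.g.:
#   from google.cloud import pubsub_v1
#   publisher = pubsub_v1.PublisherClient()
#   topic_path = publisher.topic_path("my-gcp-project", "podcast-jobs")
#   publisher.publish(topic_path, build_job_message(...)).result()
```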
Here's an example message structure:

```python
{
    'user_id': 'uid_123',
    'project_id': 'proj_456',
    'podcast_id': 'pod_789',
    'file_urls': ['gs://bucket/file1.pdf'],
    'description': 'Technical blog post about cloud architecture',
    'host_count': 2,
    'action': 'CREATE_PROJECT'
}
```

The core of our audio generation uses Azure's Cognitive Services Speech SDK. Let's look at how we implement natural-sounding speech synthesis:

```python
import os
import logging

import azure.cognitiveservices.speech as speechsdk

logger = logging.getLogger(__name__)

class SpeechGenerator:
    def __init__(self):
        self.speech_config = speechsdk.SpeechConfig(
            subscription=os.getenv("AZURE_SPEECH_KEY"),
            region=os.getenv("AZURE_SPEECH_REGION")
        )

    async def create_speech_segment(self, text, voice, output_file):
        try:
            self.speech_config.speech_synthesis_voice_name = voice
            synthesizer = speechsdk.SpeechSynthesizer(
                speech_config=self.speech_config,
                audio_config=None
            )

            # Generate speech from text
            result = synthesizer.speak_text_async(text).get()

            if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
                with open(output_file, "wb") as audio_file:
                    audio_file.write(result.audio_data)
                return True
            return False

        except Exception as e:
            logger.error(f"Speech synthesis failed: {str(e)}")
            return False
```

One of the system's unique features is the ability to generate multi-voice podcasts with AI. Here's how we handle script generation for different hosts:

```python
async def generate_podcast_script(outline: str, analysis: str, host_count: int):
    # System instructions for different podcast formats
    system_instructions = TWO_HOST_SYSTEM_PROMPT if host_count > 1 else ONE_HOST_SYSTEM_PROMPT

    # Example of how we structure the AI conversation
    if host_count > 1:
        script_format = """
        **Alex**: "Hello and welcome to MyPodify! I'm your host Alex, joined by..."
        **Jane**: "Hi everyone! I'm Jane, and today we're diving into {topic}..."
        """
    else:
        script_format = """
        **Alex**: "Welcome to MyPodify! Today we're exploring {topic}..."
        """

    # Generate the complete script using AI
    script = await generate_content_from_openai(
        content=f"{outline}\n\nContent Details:{analysis}",
        system_instructions=system_instructions,
        purpose="Podcast Script"
    )

    return script
```

For voice synthesis, we map each speaker to a specific Azure voice, which is passed as the `voice` argument to `create_speech_segment` above.

The worker component handles the heavy lifting:

1. Document analysis
2. Content processing
3. Audio generation

Here's a simplified view of our worker logic:
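A minimal sketch of that pipeline, with injected dependencies standing in for the real helpers (`split_script_into_segments`, the speaker-to-voice mapping, and the voice names are illustrative assumptions, not the production code):

```python
import asyncio

# Hypothetical speaker-to-voice mapping; actual Azure voice names may differ
VOICE_MAP = {
    "Alex": "en-US-AndrewNeural",
    "Jane": "en-US-JennyNeural",
}

def split_script_into_segments(script: str):
    """Split a '**Speaker**: "line"' script into (speaker, text) pairs."""
    segments = []
    for line in script.splitlines():
        line = line.strip()
        if line.startswith("**") and "**:" in line:
            speaker, text = line.split("**:", 1)
            segments.append((speaker.strip("* "), text.strip().strip('"')))
    return segments

async def handle_job(message: dict, analyze, write_script, synthesize, update_status):
    """One job: analyze documents -> generate script -> synthesize each segment."""
    await update_status(message["user_id"], message["project_id"], "processing")
    analysis = await analyze(message["file_urls"])
    script = await write_script(analysis, message["host_count"])
    for i, (speaker, text) in enumerate(split_script_into_segments(script)):
        voice = VOICE_MAP.get(speaker, VOICE_MAP["Alex"])
        await synthesize(text, voice, f"segment_{i:03d}.wav")
    await update_status(message["user_id"], message["project_id"], "completed")
```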
The system implements comprehensive error handling:

- Retry logic
- Status tracking
- Resource cleanup
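The retry code itself isn't reproduced in this post; here's a minimal sketch of one way to implement it, with exponential backoff (the attempt count and delays are illustrative):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def with_retries(operation, max_attempts=3, base_delay=1.0):
    """Run an async operation, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await operation()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # exhausted: let the caller mark the job as failed
            delay = base_delay * 2 ** (attempt - 1)
            logger.warning("Attempt %d failed (%s); retrying in %.1fs",
                           attempt, exc, delay)
            await asyncio.sleep(delay)
```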
To handle production loads, we implemented several optimizations:

- Worker scaling
- Storage optimization
- Processing optimization
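As one illustration of the processing optimization, audio segments can be synthesized concurrently behind a bounded semaphore, which keeps throughput high without exhausting TTS quotas (the concurrency limit and helper name are assumptions):

```python
import asyncio

async def synthesize_all(segments, synthesize, max_concurrency=4):
    """Synthesize audio segments concurrently, bounded to protect TTS quotas."""
    semaphore = asyncio.Semaphore(max_concurrency)

    async def bounded(i, text):
        async with semaphore:
            return await synthesize(text, f"segment_{i:03d}.wav")

    # gather preserves input order even though work runs concurrently
    return await asyncio.gather(*(bounded(i, t) for i, t in enumerate(segments)))
```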
The system also includes comprehensive monitoring.
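The monitoring code isn't shown above; a minimal sketch of per-stage timing with structured logs, assuming only the standard `logging` module (the stage names and log format are illustrative):

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger("worker.metrics")

@contextmanager
def timed_stage(stage: str, project_id: str):
    """Log how long each pipeline stage takes, tagged with the project id."""
    start = time.perf_counter()
    try:
        yield
        logger.info("stage=%s project=%s status=ok duration=%.2fs",
                    stage, project_id, time.perf_counter() - start)
    except Exception:
        logger.error("stage=%s project=%s status=error duration=%.2fs",
                     stage, project_id, time.perf_counter() - start)
        raise
```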
While the current system works well, there are exciting possibilities for future improvement:

- Enhanced audio processing
- Content enrichment
- Platform integrations
Building a document-to-podcast converter has been an exciting journey into modern cloud architecture. The combination of FastAPI, Firebase, Google Cloud Pub/Sub, and Azure's Text-to-Speech service provides a solid foundation for handling complex document processing at scale.

The event-driven architecture keeps the system responsive under load, while managed services reduce operational overhead. Whether you're building a similar system or simply exploring cloud-native architecture, I hope this deep dive has offered valuable insights into building scalable, production-ready applications.
Want to learn more about cloud architecture and modern application development? Follow me for more hands-on technical tutorials.