Chat with Repo(PRs) using Llama 3.1
By Tobi.A
When working with large repositories, keeping up with pull requests (PRs), especially those containing thousands of lines of code, can be a real challenge. Whether it's understanding the impact of specific changes or navigating through massive updates, PR reviews can quickly become overwhelming. To tackle this, I set out to build a project that would let me quickly and efficiently understand changes within these large PRs.
Using Retrieval-Augmented Generation (RAG) combined with Langtrace's observability tools, I developed "Chat with Repo(PRs)", a tool aimed at simplifying the process of reviewing large PRs. Additionally, I documented and compared the performance of Llama 3.1 against GPT-4o. Through this project, I explored how these models handle code explanations and summarizations, and which offers the best balance of speed and accuracy for this use case.
All code used in this blog can be found here
Before we dive into the details, let's outline the key tools employed in this project:
LLM Services: OpenAI (GPT-4o), Groq, and Ollama (Llama 3.1)
Embedding Model: SentenceTransformers
Vector Database: FAISS
LLM Observability: Langtrace
The Chat with Repo(PRs) system implements a simple RAG architecture for PR analysis. It begins by ingesting PR data via GitHub's API, chunking large files to manage token limits. These chunks are vectorized using SentenceTransformers, creating dense embeddings that capture code semantics. A FAISS index enables sub-linear time similarity search over these embeddings. Queries undergo the same embedding process, facilitating semantic matching against the code index. The retrieved chunks form a dynamic context for the chosen LLM (via OpenAI, Groq, or Ollama), which then performs contextualized inference. This approach leverages both the efficiency of vector search and the generative power of LLMs, allowing for nuanced code understanding that adapts to varying PR complexities. Finally, the Langtrace integration provides granular observability into embedding and LLM operations, offering insights into performance bottlenecks and potential optimizations in the RAG pipeline. Let's dive into its key components.
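Before looking at each component, here is a minimal sketch of that final generation step, since it is the one piece not shown in the component code below. The helper function and prompt wording are illustrative assumptions rather than the project's exact implementation; only the OpenAI client call follows the real API, and Groq's SDK and Ollama's OpenAI-compatible endpoint follow a similar pattern.

from openai import OpenAI

def answer_question(query: str, retrieved_chunks: list[dict], model: str = "gpt-4o") -> str:
    # Assemble the retrieved chunks (code plus explanations) into one context block.
    context = "\n\n".join(
        f"File: {c['metadata']['file']} (chunk {c['metadata']['chunk_number']}/{c['metadata']['total_chunks']})\n"
        f"{c['code']}\n"
        f"Explanation: {c['explanations']['code']}"
        for c in retrieved_chunks
    )
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a code reviewer. Answer using only the provided PR context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content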
The chunking process in this system is designed to break down large pull requests into manageable, context-rich pieces. The core of this process is implemented in the IngestionService class, particularly in the chunk_large_file and create_chunks_from_patch methods.
When a PR is ingested, each file's patch is processed individually. The chunk_large_file method is responsible for splitting large files:
def chunk_large_file(self, file_patch: str, chunk_size: int = config.CHUNK_SIZE) -> List[str]:
    lines = file_patch.split('\n')
    chunks = []
    current_chunk = []
    current_chunk_size = 0
    for line in lines:
        line_size = len(line)
        if current_chunk_size + line_size > chunk_size and current_chunk:
            chunks.append('\n'.join(current_chunk))
            current_chunk = []
            current_chunk_size = 0
        current_chunk.append(line)
        current_chunk_size += line_size
    if current_chunk:
        chunks.append('\n'.join(current_chunk))
    return chunks
This method splits the file based on a configured chunk size, ensuring that each chunk doesn't exceed this limit. It's a line-based approach that tries to keep logical units of code together as much as possible within the size constraint.
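As a quick illustration, assuming IngestionService can be constructed with no arguments, a long synthetic patch is split into several size-bounded chunks (values are hypothetical):

# A synthetic 500-line patch, used only to demonstrate the splitting behavior.
patch = "\n".join(f"+    line_{i} = compute({i})" for i in range(500))
service = IngestionService()
chunks = service.chunk_large_file(patch, chunk_size=1000)
print(len(chunks), max(len(c) for c in chunks))  # several chunks, each around 1,000 characters at most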
Once the file is split into chunks, the create_chunks_from_patch method processes each chunk. This method enriches each chunk with contextual information:
def create_chunks_from_patch(self, repo_info, pr_info, file_info, repo_explanation, pr_explanation):
    code_blocks = self.chunk_large_file(file_info['patch'])
    # File-level explanation, generated the same way as the per-chunk explanations below
    file_explanation = self.generate_safe_explanation(
        f"Explain the purpose of the file {file_info['filename']} and the changes made to it in this PR"
    )
    chunks = []
    for i, block in enumerate(code_blocks):
        chunk_explanation = self.generate_safe_explanation(
            f"Explain this part of the code and its changes: {block}"
        )
        chunk = {
            "code": block,
            "explanations": {
                "repository": repo_explanation,
                "pull_request": pr_explanation,
                "file": file_explanation,
                "code": chunk_explanation
            },
            "metadata": {
                "repo": repo_info["name"],
                "pr_number": pr_info["number"],
                "file": file_info["filename"],
                "chunk_number": i + 1,
                "total_chunks": len(code_blocks),
                "timestamp": pr_info["updated_at"]
            }
        }
        chunks.append(chunk)
    return chunks
It generates an explanation for each code block using the LLM service.
It attaches metadata including the repository name, PR number, file name, chunk number, and timestamp.
It includes broader context like repository and pull request explanations.
This approach ensures that each chunk is not just a slice of code, but a rich, context-aware unit that combines the code itself, layered explanations (repository, pull request, file, and code level), and identifying metadata.
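For illustration, a single enriched chunk might look like the following (the keys mirror the code above; all values are hypothetical):

chunk = {
    "code": "@@ -42,6 +42,14 @@ def process_event(...): ...",
    "explanations": {
        "repository": "High-level summary of what the repository does.",
        "pull_request": "What this PR changes overall.",
        "file": "What this file is responsible for and how the patch affects it.",
        "code": "What this specific hunk adds or modifies.",
    },
    "metadata": {
        "repo": "example-org/example-repo",
        "pr_number": 123,
        "file": "src/events/processor.py",
        "chunk_number": 1,
        "total_chunks": 4,
        "timestamp": "2024-08-30T12:00:00Z",
    },
}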
The EmbeddingService class handles the creation of embeddings and similarity search:
1. Embedding Creation:
For each chunk, we create an embedding using SentenceTransformer:
text_to_embed = self.get_full_context(chunk)
embedding = self.model.encode([text_to_embed])[0]
The embedding combines code content, code explanation, file explanation, PR explanation, and repository explanation.
2. Indexing:
We use FAISS to index these embeddings:
self.index.add(np.array([embedding]))
3. Query Processing:
When a question is asked, we create an embedding for the query and perform a similarity search:
query_vector = self.model.encode([query])
D, I = self.index.search(query_vector, k)
4. Chunk Selection:
The system selects the top k chunks (default 3) with the highest similarity scores.
This captures both code structure and semantic meaning, allowing for relevant chunk retrieval even when queries don't exactly match code syntax. FAISS enables efficient similarity computations, making it quick to find relevant chunks in large repositories.
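Putting these pieces together, here is a minimal sketch of what such an embedding service can look like. The model name (all-MiniLM-L6-v2), the flat L2 index, and the way get_full_context concatenates the explanations are assumptions for illustration; the actual service may differ on all three.

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

class EmbeddingService:
    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)
        self.index = faiss.IndexFlatL2(self.model.get_sentence_embedding_dimension())
        self.chunks = []  # keeps chunks aligned with their positions in the FAISS index

    def get_full_context(self, chunk: dict) -> str:
        # Combine code content with every level of explanation into one embeddable string.
        e = chunk["explanations"]
        return "\n".join([chunk["code"], e["code"], e["file"], e["pull_request"], e["repository"]])

    def add_chunk(self, chunk: dict) -> None:
        embedding = self.model.encode([self.get_full_context(chunk)])[0]
        self.index.add(np.array([embedding], dtype=np.float32))
        self.chunks.append(chunk)

    def search(self, query: str, k: int = 3) -> list:
        # Embed the query and return the k most similar chunks.
        query_vector = self.model.encode([query]).astype(np.float32)
        D, I = self.index.search(query_vector, k)
        return [self.chunks[i] for i in I[0] if i != -1]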
To ensure comprehensive observability and performance monitoring, we've integrated Langtrace into our "Chat with Repo(PRs)" application. Langtrace provides real-time tracing, evaluations, and metrics for our LLM interactions, vector database operations, and overall application performance.
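Wiring Langtrace in takes very little code. A minimal setup sketch, assuming the langtrace-python-sdk package and an API key from the Langtrace dashboard, looks roughly like this:

from langtrace_python_sdk import langtrace

# Initialize once at application startup, before the LLM and vector DB clients are used,
# so their calls are captured automatically.
langtrace.init(api_key="<YOUR_LANGTRACE_API_KEY>")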
To explore how open-source models compare to their closed-source counterparts in handling large PRs, I conducted a comparative analysis between Llama 3.1 (open source) and GPT-4o (closed source). The test case involved a significant update to the Langtrace repository, with over 2,300 additions, nearly 200 deletions, 250 commits, and changes across 47 files. My goal was to quickly understand these specific changes and assess how each model performs in code review tasks.
Methodology:
I posed a set of technical questions about the pull request (PR) to each model.
Both models were provided with the same code snippets and contextual information. Their responses were evaluated based on:
Code Understanding
Knowledge of Frameworks
Architectural Insights
Handling Uncertainty
Technical Detail vs. Broader Context
Below are examples of questions posed to both models, the expected output, and their respective answers:
While GPT-4o remains stronger in broader architectural insights, Llama 3.1's rapid progress and versatility in code comprehension make it a powerful option for code review. Open-source models are catching up quickly, and as they continue to improve, they could play a significant role in democratizing AI-assisted software development. The ability to tailor and integrate these models into specific development workflows could soon make them indispensable tools for reviewing, debugging, and managing large codebases.
We'd love to hear your thoughts! Join our community on Discord or reach out at support@langtrace.ai to share your experiences, insights, and suggestions. Together, we can continue advancing observability in LLM development and beyond.
Happy tracing!
Useful Resources
Getting Started with Langtrace https://docs.langtrace.ai/introduction
Langtrace Twitter (X) https://x.com/langtrace_ai
Langtrace LinkedIn https://www.linkedin.com/company/langtrace/about/
Langtrace Website https://langtrace.ai/
Langtrace Discord https://discord.langtrace.ai/
Langtrace GitHub https://github.com/Scale3-Labs/langtrace