
Infusion v.0


Over the course of the past two weeks I have been working on a documentation-generation tool that uses the OpenAI API to generate new files with documentation in them. I built it with Python and the Click and LangChain libraries. The features include:

  • Automatically generates structured comments and documentation for source code.
  • Supports multiple programming languages (identified via file extension).
  • Handles multiple files at once (no batch processing yet).
  • Allows custom output directories to store the processed files.
  • Allows you to specify a model to use.
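To show how these pieces can fit together, here is a minimal sketch of a Click + LangChain command-line flow. This is not the actual Infusion source: the -m/--model and --output options mirror the usage examples further down, but the function names, prompt wording, default model, and overall structure are illustrative assumptions.

# Minimal sketch only; NOT the real Infusion implementation.
from pathlib import Path

import click
from langchain_openai import ChatOpenAI


def document_source(code: str, language: str, model: str) -> str:
    """Ask the model to return the same code with documentation inserted."""
    llm = ChatOpenAI(model=model)
    prompt = (
        f"Add structured documentation comments to the following {language} "
        f"source code. Return only the modified code.\n\n{code}"
    )
    return llm.invoke(prompt).content


@click.command()
@click.argument("file_paths", nargs=-1, type=click.Path(exists=True))
@click.option("-m", "--model", default="gpt-4o", help="OpenAI model to use (default here is a guess).")
@click.option("--output", type=click.Path(), default=None, help="Optional output directory.")
def cli(file_paths, model, output):
    for path in map(Path, file_paths):
        # The language is inferred from the file extension.
        language = path.suffix.lstrip(".")
        documented = document_source(path.read_text(), language, model)
        if output:
            out_dir = Path(output)
            out_dir.mkdir(parents=True, exist_ok=True)
            (out_dir / path.name).write_text(documented)
        else:
            # Default behaviour: print the documented code to stdout.
            click.echo(documented)


if __name__ == "__main__":
    cli()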

You can access the GitHub repo here:
https://github.com/SychAndrii/infusion

Infusion is a command-line tool designed to assist developers by generating documentation for their source code. Given a set of file paths, Infusion uses language models such as OpenAI’s GPT to insert appropriate comments and documentation into those files. The tool supports multiple programming languages.

It is particularly useful when you need structured comments (e.g., JSDoc for JavaScript/TypeScript or JavaDoc for Java) or simple comments above functions and classes. Infusion saves the modified files to a specified output directory.
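To make that concrete, here is a purely illustrative before/after on a small Python function. The generated wording below is hypothetical, not actual Infusion output; the exact comments depend on the model you use.

# Before: an undocumented function.
def area(width, height):
    return width * height

# After: the kind of documented copy the tool aims to produce
# (wording is illustrative, not real output).
def area(width, height):
    """Calculate the area of a rectangle.

    Args:
        width: The width of the rectangle.
        height: The height of the rectangle.

    Returns:
        The product of width and height.
    """
    return width * height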

Installation

To install and run Infusion locally, clone the GitHub repository.

git clone https://github.com/SychAndrii/infusion.git
cd infusion

After that, you will have to set up a virtual environment and install all the dependencies.

If you are on Windows, use PowerShell to set up the virtual environment with the following command:

./setup/setup.ps1

If you are on macOS or Linux, use the following command:

./setup/setup.sh

Once the virtual environment is set up, you can use the Infusion tool by running:

pipenv run infsue [OPTIONS] [FILE_PATHS]...

Usage

To use Infusion, run the following command, replacing FILE_PATHS with the paths to the source code files you want to process.

Process a single file:

pipenv run infsue ./path/to/source.py

Process a single file with a different OpenAI model:

pipenv run infsue -m gpt-4o-mini ./path/to/source.py

Process a single file and specify an output folder:

pipenv run infsue ./path/to/source.py --output my_output_folder

Process multiple files:

pipenv run infsue ./file1.js ./file2.py

Process multiple files using a shell glob instead of listing each one:

pipenv run infsue ./folder/*

Process multiple files and specify an output folder to save files to instead of printing them to stdout:

pipenv run infsue ./file1.js ./file2.py --output my_output_folder

For a more practical example of how to use this tool, please see the GitHub repository! I'd love it if you opened an issue to suggest improvements to my codebase!
