
My first open source contribution


Filing an issue

For my first contribution, I filed an issue on another project to add a new feature: a new flag option to display the tokens used for the prompt and the completion generation.

Feat: chat completion token info flag option #8

Posted by cleobnvntra

Description

A flag option that gives the user the count of tokens sent and received. I think it is an important feature that helps guide users to stay within their token budget when making chat completion requests!

Implementation

To do this, we need to add another optional flag, which could be -t and --token-usage. When the user includes this flag in their command, it should display in detail the number of tokens used to generate the completion and the number of tokens used in the prompt.

View on GitHub
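The issue only describes the flag itself. As a rough idea of how such an option could be registered in a Python CLI, here is a minimal argparse sketch; the parser setup and argument names are assumptions on my part, since the project's actual CLI wiring may differ:

import argparse

# Hypothetical sketch -- the project's real argument parsing may look different.
parser = argparse.ArgumentParser(prog="chat-minal")
parser.add_argument("input_text", help="Prompt text to send to the model")
parser.add_argument(
    "-t", "--token-usage",
    dest="token_usage",
    action="store_true",
    help="Display the number of tokens used by the prompt and the completion",
)

args = parser.parse_args()
token_usage = args.token_usage  # later checked when streaming the completion

From there, token_usage would be passed down to wherever the completion is generated, which is roughly what my pull request below does.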

I chose to contribute to fadingNA's open source project, chat-minal, a CLI tool written in Python that lets you leverage OpenAI to do various things, such as generating code reviews, converting files, generating Markdown from text, and summarizing text.

My pull request

I have written code in Python before, but it is not my strongest skill, so contributing to this project was a challenging but good learning experience for me.
The challenge was that I had to read and understand someone else's code and provide a proper solution in a way that would not break the code's design. Understanding the flow was important so that I could add the feature efficiently, without having to make major changes to the code, and keep the code consistent.

FEAT: Token usage flag #9

Posted by cleobnvntra

Feature

Added a feature to include a --token_usage flag option for the user. This option gives the user information about the number of tokens used for the prompt and the generated completion.

Implementation

The solution I came up with, based on the code's design, was to check for the existence of the token_usage flag. I did not want the code to evaluate any unnecessary statements if the token_usage flag was not used, so I made two separate but similar loops, the difference being a check for the existence of usage_metadata inside the chunks.

if token_usage:
    for chunk in runnable.stream({"input_text": input_text}):
        print(chunk.content, end="", flush=True)
        answer.append(chunk.content)

        if chunk.usage_metadata:
            completion_tokens = chunk.usage_metadata.get('output_tokens')
            prompt_tokens = chunk.usage_metadata.get('input_tokens')
else:
    for chunk in runnable.stream({"input_text": input_text}):
        print(chunk.content, end="", flush=True)
        answer.append(chunk.content)

Display

At the end of the get_completions() method, a check for the token_usage flag is added, which displays the token usage details to stderr if the flag was used.

if token_usage:
    logger.error(f"Tokens used for completion: <span class="pl-s1"><span class="pl-kos">{completion_tokens}</span>"</span>)
    logger.error(f"Tokens used for prompt: <span class="pl-s1"><span class="pl-kos">{prompt_tokens}</span>"</span>)
View on GitHub

My solution

Retrieving the token usage

if token_usage:
    # Stream the completion while also capturing token counts from the chunk metadata.
    for chunk in runnable.stream({"input_text": input_text}):
        print(chunk.content, end="", flush=True)
        answer.append(chunk.content)

        # usage_metadata is only present on chunks that report token usage.
        if chunk.usage_metadata:
            completion_tokens = chunk.usage_metadata.get('output_tokens')
            prompt_tokens = chunk.usage_metadata.get('input_tokens')
else:
    # Without the flag, just stream and collect the response content.
    for chunk in runnable.stream({"input_text": input_text}):
        print(chunk.content, end="", flush=True)
        answer.append(chunk.content)

Originally, the code only had one for loop, which retrieves the content from the stream and appends it to a list that forms the completion response.

Why did I write it this way?

My reasoning behind duplicating the for loop while adding the distinct if block was to prevent the code from repeatedly checking the if condition even when the user is not using the newly added --token_usage flag. So instead, I check for the flag first and then decide which for loop to execute.

Realization

Even though my pull request has been accepted by the project owner, I realized too late that this approach makes the code harder to maintain. For example, if the for loop that processes the stream ever needs to change, the code has to be modified twice, since there are two nearly identical for loops.

What I think I could do to improve it is to extract the loop into a function (as sketched below), so that any required change can be made in that one function, keeping the code maintainable. This just proves that even if I write code with optimization in mind, there are still other things I can miss that are crucial to a project, in this case maintainability.
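As a rough sketch of that refactor (the helper name stream_completion is mine, and I am assuming the same runnable.stream() interface shown above), the duplicated loops could collapse into a single function:

def stream_completion(runnable, input_text, answer, track_usage=False):
    # Stream the completion, print it as it arrives, and collect the chunks.
    completion_tokens = prompt_tokens = None
    for chunk in runnable.stream({"input_text": input_text}):
        print(chunk.content, end="", flush=True)
        answer.append(chunk.content)
        # Only look at usage_metadata when the flag was passed.
        if track_usage and chunk.usage_metadata:
            completion_tokens = chunk.usage_metadata.get('output_tokens')
            prompt_tokens = chunk.usage_metadata.get('input_tokens')
    return completion_tokens, prompt_tokens

# One call site replaces both loops:
completion_tokens, prompt_tokens = stream_completion(
    runnable, input_text, answer, track_usage=token_usage
)

With this, the token_usage branch only decides whether the metadata is read, and any future change to the streaming logic happens in one place.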

Receiving a pull request

My tool, genereadme, also received a contribution: a PR from Mounayer that adds the same feature to my project.

feat: added a new flag that displays the number of tokens sent in prompt and received in completion #13

Posted by Mounayer

Description

Closes #12.

  • Added a new flag --token-usage which, when given, prints the number of tokens that were sent in the prompt and the number of tokens that were returned in the completion to stderr.

This simply required the addition of another flag option, --token-usage:

   .option("--token-usage", "Show prompt and completion token usage")

I've also made sure to keep your naming conventions and formatting style consistent. In the for loop that runs the chat completion for each processed file, I accumulate the total tokens sent and received:

    promptTokens += response.usage.prompt_tokens;
    completionTokens += response.usage.completion_tokens;

which I then display at the end of the program's run time if the --token-usage flag is provided, as such:

    if (program.opts().tokenUsage) {
      console.error(`Prompt tokens: ${promptTokens}`);
      console.error(`Completion tokens: ${completionTokens}`);
    }
  • Updated README.md to explain the new flag.

Testing

Test 1

genereadme examples/sum.js --token-usage

This should display something like:

(Screenshot of the token usage output.)

Test 2

You can try it out with multiple files too, e.g.:

genereadme examples/sum.js examples/createUser.js --token-usage
View on GitHub

This time, instead of having to read someone else's code, someone had to read mine and contribute to it. It is nice knowing that someone is able to contribute to my project. To me, it means that they understood how my code works, so they were able to add the feature without breaking anything or adding any complexity to the code base.
That being said, reading code is also a skill that should not be underestimated. My code is nowhere near perfect, and I know there are still places I can improve, so credit is also due for being able to read and understand the code.

This specific pull request did not really require any back and forth changes as the code that was written by Mounayer is what I would have written myself.
