
Building codeshift

WBOY · Original · 2024-09-10 11:08:32 · 955 views

This week, I've been working on a command-line tool called codeshift, which lets users input source code files, select a programming language, and translate the files into the language of their choice.


There's nothing fancy going on behind the scenes - it just uses an AI provider called Groq to handle the translation - but I wanted to cover the development process, how it's used, and the features it offers.

Uday Rana / codeshift

A command-line tool to convert source code files into any language.

Features

  • Accepts multiple input files
  • Streams output to standard output
  • Can choose an output language
  • Can specify a file path to write output to a file
  • Can use a custom API key in a .env file

Installation

  • Install Node.js
  • Get a Groq API key
  • Clone the repository with Git or download it as a .zip
  • In the repo directory containing package.json, run npm install
    • (Optional) Run npm install -g . to install the package globally (lets you run it without the node prefix)
  • Create a file called .env and add your Groq API key: GROQ_API_KEY=API_KEY_HERE

Usage

codeshift [-o <output-filename>] <output-language> <input-files...>

Example

To translate the file examples/index.js into Go and save the output to index.go:

codeshift -o index.go go examples/index.js


Options

  • -o, --output: specify a filename to write the output to
  • -h, --help: display help for the command
  • -v, --version: output the version number

Arguments

  • <output-language>: the desired language to convert the source files to
  • <input-files...>: paths to the source files, separated by spaces
View on GitHub


Development

I've been working on this project as part of a course on open source development at Seneca Polytechnic in Toronto, Ontario. Initially, I wanted to stick with technologies I was familiar with, but the project instructions encouraged us to learn something new, like a new programming language or a new runtime.

While I've always wanted to learn Java, after doing some research online it didn't seem like the best choice for developing a CLI tool or interacting with AI models. It isn't officially supported by OpenAI, and the community library referenced in their documentation is deprecated.

I've always stuck to popular technologies - they tend to be reliable, with complete documentation and plenty of information available online. But this time, I decided to do things differently. I decided to use Bun, a cool new JavaScript runtime designed to replace Node.

It turns out I should have trusted my gut. I ran into trouble trying to compile my project, and all I could do was hope the developers would fix the issue.

Can't use OpenAI SDK with Sentry Node agent: TypeError: getDefaultAgent is not a function #1010

Keith Wall posted:

Confirm this is a Node library issue and not an underlying OpenAI API issue

  • [X] This is an issue with the Node library

Describe the bug

Referenced previously here, and closed without being resolved: https://github.com/openai/openai-node/issues/903

This is a fairly significant issue, as it prevents use of the SDK while using the latest Sentry monitoring package.

To Reproduce

  1. Install the Sentry Node SDK via npm i @sentry/node --save
  2. Enter the following code:
import * as Sentry from '@sentry/node';

// Start Sentry
  Sentry.init({
    dsn: "https://your-sentry-url",
    environment: "your-env",
    tracesSampleRate: 1.0, //  Capture 100% of the transactions
  });
  3. Try to create a completion somewhere in the process after Sentry has been initialized:
const params = {
  model: model,
  stream: true,
  stream_options: {
    include_usage: true
  },
  messages
};
const completion = await openai.chat.completions.create(params);

Results in error:

TypeError: getDefaultAgent is not a function
    at OpenAI.buildRequest (file:///my-project/node_modules/openai/core.mjs:208:66)
    at OpenAI.makeRequest (file:///my-project/node_modules/openai/core.mjs:279:44)

Code snippets

(Included)

OS

All operating systems (macOS, Linux)

Node version

v20.10.0

Library version

v4.56.0

View on GitHub

This turned me away from Bun. I'd found out from our professor we were going to compile an executable later in the course, and I did not want to deal with Bun's problems down the line.

So, I switched to Node. It was painful going from Bun's easy-to-use built-in APIs to having to learn how to use commander for Node. But at least it wouldn't crash.

I had previous experience working with AI models through code thanks to my co-op, but I was unfamiliar with creating a command-line tool. Configuring the options and arguments turned out to be the most time-consuming aspect of the project.

Apart from the core feature we chose for each of our projects - mine being code translation - we were asked to implement any two additional features. One of the features I chose to implement was to save output to a specified file. Currently, I'm not sure this feature is that useful, since you could just redirect the output to a file, but in the future I want to use it to extract the code from the response to the file, and include the AI's rationale behind the translation in the full response to stdout. Writing this feature also helped me learn about global and command-based options using commander.js. Since there was only one command (run) and it was the default, I wanted the option to show up in the default help menu, not when you specifically typed codeshift help run, so I had to learn to implement it as a global option.

I also ended up "accidentally" implementing the feature for streaming the response to stdout. I was at first scared away from streaming, because it sounded too difficult. But later, when I was trying to read the input files, I figured reading large files in chunks would be more efficient. I realized I'd already implemented streaming in my previous C++ courses, and figuring it wouldn't be too bad, I got to work.

Then, halfway through my implementation I realized I'd have to send the whole file at once to the AI regardless.

But this encouraged me to try streaming the output from the AI. So I hopped on MDN and started reading about ReadableStreams and messing around with ReadableStreamDefaultReader.read() for what felt like an hour - only to scroll down the AI provider's documentation and realize all I had to do was add stream: true to my request.

Either way, I may have taken the scenic route but I ended up implementing streaming.

Planned Features

Right now, the program parses each source file individually, with no shared context. So if a file references another, it wouldn't be reflected in the output. I'd like to enable it to have that context eventually. Like I mentioned, another feature I want to add is writing the AI's reasoning behind the translation to stdout but leaving it out of the output file. I'd also like to add some of the other optional features, like options to specify the AI model to use, the API key to use, and reading that data from a .env file in the same directory.

That's about it for this post. I'll be writing more in the coming weeks.

