
Building codeshift

WBOY · Original · 2024-09-10 11:08:32 · 929 views

This week I've been working on a command-line tool called codeshift, which lets users supply source code files, pick a programming language, and have the code translated into the language of their choice.


There's nothing fancy behind the scenes - it just uses an AI provider called Groq to handle the translation - but I wanted to share the development process, how it's used, and the features it offers.

uday-rana / codeshift

codeshift

A command-line tool to translate source code files into any language.


Features

  • Accepts multiple input files
  • Streams output to stdout
  • Can choose the output language
  • Can specify a file path to write the output to a file
  • Can use a custom API key in a .env file

Installation

  • Install Node.js
  • Get a Groq API key
  • Clone the repository with Git or download it as a .zip
  • In the repo directory containing package.json, run npm install
    • (Optional) Run npm install -g . to install the package globally (lets you run it without prefixing node)
  • Create a file called .env and add your Groq API key: GROQ_API_KEY=API_KEY_HERE

Usage

codeshift [-o <output-filename>] <output-language> <input-files...>

Examples

codeshift -o index.go go examples/index.js


Options

  • -o, --output: specify a filename to write the output to
  • -h, --help: display help for the command
  • -v, --version: output the version number

Arguments

  • <output-language>: the desired language to convert the source files to
  • <input-files...>: paths to the source files, separated by spaces
View on GitHub

Features

  • Accepts multiple input files
  • Can choose the output language
  • Streams output to stdout
  • Can specify a file path to write the output to a file
  • Can use a custom API key in a .env file

Usage

codeshift [-o <output-filename>] <output-language> <input-files...>

For example, to translate the file examples/index.js to Go and save the output to index.go:

codeshift -o index.go go examples/index.js


Options

  • -o, --output: specify a filename to write the output to
  • -h, --help: display help for the command
  • -v, --version: output the version number

Arguments

  • <output-language>: the desired language to convert the source files to
  • <input-files...>: paths to the source files, separated by spaces
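Under the hood, the translation itself is a single chat-completion request to Groq's OpenAI-compatible API. As a rough sketch (the model id and prompt wording here are my own illustrative assumptions, not the tool's actual source), the request could be built like this:

```javascript
// Illustrative sketch only: the model id and prompt wording are assumptions,
// not taken from codeshift's actual source.
function buildTranslationRequest(outputLanguage, sources) {
  return {
    model: "llama-3.1-8b-instant", // example Groq model id
    stream: true, // stream tokens back as they are generated
    messages: [
      {
        role: "system",
        content: `Translate the following source code to ${outputLanguage}. Respond with code only.`,
      },
      // one user message per input file
      ...sources.map((code) => ({ role: "user", content: code })),
    ],
  };
}

const request = buildTranslationRequest("go", ["console.log('hi');"]);
console.log(request.messages.length); // system prompt + one file
```

A Groq client (which mirrors the OpenAI SDK's interface) would then pass this object to chat.completions.create().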

Development

I worked on this project as part of the Topics in Open Source Development course at Seneca Polytechnic in Toronto, Ontario. Starting out, I wanted to stick with technologies I was comfortable with, but the project instructions encouraged us to learn something new, like a new programming language or a new runtime.

While I've always wanted to learn Java, after some research online it didn't seem like the best choice for developing a CLI tool or for interacting with AI models. It isn't officially supported by OpenAI, and the community library mentioned in their docs is deprecated.

I usually stick to popular technologies - they tend to be reliable, with complete documentation and plenty of information online. But this time, I decided to do things differently. I decided to use Bun, a cool new JavaScript runtime designed to replace Node.

It turned out I should have trusted my gut. I ran into trouble trying to compile my project, and all I could do was hope the developers would fix the issue.

Can't use OpenAI SDK with Sentry Node agent: TypeError: getDefaultAgent is not a function #1010

Posted by Keith Wall

Confirm this is a Node library issue, and not an underlying OpenAI API issue

  • [X] This is an issue with the Node library

Describe the bug

Referenced previously here, and closed without being resolved: https://github.com/openai/openai-node/issues/903

This is a fairly significant problem, as it prevents the SDK from being used alongside the latest Sentry monitoring package.

To Reproduce

  1. Install the Sentry Node SDK via npm i @sentry/node --save
  2. Enter the following code:
import * as Sentry from '@sentry/node';

// Start Sentry
  Sentry.init({
    dsn: "https://your-sentry-url",
    environment: "your-env",
    tracesSampleRate: 1.0, //  Capture 100% of the transactions
  });
  3. Try to create a completion somewhere in the process after Sentry has been initialized:
import OpenAI from 'openai';

// Setup not shown in the original issue snippet: it assumes an initialized
// client, a model id, and a messages array already exist.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const model = 'gpt-4o-mini'; // any chat model id
const messages = [{ role: 'user', content: 'Hello' }];

const params = {
  model: model,
  stream: true,
  stream_options: {
    include_usage: true
  },
  messages
};
const completion = await openai.chat.completions.create(params);

Results in error:

TypeError: getDefaultAgent is not a function
    at OpenAI.buildRequest (file:///my-project/node_modules/openai/core.mjs:208:66)
    at OpenAI.makeRequest (file:///my-project/node_modules/openai/core.mjs:279:44)

Code snippets

(Included)

OS

All operating systems (macOS, Linux)

Node version

v20.10.0

Library version

v4.56.0

View on GitHub

This turned me away from Bun. I'd found out from our professor we were going to compile an executable later in the course, and I did not want to deal with Bun's problems down the line.

So, I switched to Node. It was painful going from Bun's easy-to-use built-in APIs to having to learn how to use commander for Node. But at least it wouldn't crash.

I had previous experience working with AI models through code thanks to my co-op, but I was unfamiliar with creating a command-line tool. Configuring the options and arguments turned out to be the most time-consuming aspect of the project.

Apart from the core feature we chose for each of our projects - mine being code translation - we were asked to implement any two additional features. One of the features I chose to implement was to save output to a specified file. Currently, I'm not sure this feature is that useful, since you could just redirect the output to a file, but in the future I want to use it to extract the code from the response to the file, and include the AI's rationale behind the translation in the full response to stdout. Writing this feature also helped me learn about global and command-based options using commander.js. Since there was only one command (run) and it was the default, I wanted the option to show up in the default help menu, not when you specifically typed codeshift help run, so I had to learn to implement it as a global option.

I also ended up "accidentally" implementing the feature for streaming the response to stdout. I was at first scared away from streaming, because it sounded too difficult. But later, when I was trying to read the input files, I figured reading large files in chunks would be more efficient. I realized I'd already implemented streaming in my previous C++ courses, and figuring it wouldn't be too bad, I got to work.

Then, halfway through my implementation I realized I'd have to send the whole file at once to the AI regardless.

But this encouraged me to try streaming the output from the AI. So I hopped on MDN and started reading about ReadableStreams and messing around with ReadableStreamDefaultReader.read() for what felt like an hour - only to scroll down the AI provider's documentation and realize all I had to do was add stream: true to my request.

Either way, I may have taken the scenic route but I ended up implementing streaming.
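For what it's worth, the streaming loop itself ends up small. With stream: true, the SDK returns an async iterable of chunks; below is a minimal sketch, with the network call replaced by an async generator (an assumed stand-in for the real Groq/OpenAI response) so the shape is visible without an API key:

```javascript
// Stand-in for `await openai.chat.completions.create({ ...params, stream: true })`:
// an async iterable of chunks, each carrying a token in choices[0].delta.content.
async function* fakeCompletionStream() {
  const pieces = ["package main\n", "func main() {", "}\n"];
  for (const text of pieces) {
    yield { choices: [{ delta: { content: text } }] };
  }
}

// Write each piece to stdout as it arrives, while also collecting the
// full text (handy for the -o option, which writes to a file afterwards).
async function streamToStdout(stream, write = (s) => process.stdout.write(s)) {
  let full = "";
  for await (const chunk of stream) {
    const piece = chunk.choices[0]?.delta?.content ?? "";
    write(piece);
    full += piece;
  }
  return full;
}

streamToStdout(fakeCompletionStream());
```

Collecting the full text alongside printing it is also what makes the file-output feature cheap: the same loop can feed both stdout and the output file.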

Planned Features

Right now, the program parses each source file individually, with no shared context. So if a file references another, it wouldn't be reflected in the output. I'd like to enable it to have that context eventually. Like I mentioned, another feature I want to add is writing the AI's reasoning behind the translation to stdout but leaving it out of the output file. I'd also like to add some of the other optional features, like options to specify the AI model to use, the API key to use, and reading that data from a .env file in the same directory.

That's about it for this post. I'll be writing more in the coming weeks.
