A brief discussion on how to use node to improve work efficiency
This article walks through a real example of using node at work to improve efficiency. I hope it will be helpful to you!
One of our projects depends on an external file maintained by another team. That team builds it with Jenkins and pushes the build product to [Amazon S3](aws.amazon.com/pm/serv-s3/…); we then have to download the file from S3 manually and copy it into the project. The entire process can be automated.
We also ran into a serious problem: the path of the build product we need in S3 looks like 'a/b//c/', and the extra '/' is actually a folder literally named '/'. The Windows S3 Browser recognizes this folder without trouble, but on a Mac, presumably because '/' is treated as the file separator, several GUI tools fail to recognize the directory. Mac developers therefore had to download the product from a Windows virtual machine, a wasteful and tedious process.
Since Amazon provides API access, I decided to implement a script to handle downloading the updates.
Without the script: jenkins builds → manually find and download the product from S3 → copy it into the project

With the script: jenkins builds → product name → execute the script

The manual steps disappear, and the '/' bug is sidestepped entirely.
Here we use the aws-sdk package provided by Amazon, creating an S3 client with an accessKeyId and secretAccessKey:
```js
import S3 from "aws-sdk/clients/s3";

const s3 = new S3({
  credentials: { accessKeyId, secretAccessKey },
});
```
aws-sdk exposes interfaces for creating, deleting, modifying and querying buckets and files. Since we can learn the name of the product Jenkins built ahead of time, we only need to download the file by its name and location:
```js
const rs = s3
  .getObject({ Bucket: "your bucket name", Key: "file dir + path" })
  .createReadStream();
```
Bucket is the bucket the file is stored in, and Key is the file's path inside S3; the full key is equivalent to the directory name plus the file name.
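Because an S3 key is just a string, the '/'-named folder from earlier simply contributes an extra slash to the key; no local path handling is involved. A minimal sketch (buildKey is a hypothetical helper, not part of aws-sdk):

```js
// Hypothetical helper: assemble an S3 object key from folder segments.
// S3 keys are plain strings, so a folder literally named "/" just shows
// up as an empty segment and therefore a double slash in the key.
function buildKey(...segments) {
  return segments.join("/");
}

// The "/"-named folder inside "a/b" appears as an empty segment:
const key = buildKey("a/b", "", "c", "build.tar.gz");
// key === "a/b//c/build.tar.gz"
```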
getObject gives us a ReadStream, which we can write straight to a local file with Node.js:
```js
const ws = fs.createWriteStream(path.join(__dirname, outputfilename));
rs.pipe(ws);
```
The archive can be decompressed directly with the node-tar package. Install it first:

```shell
npm install tar
```
tar.x is an alias of tar.extract. We can pipe the download stream straight into it, so there is no need to save the original .tar file at all:
```diff
- const ws = fs.createWriteStream(path.join(__dirname, outputfilename));
- rs.pipe(ws);
+ rs.pipe(tar.x({ C: path.join(__dirname, outputfilename) }));
```
The pipe call returns the destination stream, so we can listen for its finish event to run the follow-up steps:
```js
const s = rs.pipe(tar.x({ C: path.join(__dirname, outputfilename) }));
s.on("finish", () => {
  // do something ...
});
```
The extracted archive contains subfolders whose files we need at the top level, so the next step is to flatten the directory.
For this we use the fs APIs. They come in two flavors: synchronous APIs, whose function names end in Sync, and asynchronous ones. The asynchronous functions default to the error-first callback style, while fs/promises provides Promise-style equivalents; use whichever fits.
Since our directory has only one level of nesting, we flatten just one layer; deeper trees could be handled with recursion:
```js
async function flatten(dir) {
  const fileAndDirs = await fsp.readdir(dir);
  const dirs = fileAndDirs.filter((i) =>
    fs.lstatSync(path.join(dir, i)).isDirectory()
  );
  for (const innerDir of dirs) {
    const innerFile = await fsp.readdir(path.join(dir, innerDir));
    await Promise.all(
      innerFile
        .filter((item) => fs.lstatSync(path.join(dir, innerDir, item)).isFile())
        .map((item) =>
          fsp.rename(path.join(dir, innerDir, item), path.join(dir, item))
        )
    );
    remove(path.join(dir, innerDir));
  }
}
```
After flattening, the files are copied into the project directory. Copying only needs the copyFile API; unwanted files are filtered out with an exclude blacklist expressed as a regular expression:
```js
async function copy(from, to) {
  const files = await fsp.readdir(from);
  await Promise.all(
    files
      .filter((item) => !exclude.test(item))
      .map((item) =>
        fsp.copyFile(path.join(from, item), path.join(to, item))
      )
  );
}
```
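The exclude pattern itself depends on the project; a hypothetical blacklist that skips dotfiles, source maps and the original archive could look like this:

```js
// Hypothetical blacklist: dotfiles, source maps and .tar archives.
const exclude = /^\.|\.map$|\.tar$/;

const files = [".DS_Store", "app.js", "app.js.map", "bundle.tar", "index.html"];
const kept = files.filter((f) => !exclude.test(f));
// kept: ["app.js", "index.html"]
```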
In real use, configuration should be kept separate from code. The accessKeyId and secretAccessKey must be configured by each user, so they live in a standalone config file that each user creates locally; the main program just reads it:
```js
// config.js
module.exports = {
  s3: {
    accessKeyId: 'accessKeyId',
    secretAccessKey: 'secretAccessKey',
  },
};

// main.js
const configPath = path.resolve(__dirname, 'config.js');
if (!fs.existsSync(configPath)) {
  console.error('please create a config file');
  return;
}
const config = require(configPath);
```
The name of the file to download changes on every run, so hard-coding it in a file would mean constant edits; instead it is passed in as an argument when the script is called.
In Node.js, arguments can be read via process.argv. argv is an array whose first element is the path of the node executable and whose second is the path of the script being run; custom arguments start at the third element, so we read from process.argv[2]. For complex command-line needs there are argument-parsing libraries such as commander, but since this example takes only one parameter we read it directly:
```js
const filename = process.argv[2];
if (!filename) {
  console.error('please run script with params');
  return;
}
```
With that, a usable command-line tool is complete.
Node.js can power back ends, but its greatest significance is by no means writing servers in JS. For front-end developers, Node.js's real value is as an extremely practical tool: front-end tooling such as Webpack, rollup and dev-server is all value created by node. Thanks to NPM's rich ecosystem, node lets you develop scripts quickly, and it is very effective against the toolchain and efficiency problems that come up in development; consider reaching for it the next time you hit one at work.
The above is the detailed content of A brief discussion on how to use node to improve work efficiency. For more information, please follow other related articles on the PHP Chinese website!