Lambda LLRT

WBOY
2024-08-21

Disclaimer: all content posted here is intended to consolidate and maintain my own knowledge, and I hope it can help you on your learning journey too.
This post is live and will be updated periodically.
If you find any flaws or notice that something is missing, help me improve :)


Have you ever stopped to think about how much more is being demanded of us in terms of application performance?
Every day we are pushed to make our applications faster, which leads us to evaluate solutions and architectures that can get us there.


So the idea of this short post is to present a new development that can give us a considerable performance boost in serverless applications on AWS Lambda. That solution is LLRT JavaScript.

LLRT JavaScript (Low Latency Runtime JavaScript)

A new JavaScript runtime is being developed by the AWS team.
It is currently experimental, and there are efforts to release a stable version by the end of 2024.

See the description that AWS presents:

LLRT (Low Latency Runtime) is a lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications. LLRT offers up to over 10x faster startup and up to 2x overall lower cost compared to other JavaScript runtimes running on AWS Lambda
It's built in Rust, utilizing QuickJS as JavaScript engine, ensuring efficient memory usage and swift startup.

Note that they aim to deliver startup up to 10x faster than other JS runtimes.

All of this is built with Rust, a high-performance language, and QuickJS, a lightweight, high-performance JavaScript engine designed to be small, efficient, and compatible with the latest ECMAScript specification, including modern features like classes, async/await, and modules. Furthermore, LLRT takes an approach that does not use a JIT compiler: instead of allocating resources for Just-In-Time compilation, it conserves those resources for executing the code itself.

But don't worry, not everything is rosy; it's all about tradeoffs (horrible pun, I know lol).
So there are some important points to consider before thinking about adopting LLRT JS. See what AWS says:

There are many cases where LLRT shows notable performance drawbacks compared with JIT-powered runtimes, such as large data processing, Monte Carlo simulations or performing tasks with hundreds of thousands or millions of iterations. LLRT is most effective when applied to smaller Serverless functions dedicated to tasks such as data transformation, real time processing, AWS service integrations, authorization, validation etc. It is designed to complement existing components rather than serve as a comprehensive replacement for everything. Notably, given its supported APIs are based on Node.js specification, transitioning back to alternative solutions requires minimal code adjustments.

Also, note that LLRT JS is not a replacement for Node.js, nor will it ever be.

See:

LLRT only supports a fraction of the Node.js APIs. It is NOT a drop in replacement for Node.js, nor will it ever be. Below is a high level overview of partially supported APIs and modules. For more details consult the API documentation.


Evaluative Tests

Taking into account the applicability mentioned by AWS itself, we will run two tests to evaluate and compare LLRT with Node.js: one calculating prime numbers and one making a simple API call.

Why use prime number calculation?
The answer is that identifying prime numbers requires heavy processing: many mathematical operations (divisions) to verify primality, the unpredictable distribution of primes, and complexity that grows with the size of the numbers. These factors combine to make primality checking and the search for primes a computationally intensive task, especially at large scales.


Hands on then...

Create the first Lambda function with Node.js:

[image: Lambda console — creating the Node.js function]

Now, let's create the function with LLRT JS. I chose to use the layer option.

Create the layer:
[image: Lambda console — creating the layer]

Then create the function:
[image: Lambda console — creating the LLRT JS function]

And add this layer to the LLRT JS function created:
[image: Lambda console — attaching the layer to the function]

For the prime number test, we will use the following code:

let isLambdaWarm = false
export async function handler(event)  {

    const limit = event.limit || 100000;  // Set a high limit to increase the complexity
    const primes = [];
    const startTime = Date.now()
    const isPrime = (num) => {
        if (num <= 1) return false;
        if (num <= 3) return true;
        if (num % 2 === 0 || num % 3 === 0) return false;
        for (let i = 5; i * i <= num; i += 6) {
            if (num % i === 0 || num % (i + 2) === 0) return false;
        }
        return true;
    };

    for (let i = 2; i <= limit; i++) {
        if (isPrime(i)) {
            primes.push(i);
        }
    }

    const endTime = Date.now() - startTime

    const response = {
        statusCode: 200,
        body: JSON.stringify({
            executionTime: `${endTime} ms`,
            isLambdaWarm: `${isLambdaWarm}`
        }),
    };


    if (!isLambdaWarm) { 
        isLambdaWarm = true
    }

    return response;
};
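As a quick sanity check outside Lambda, the same trial-division primality test can be run locally with plain `node`; the counts below match the known prime-counting values π(100) = 25 and π(100000) = 9592, confirming the workload does what we expect before we benchmark it:

```javascript
// Local sanity check of the same trial-division primality test used in the
// handler above (no Lambda needed).
const isPrime = (num) => {
    if (num <= 1) return false;
    if (num <= 3) return true;
    if (num % 2 === 0 || num % 3 === 0) return false;
    for (let i = 5; i * i <= num; i += 6) {
        if (num % i === 0 || num % (i + 2) === 0) return false;
    }
    return true;
};

// Count primes up to a limit, exactly as the handler's main loop does.
const countPrimes = (limit) => {
    let count = 0;
    for (let i = 2; i <= limit; i++) {
        if (isPrime(i)) count++;
    }
    return count;
};

console.log(countPrimes(100));    // 25 (π(100))
console.log(countPrimes(100000)); // 9592 (π(100000))
```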


And for API testing, we will use the code below:

let isLambdaWarm = false
export async function handler(event) {

  const url = event.url || 'https://jsonplaceholder.typicode.com/posts/1'
  console.log('starting fetch url', { url })
  const startTime = Date.now()

  let resp;
  try {
    const response = await fetch(url)
    const data = await response.json()
    const endTime = Date.now() - startTime
    resp = {
      statusCode: 200,
      body: JSON.stringify({
        executionTime: `${endTime} ms`,
        isLambdaWarm: `${isLambdaWarm}`
      }),
    }
  }
  catch (error) {
    resp = {
      statusCode: 500,
      body: JSON.stringify({
        message: 'Error fetching data',
        error: error.message,
      }),
    }
  }

  if (!isLambdaWarm) {
    isLambdaWarm = true
  }

  return resp;
};
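Both handlers use the same warm-start detection trick: a module-level flag that is `false` on the first invocation of an execution environment and `true` on every subsequent (warm) invocation of that same environment. A minimal sketch of just that pattern (the handler here is a plain function, since the flag logic is independent of async):

```javascript
// Minimal sketch of the warm-start detection pattern used in both handlers:
// module-level state survives across invocations of the same (warm) sandbox.
let isLambdaWarm = false;

function handler() {
    const body = { isLambdaWarm };
    if (!isLambdaWarm) {
        isLambdaWarm = true; // flips once, after the first (cold) invocation
    }
    return { statusCode: 200, body: JSON.stringify(body) };
}

const first = JSON.parse(handler().body).isLambdaWarm;
const second = JSON.parse(handler().body).isLambdaWarm;
console.log(first, second); // false true
```

In Lambda, the flag resets whenever a new execution environment is created, which is exactly what lets us separate the cold-start sample from the warm-start samples in the tests below.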

Test results

The objective here is more educational, so our sample for each test consists of 15 warm-start data points and 1 cold-start data point.

Memory consumption

LLRT JS - in both tests, the same amount of memory was consumed: 23 MB.

NodeJS - for the prime number test, Node.js started at 69 MB and went up to 106 MB.
For the API test, the minimum was 86 MB and the maximum was 106 MB.

Execution time
After removing the outliers, this was the result:

[charts: execution-time results for the prime number test and the API test]

Final report

Memory consumption - LLRT made better use of the available memory than Node.js.

Performance - in the high-processing scenario, Node.js maintained much better performance than LLRT, in both cold and warm starts.
In the lower-processing scenario, LLRT had a certain advantage, especially on cold starts.

Let's wait for the final release and hope for even more significant improvements. In the meantime, it's great to see the flexibility of JS and how much it can still deliver to us.


I hope you enjoyed it and that it helped you improve your understanding of something, or even opened paths to new knowledge. I count on you for criticism and suggestions so we can improve the content and keep it up to date for the community.
