
Provides a response to an HTTP request after receiving another request

PHPz
2024-02-09 13:06:18


When developing web applications, we often need to handle HTTP requests and provide corresponding responses. When a request arrives, we must generate an appropriate response based on its content and purpose, which may involve querying a database, processing form data, calling other APIs, and so on. In this article, we explore how to handle an HTTP request whose response depends on another, later request, in order to provide better interaction and user experience. Whether you are building a simple static web page or a complex web application, it is important to understand how to handle HTTP requests and generate responses.

Question content

My use case is to provide a response to an HTTP request after receiving another request from a separate server.

  1. I want to do this in the best way possible while keeping scalability in mind.
  2. We use Golang 1.19 and Gin framework.
  3. The server will have multiple Pods, so an in-process channel will not work.
  4. All requests will time out; the initial request will time out after 60 seconds.

My current solution is to use a shared cache, where each Pod constantly checks the cache. I believe I can optimize this with channels, so the system is notified of any completed responses rather than checking the cache entries one by one.

I would also like to know how to implement it in other programming languages.

PS: This is a design question, and I have some reputation to offer as a bounty, so I'm asking here. If the question is unclear, please feel free to edit it.

Solution

TL;DR

Problem Description

So, assuming your server application is named server_app, suppose there are 3 pods:

     +---------------------+
     |  server_app_service |
     +---------------------+
     |  server_app_pod_a   |
     |  server_app_pod_b   |
     |  server_app_pod_c   |
     +---------------------+

Your service receives a request, call it request "a", and decides to pass it to server_app_pod_a. Now server_app_pod_a forwards the request to some gateway and waits for some kind of **notification** to continue processing the client's response. As you know, there is no guarantee that when the gateway sends request "b", the service will route it to server_app_pod_a again. And even if it did, state management of the application would become a difficult task.

Message passing

As you may have noticed, I bolded the word "notification" in the previous paragraph. That's because, if you really think about it, request "b" looks more like a notification carrying some message than a request for a resource. So my first choice would be a message queue like Kafka (as you know, there are many of them). The idea is that if you can define an algorithm that computes a unique key for each request, you can then be notified of the result in the exact same pod. This way, state management becomes simpler, and the chance of receiving the notification in the same pod is higher (of course this depends on many factors, such as the state of the message queue). Take a look at your question:

  1. I want to do this in the best way possible while keeping scalability in mind.

Of course, you can use a message queue like Kafka to scale both the queue and the application, and to reduce data loss.
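The unique-key idea can be sketched as follows. This is an assumption-laden illustration, not a definitive scheme: the key fields (`clientID`, `resource`, `nonce`) and the partition count are made up for the example. The point is that a deterministic key, used as the message key, maps to a fixed partition; if each pod consumes a fixed set of partitions, the pod that issued the request also receives the notification.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// numPartitions is an assumed partition count for the notification topic.
const numPartitions = 12

// requestKey derives a deterministic key from fields the gateway will
// echo back in its notification. The exact fields are an assumption;
// use whatever uniquely identifies the original request in your system.
func requestKey(clientID, resource, nonce string) [32]byte {
	return sha256.Sum256([]byte(clientID + "|" + resource + "|" + nonce))
}

// partitionFor maps a key to a topic partition, mimicking key-based
// partitioning: the same key always lands on the same partition.
func partitionFor(key [32]byte) int {
	return int(binary.BigEndian.Uint32(key[:4]) % numPartitions)
}

func main() {
	k := requestKey("client-42", "/orders/7", "nonce-1001")
	fmt.Println("partition:", partitionFor(k))
}
```

A real Kafka producer applies its own key-to-partition hashing, so in practice you would pass the key bytes as the message key rather than computing the partition yourself.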

  1. All requests will time out, the initial request will time out after 60 seconds.

Depending on how you manage timeouts in your code base, using Go's context package is a good idea.

I would also like to know how to implement it in other programming languages.

Using a message queue is a general idea that applies to almost any programming language, but depending on the language's programming paradigm and its specific libraries and tools, there may be other ways to solve this problem. For example, in Scala, if you use Akka (which provides an actor-model programming paradigm), you can use so-called akka-cluster-sharding to deal with this problem. The idea is simple: there is a supervisor that knows the exact location and state of its subordinate actors, so when it receives a message it knows to which actor to forward the request (we are talking about actor-model programming). In other words, it can be used to share state among actors spawned across a cluster, whether on the same machine or not. But as a personal preference, I would not go for language-specific communication and would stick with the general idea, as the language-specific approach might cause problems in the future.

Summary

That was a long enough explanation :). To see what I'm talking about, let's trace the exact same scenario, but with the new communication model:

  1. The client sends request "a" to the server_app service.
  2. The service selects one of the pods (e.g. server_app_pod_b) to handle the request.
  3. The pod then defines a key for the request, passes it along with the request to the gateway, and waits for a message with that key to be published on the queue.
  4. The gateway does what it is supposed to do and sends the message to the message queue using that key.
  5. The exact same pod, server_app_pod_b, receives the message with the key, extracts its data, and continues processing the client's request.

There may be other ways to solve this problem, but this is the approach I would take. Hope it helps!


Statement:
This article is reproduced from stackoverflow.com. If there is any infringement, please contact admin@php.cn to have it deleted.