In my production environment, I use fasthttp to make requests to third-party services. During peak traffic, the fasthttp client exhibits elevated latency, with some delays exceeding several seconds. To investigate, I ran a stress test and found that latency grows as the number of connections increases.
Fasthttp version: v1.55.0
Load test environment:
Model Name: MacBook Pro
Model Identifier: MacBookPro18,3
Model Number: MKGP3CH/A
Chip: Apple M1 Pro
Total Number of Cores: 8 (6 performance and 2 efficiency)
Memory: 16 GB
Simulating a Third-Party Service with Code:
package main

import (
	"log"
	"time"

	"github.com/valyala/fasthttp"
)

var (
	strContentType = []byte("Content-Type")
	strApplication = []byte("application/json")
	body           = []byte(`{"message": "Hello, world!"}`)
)

func main() {
	// Second listener from the original report; it is started with a nil
	// handler, so any request sent to it will fail.
	go func() {
		if err := fasthttp.ListenAndServe("localhost:7001", nil); err != nil {
			log.Fatalf("Error in ListenAndServe: %v", err)
		}
	}()
	if err := fasthttp.ListenAndServe("localhost:8001", handler); err != nil {
		log.Fatalf("Error in ListenAndServe: %v", err)
	}
}

// handler responds with a small JSON body and logs per-request timing.
func handler(ctx *fasthttp.RequestCtx) {
	begin := time.Now()
	ctx.Response.Header.SetCanonical(strContentType, strApplication)
	ctx.Response.SetStatusCode(fasthttp.StatusOK)
	ctx.Response.SetBody(body)
	log.Printf("%v | %s %s %v %v",
		ctx.RemoteAddr(),
		ctx.Method(),
		ctx.RequestURI(),
		ctx.Response.Header.StatusCode(),
		time.Since(begin),
	)
}
Code Snippet for Simulating Third-Party Service Calls
As the number of connections increases, latency rises, even though the third-party service itself still responds quickly; in this example its response time is measured in microseconds (µs).
I used flame graphs to help with the analysis. Most of the time appears to be spent in system calls. What can I do to reduce response latency in this situation?
At 2000 connections I still see a 99th-percentile latency of 201.06ms. Is that not good? It makes sense that latency increases as the number of connections grows, since both wrk and fasthttp start to take up more CPU. Did you expect anything else here?
Results obtained using the load testing tool at 1, 10, 50, 100, 500, 1000, 1500, and 2000 connections (output not reproduced here).