Conversation


@davidtrihy-genesys (Collaborator) commented Sep 2, 2025

| Instructions | L1 Instr Cache Miss | L2/L3 Instr Cache Miss | Data Reads (RAM Access) | L1 Data Read Miss | L2/L3 Data Read Miss | Data Writes (RAM Access) | L1 Data Write Miss | L2/L3 Data Write Miss | C Function |
|---|---|---|---|---|---|---|---|---|---|
| 4,536 (0.0%) | 675 (0.7%) | 33 (0.3%) | 1,526 (0.0%) | 211 (0.0%) | 0 | 1,180 (0.0%) | 86 (0.0%) | 1 (0.0%) | start_async_http_req_v2 |
| 5,072 (0.0%) | 590 (0.6%) | 37 (0.3%) | 1,728 (0.0%) | 104 (0.0%) | 0 | 1,136 (0.0%) | 0 | 0 | _resume_async_http_req_v2 |
| 82,929 (0.0%) | 1,123 (1.1%) | 59 (0.5%) | 21,115 (0.0%) | 219 (0.0%) | 1 (0.0%) | 3,253 (0.0%) | 147 (0.1%) | 0 | start_async_http_req_v1 |
| 6,859 (0.0%) | 667 (0.7%) | 38 (0.3%) | 2,071 (0.0%) | 100 (0.0%) | 0 | 1,672 (0.0%) | 0 | 0 | _resume_async_http_req |

There is an increase in instructions in libcurl and some other areas, but it's still a significant reduction overall, which tracks with the improvement I see in how this performs when connecting.

These are the rest client changes for 3.4.


*p = timeout_ms;

return 0;

@bcnewlin (Collaborator) commented Sep 3, 2025


Feedback from ChatGPT:

timer_cb writes timeout_ms into a variable, but the multi-socket path requires you to arm/cancel a real timer when the callback is called:

- timeout_ms == -1 → delete the timer
- timeout_ms == 0 → call curl_multi_socket_action(..., CURL_SOCKET_TIMEOUT, 0) as soon as possible

Make sure the surrounding code actually honors those semantics; the callback alone is not enough. (This is from the libcurl docs.)

@davidtrihy-genesys (Author) replied:


This is just a simple timer that gets decremented on each call of multi_socket_action; I track it with a static variable in the source. If we wanted this to run multiple concurrent requests or attach running handles we'd need to re-evaluate that and create some extra context, but for now it works fine, and it times out when the configured curl timeout expires.
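For reference, a minimal sketch (hypothetical names, not the code in this PR) of a CURLMOPT_TIMERFUNCTION callback plus the kind of static countdown described here, honoring the -1 / 0 semantics quoted above:

```c
#include <curl/curl.h>

/* Hypothetical module-level state: milliseconds until libcurl wants
 * curl_multi_socket_action(CURL_SOCKET_TIMEOUT) called; -1 = no timer. */
static long remaining_ms = -1;

/* CURLMOPT_TIMERFUNCTION callback: libcurl passes -1 to delete the timer,
 * 0 to request an immediate timeout, or a positive number of milliseconds. */
static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
	(void)multi; (void)userp;
	remaining_ms = timeout_ms;
	return 0;
}

/* Called from the I/O loop each tick: decrement the countdown and fire the
 * timeout action once it reaches zero (immediately when timeout_ms was 0). */
static void timer_tick(CURLM *multi, long elapsed_ms)
{
	int running = 0;

	if (remaining_ms < 0)
		return;                 /* no timer armed */

	remaining_ms -= elapsed_ms;
	if (remaining_ms <= 0) {
		remaining_ms = -1;      /* disarm until libcurl re-arms it */
		curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
	}
}
```

The callback would be registered with `curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb)`; a single static like this only supports one in-flight request, which matches the limitation noted in the reply.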

w_curl_multi_setopt(multi_handle, CURLMOPT_MAXCONNECTS, (long) max_connections);

status = CURL_NONE;
w_curl_easy_setopt(handle, CURLOPT_PREREQFUNCTION, prereq_callback);
Reviewer (Collaborator) commented:


This is only available in libcurl >= 7.80.0. Is that a concern? Should a check be added?

@davidtrihy-genesys (Author) replied:


libcurl 7.80.0 is nearly four years old now. On AL2 we're on libcurl 8.3.0, although CentOS 7, which is the base, might have a different version. In the compile targets for 3.6 it compiled cleanly (my recent run got cancelled so I'd need to run it again), and it didn't complain about this, which means the version OpenSIPS 3.6 compiles against has it, and I know we have it for AL2.
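If a guard were wanted, a compile-time check could look roughly like the sketch below (set_prereq_cb is an illustrative wrapper, not code from this PR; 7.80.0 corresponds to LIBCURL_VERSION_NUM 0x075000):

```c
#include <curl/curl.h>

/* Illustrative wrapper: only set CURLOPT_PREREQFUNCTION when the libcurl
 * we are building against is new enough (>= 7.80.0) to know about it. */
static CURLcode set_prereq_cb(CURL *handle,
		int (*cb)(void *clientp, char *primary_ip, char *local_ip,
		          int primary_port, int local_port))
{
#if LIBCURL_VERSION_NUM >= 0x075000
	return curl_easy_setopt(handle, CURLOPT_PREREQFUNCTION, cb);
#else
	(void)handle; (void)cb;
	return CURLE_OK;   /* option unavailable at build time: skip quietly */
#endif
}
```

The call site in the diff would then become `set_prereq_cb(handle, prereq_callback)`, keeping the version check in one place.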

goto cleanup; \
} \
} while (0)


@bcnewlin (Collaborator) commented Sep 3, 2025


From ChatGPT:

Macros like w_curl_multi_setopt reference rc and mrc, and w_curl_share_setopt references src. Those variables must be declared in scope before each macro use. From the diff, they are declared in nearby functions, but this is fragile; consider wrapping them in inline helpers instead of macros to avoid hidden dependencies.
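For what it's worth, a sketch of the inline-helper alternative could look like this (names are hypothetical, fprintf stands in for the module's logging, and the caller still has to check the return value and jump to its own cleanup label, since an inline function cannot `goto cleanup` on the caller's behalf):

```c
#include <stdio.h>
#include <curl/curl.h>

/* Hypothetical replacement for the w_curl_multi_setopt macro: the result
 * variable lives inside the helper instead of being an `mrc` that every
 * call site must have in scope. */
static inline CURLMcode set_multi_opt_long(CURLM *multi, CURLMoption opt,
		long val)
{
	CURLMcode mrc = curl_multi_setopt(multi, opt, val);

	if (mrc != CURLM_OK)
		fprintf(stderr, "curl_multi_setopt(%d) failed: %s\n",
		        (int)opt, curl_multi_strerror(mrc));
	return mrc;
}
```

A call site would then read `if (set_multi_opt_long(multi_handle, CURLMOPT_MAXCONNECTS, (long)max_connections) != CURLM_OK) goto cleanup;`, which keeps the early exit explicit instead of hidden inside the macro.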

@davidtrihy-genesys (Author) replied:


I just followed the pattern that was already established by OpenSIPS in this source file. I tend to agree that inline functions are probably better in general, but for this it's either a macro or explicitly repeating the code inside the macro everywhere it's called, which just adds extra lines of source. It's probably best to do one pre-arm of all this stuff and a single if check.

Reviewer (Collaborator) commented:


Should curl_share_cleanup be getting called for any open connections on mod_destroy? I'm not sure if it matters since the whole application is stopping at that point, but we don't currently block shutdown on open curl connections and I don't think OpenSIPS does either.

@davidtrihy-genesys (Author) replied:


I have to add a proper mod_destroy function anyway to clean up some memory, but I'm not sure a curl_share_cleanup is wise: the file descriptors could be read from other processes, and the TM module can trigger on a timeout in the resume, which then does the curl_share_cleanup. Doing it here as well could cause a crash. The current implementation doesn't do any kind of handle cleanup on mod_destroy, but it's a good point, especially if OpenSIPS autoscaling is being used.
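To make the trade-off concrete, a rough sketch of the mod_destroy being described (hypothetical names; plain free() stands in for the module's actual allocators):

```c
#include <stdlib.h>
#include <curl/curl.h>

/* Hypothetical module state. */
static CURLSH *share_handle;
static void *mod_ctx;

/* Free the module's own memory on shutdown, but deliberately leave the
 * shared curl handle alone: its descriptors may still be in use by other
 * worker processes, and the timeout/resume path may already perform
 * curl_share_cleanup() itself, so a second cleanup here could crash. */
static void mod_destroy(void)
{
	free(mod_ctx);
	mod_ctx = NULL;

	(void)share_handle;   /* intentionally not curl_share_cleanup()'d */
}
```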

@davidtrihy-genesys force-pushed the TELECOM-11880-3.4 branch 4 times, most recently from 4d2a28c to ec74262 on September 8, 2025 at 15:58.