Is the prebuilt Envoy v1.17 able to run on an x86_64 CPU? #15235
Comments
What's the output of
Hi @rojkov, here it is:
BTW, is there any minimum memory requirement for v1.17 or v1.16? And if the server doesn't have enough resources for that, is there any workaround to make it work? Any related docs would be appreciated as well! Thanks.
Hm, the memory situation seems to be OK.
The older releases were built with gperftools' allocator; you can build against it by adding the corresponding build option. But I wonder what your CPU model is. Google's tcmalloc uses assembly for restartable sequences.
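Assuming the option in question is the gperftools define from Envoy's Bazel build (the exact flag may differ by version, so verify it against bazel/README.md in your checkout), the invocation would look roughly like:

```sh
# Sketch: build Envoy against the gperftools allocator instead of Google's tcmalloc.
# The --define value is an assumption based on Envoy's documented Bazel options.
bazel build --define tcmalloc=gperftools //source/exe:envoy-static
```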
Thanks, will give it a try.
model name : Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz
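For reference, that line is the format /proc/cpuinfo uses; one way to pull it out:

```sh
# Print the first CPU model string reported by the kernel.
grep -m1 'model name' /proc/cpuinfo
```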
I thought you were running inside QEMU or something. I've found an Intel(R) Xeon(R) Gold 6138F CPU @ 2.00GHz in our lab. It's exactly the same model as yours plus an Omni-Path chip integrated in the same package. At least the Docker release build works nicely.
Everything should work; it's not a hardware compatibility issue.
OK, thanks @rojkov. I've built an Envoy with
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
@rojkov we've seen a few more recent reports here on arm64.
Seeing the same error when running on arm64 with the 1.17 and 1.18 Docker images.
The 1.16 images work fine.
Same issue on a Raspberry Pi 4 with 8GB RAM. Starting with the 1.17.x Docker images (tcmalloc enabled), they no longer start.
The issue google/tcmalloc#33 might be relevant as well. However, it was already closed, so I guess tcmalloc should already be compatible with arm64.
The problem seems to stem from the fact that tcmalloc tries to allocate 1073741824 bytes (1 GiB) of contiguous memory. It does so by getting address hints from the kernel when calling mmap(). You might want to try building Envoy with TCMALLOC_SMALL_BUT_SLOW defined.
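To confirm that it is indeed a failing large mmap(), one illustrative diagnostic (assuming strace is installed and Envoy lives at /usr/local/bin/envoy, as in the official Docker images; adjust the path otherwise):

```sh
# Trace only mmap syscalls while Envoy starts; on affected arm64 systems the
# large hinted mappings requested by tcmalloc show up as failing mmap calls
# before Envoy even prints its version.
strace -f -e trace=mmap /usr/local/bin/envoy --version
```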
If somebody can give me access to an arm64 machine I'll try it myself. Or please wait until I get a working RPi4 setup.
@rojkov You could also try to cross-compile it using the Docker images and upload the resulting Envoy executable for testing. I tried to cross-compile it on my Mac but so far I have not been successful (compilation starts but fails). To get the cross compilation to start I adjusted the build configuration. Note on the error above: using environment variables does NOT define TCMALLOC_SMALL_BUT_SLOW.
I was able to compile it on my Raspberry Pi 4; however, I could not define TCMALLOC_SMALL_BUT_SLOW for tcmalloc.
I guess you can add the define to the compiler flags. Or alternatively edit bazel/repositories.bzl to point at a locally patched tcmalloc.
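A minimal sketch of the first suggestion, assuming the macro is picked up when the vendored tcmalloc sources are compiled as part of the Envoy build:

```sh
# Pass TCMALLOC_SMALL_BUT_SLOW as a preprocessor define to every C/C++ compile
# action in the build, including the bundled tcmalloc sources.
bazel build --copt=-DTCMALLOC_SMALL_BUT_SLOW //source/exe:envoy-static
```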
Thank you very much. I am now rebuilding it. Sadly I wasn't able to reuse the cache since it was somehow corrupted. Will get back with the results in a few hours 😅
I still receive the same error with TCMALLOC_SMALL_BUT_SLOW defined.
Hm, I thought mmap'ing 32M shouldn't be a problem for an RPi4 even in the case of very fragmented memory. I'll try to reproduce it next week.
Seems like the kernel for Raspberry Pi OS is not compiled with THP support:
This is how to enable THP on a Raspberry Pi 4B (recompile the kernel): http://www.rkoucha.fr/raspberry_pi/rpi4b_hp_config.html
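For reference, a quick way to check whether the running kernel has THP support (paths assume a typical Linux sysfs layout and a Debian-style kernel config location; they vary by distribution):

```sh
# If THP is compiled in, this sysfs node exists and shows the current policy.
cat /sys/kernel/mm/transparent_hugepage/enabled
# Alternatively, check the kernel config shipped by the distribution.
grep TRANSPARENT_HUGEPAGE "/boot/config-$(uname -r)"
```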
Right. Finally I've got an RPi4 running an AArch64 distro and it lacks THP. No wonder Envoy can't start there. I'll do some experiments and update the docs.
Would it be possible to additionally upload Envoy images that do not require THP support (i.e. still use gperftools' allocator)?
Turned out that on AArch64 the issue is not about THP (it's good to have, though). The problem is that tcmalloc assumes the wrong number of page table levels for the RPi4. Indeed, https://kernel.org/doc/Documentation/arm64/memory.txt says that on AArch64 virtual addresses for processes may be either 39-bit or 48-bit depending on the number of page table levels (3 and 4 respectively). tcmalloc assumes virtual addresses are 48-bit on AArch64 (the same size as on x86-64, unless you're on Intel Ice Lake CPUs) and calculates hints for mmap() accordingly, so mmap() fails. With this patch applied to tcmalloc

```diff
diff --git a/tcmalloc/internal/config.h b/tcmalloc/internal/config.h
index 93f979f..4f0a1c1 100644
--- a/tcmalloc/internal/config.h
+++ b/tcmalloc/internal/config.h
@@ -46,7 +46,7 @@ inline constexpr int kAddressBits =
 // According to Documentation/arm64/memory.txt of kernel 3.16,
 // AARCH64 kernel supports 48-bit virtual addresses for both user and kernel.
 inline constexpr int kAddressBits =
-    (sizeof(void*) < 8 ? (8 * sizeof(void*)) : 48);
+    (sizeof(void*) < 8 ? (8 * sizeof(void*)) : 39);
 #else
 inline constexpr int kAddressBits = 8 * sizeof(void*);
 #endif
```

Envoy starts to work on the RPi4. As far as I know the CPU used in the RPi4 supports 4-level page tables (PGD->PUD->PMD->PTE), so I guess the prebuilt Envoy can work too if the kernel is compiled with support for 48-bit virtual addresses. To quickly check the size of virtual addresses on your system you can run:
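A couple of illustrative ways to check, assuming a standard Linux procfs and a distribution-provided kernel config (not necessarily the exact command from the original comment):

```sh
# x86-64: /proc/cpuinfo reports the virtual address width directly.
grep -m1 'address sizes' /proc/cpuinfo
# arm64: the width is a kernel build-time choice; check CONFIG_ARM64_VA_BITS
# in the kernel config (location varies by distribution).
zgrep CONFIG_ARM64_VA_BITS /proc/config.gz 2>/dev/null || \
  grep CONFIG_ARM64_VA_BITS "/boot/config-$(uname -r)"
```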
…enable more eBPF/net stuff for Cilium - switch to 48-bit virtual addresses, so we're tcmalloc compatible - see cilium/cilium#17467 - see envoyproxy/envoy#15235 (comment)
Title: Tried to run the prebuilt Envoy (v1.17) on an Ubuntu 18.04 LTS (Bionic Beaver) server with an x86_64 CPU, but it failed with errors about the tcmalloc library
Description:
Based on the error message, I'm wondering whether this prebuilt Envoy works on my server. If not, does that mean I should build one manually?
Thanks in advance!