Manager crashes due to dependency issue on M1-chip machines #63
Comments
You may want to look at: https://docs.docker.com/build/building/multi-platform/
Same error on M2
Any plans to support https://docs.docker.com/build/building/multi-platform/?
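For reference, the multi-platform approach from those docs boils down to building and pushing a single tag that covers both architectures with buildx. A minimal sketch, assuming placeholder names for the builder, image tag, and build context (not the project's actual setup):

```sh
# One-time setup of a buildx builder that can target multiple platforms.
docker buildx create --use --name everest-multiarch

# Build for both amd64 and arm64 and push the combined manifest.
# Tag and context (.) are placeholders, not the project's real ones.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag ghcr.io/example-org/everest-manager:multiarch \
  --push .
```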
But the non-deprecated build-kit images are not working on x86. Looks like this is because the package versions that work for ...
This is actually kind of weird because it looks like the errors are with amd64, which is what the upstream CI is building. @catarial can you take this over for a bit now that I have the GitHub Actions piece all worked out and you have an M1 laptop?
As you can see, at the time I pulled from upstream, we are at:
And 1.4.2 seems like it built successfully, in case you want to compare working versus non-working logs. Although I will note that the PR number there was 56 instead of 61. @catarial I will let you take it from here.
CSMS starts up
I get a warning about the architecture for firestore
But is the ...
Why is it requesting ...? Note that I don't rebuild the firestore image since it is coming directly from an image. I only rebuild the "manager" and "gateway" images.
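If it helps to confirm what is actually being pulled, Docker can report the architecture a local image was resolved to, and can be told to pull a specific platform explicitly. A small sketch; the image name below is a placeholder for the firestore image the demo pulls, not its real reference:

```sh
# Show which OS/architecture the locally pulled image was resolved to.
docker image inspect --format '{{.Os}}/{{.Architecture}}' example/firestore-emulator:latest

# Force a specific platform if the default resolution picks the wrong one.
docker pull --platform linux/amd64 example/firestore-emulator:latest
```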
This is what I see in the manager logs after sending the profile
I tried the same thing on my amd64 laptop but saw different logs
I'm not sure if that has to do with the build process or something else.
Never mind, I just forgot to comment out the part after "waiting for CSMS".
Do you still want me to do this? I'm not seeing any crashes.
@catarial are you saying that the demo works (including the everest manager) on arm64?
I'm not sure if it works. Everything starts up, but nothing happens when I click the "Car Plugin" button in the UI. I don't actually know how to use the demo.
It seems to get stuck at PrepareCharging
@the-bay-kay can you verify this on your arm64? I checked the earlier messages, and it looks like the manager wouldn't even start up so I am kind of paranoid here.
And if it does work, you can start using it instead of having to work on your personal laptop!
@Abby-Wheelis maybe search for something specific to sockets on amd64 containers running on arm64 machines? Otherwise @catarial can try to reproduce, and then work on the buildkit build errors on Monday.
Investigation into this issue on my MacBook Air 2020 w/ M1 chip:
What is the line that is the source of the error? If it is not important, maybe we should create a new patch that comments it out throughout?
This is the line (from the stacktrace) that causes the error:
When I commented that line out, I was able to start charging, but it did go through "car paused" for a while. There are other calls of ...
The ... Checking into ...
Moving this off the milestone since we added it at the last minute. It would still be great to get it to work, but we are running out of time here.
It does look like Apple silicon and Intel Macs have different implementations for virtualization.
I'm going to experiment with trying some alternatives to Docker.
@catarial it looks like this worked for Abby after commenting out a single line - is that true for you too? If so, could we just patch that line out (or remove MULTICAST or something) for now? Docker on Mac runs on VirtualBox anyway. Longer-term, I think we should use multi-platform images (see above).
Yes, it works after I comment the line out. |
@catarial given that this is for a Python file, can you add this to the list of runtime patches (https://github.com/EVerest/everest-demo/blob/main/manager/demo-patch-scripts/apply-runtime-patches.sh)? We can later check in with the community and see if/why it is needed. Please also try out a few of the scenarios in #84 and verify that all of them work. @the-bay-kay can you also try with the line commented out on your work M1? I anticipate that many at CharIN will have recent Macs.
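For illustration only, a runtime patch along these lines could be as small as commenting out the offending call inside the container at startup. This is a sketch under assumptions: the target path and the `setsockopt` pattern are hypothetical placeholders, and the real patch should target the exact file and line from the stack trace above (however apply-runtime-patches.sh actually applies its patches):

```sh
# Hypothetical sketch: comment out the multicast-related line that fails on
# arm64 hosts, preserving indentation. Path and pattern are placeholders.
TARGET="/path/from/stacktrace/module.py"
PATTERN="setsockopt"
sed -i "s|^\( *\)\(.*${PATTERN}.*\)|\1# \2|" "$TARGET"
```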
Already working on it |
The Issue
When attempting to run any of the demos on an M1-chip machine, they fail due to a missing manifest within the Docker dependencies. Running the MaEVe-based demos results in the following error...
Full MaEVe Failure
And likewise, running CitrineOS results in the following...
CitrineOS Failures
This issue was originally reported in an internal issue. For those without access to the internal repository, below is a copy of the findings:
Findings
Issues with Apple Silicon
I've run into some issues running on Apple's M1 Chips. Below are the specs:
And, the subsequent error when attempting to spin up any of the demos:
From what I understand, this is an issue with one of our Docker dependencies. When attempting a hacky fix described here and composing locally, we get a bit further -- the script fails with the following:
This is what makes me believe the issue is with our dependencies, not just the platform declaration (as described in the linked thread). Many of the posts I've read have suggested this is an issue with MySQL (link), which doesn't seem relevant. These comments do have a common thread, however, suggesting that one of these dependencies is missing the `linux/arm64/v8` manifest. Looking at EVerest's packages (link), I do see `linux/amd64` listed within the OS / Arch tab... perhaps the packages in the `.yaml` need to be linked differently? I'll keep reading up on it, and will add updates on this / the PyTest issue as I find more details!

Likewise, running a Virtual Machine hosted on an M1 chip results in a similar failure (tested on UTM). Complete emulation of Linux machines results in a successful launch of the demos, but performance is hindered to the point that this is not a viable workaround.
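One quick way to check the missing-manifest theory is to ask the registry which platforms a given tag actually provides. A sketch; the image reference below is a placeholder for whichever EVerest / ghcr image is failing:

```sh
# List the platform entries in the image's manifest list.
docker buildx imagetools inspect ghcr.io/example-org/some-image:latest

# Or inspect the raw manifest directly.
docker manifest inspect ghcr.io/example-org/some-image:latest
```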
Potential Solutions
As suggested within the original thread, I believe that this issue stems from one of the packages missing an ARM64 dependency. I recall during a discussion with the team that this could be traced back to the internal ghcr for EVerest, though I cannot find a paper trail for those thoughts. Ideally, this fix should be as simple as finding and updating the correct package within either the ghcr or Docker manifests. I will continue to investigate further and report back what I find!
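If the root cause is indeed a tag that only carries `linux/amd64`, one possible way to publish the missing manifest once an arm64 build exists is to stitch the per-architecture images into a single multi-arch tag. A sketch with placeholder tags, not the project's real ones:

```sh
# Combine existing per-arch images into one multi-arch manifest on the registry.
docker buildx imagetools create \
  --tag ghcr.io/example-org/some-image:latest \
  ghcr.io/example-org/some-image:amd64 \
  ghcr.io/example-org/some-image:arm64
```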