BUILD_FOR_WEB_DEPLOYMENT, dockerized image #42
Conversation
Merging updates from sandialabs:master
|
Thank you for this @JStrader-Mirion ! It's been a few years since I built with BUILD_FOR_WEB_DEPLOYMENT. All the Docker and CMake changes look good. It might take me some days to get around to merging this into the main branch. If you have a specific goal you would like to achieve with this web version of InterSpec, or run into more troubles or other shortcomings, please don't hesitate to reach out to InterSpec@sandia.gov or create a GitHub issue. Hopefully I could save you a little time. Again, thank you. |
|
Thanks! I am sure I will have a few questions in the days to come when I get around to working out how to feed cloud-stored spectra in. This is an impressive body of work, and I am pleased just to be able to contribute. Note that a couple of commits have been added to the PR, as the --http-address option was broken, causing it to ignore web requests when running inside a container. |
|
Thank you for catching that --http-address argument! Will you have multiple users? For non-web deployments there is a kind-of API for this. -Will |
|
A couple of thoughts on passing an InterSpec instance a spectrum file from a cloud environment:
|
Maybe no more than five at a time. For now, I am considering incorporating this as a window into the masses of data we are collecting as part of our gamma-ray detector R&D work. I will 100% want to separate user sessions, so if this moves forward, I will dig into the suggested method (maybe leveraging the sessions to maintain separate SQLite databases). Part of the reason I wanted this dockerized was so that I could keep it sandboxed (which may open up cases where web-filesystem interplay is slightly less insane). I dig the idea of the spectrum URI, and as I develop this, I bet I'll find a use case. A good chunk of my efforts center around reducing the friction involved in analyzing and sharing spectra. I am just now building the infrastructure for selecting and pulling data out of the hoard I am collecting, and I could see this playing a part. I haven't been able to crack the localhost-only server aspect. If you have any further insights, that would be appreciated. |
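(To sketch the sandboxing idea: one container per user or session, each with its own named volume, keeps the SQLite data physically separate. The image name interspec-web, the internal port 8080, and the in-container data path /data below are illustrative assumptions, not the paths or ports InterSpec actually uses.)
$ docker run -d --name interspec-userA -p 8081:8080 -v interspec_userA_data:/data interspec-web
$ docker run -d --name interspec-userB -p 8082:8080 -v interspec_userB_data:/data interspec-web
# each named volume holds only that instance's user database, so sessions never share state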
|
I was leaning towards N42 as the most portable way of handing off spectra, and was wrestling with compression ideas, so this whitepaper you linked is timely. Thanks! |
Maybe #24 is useful? |
Issue brought up as part of pull-request sandialabs#42
|
I may have fixed the http-address issue in commit 4826023 (on the main branch, not yours, sorry). |
|
For converting spectra to URIs, you can use SpecUtils - it has bindings for a number of languages (including Python, for which there is a pip package). Also, if you just want to display spectra to users, the spectrum display code for InterSpec is actually inside SpecUtils, so you can just have it convert spectrum files to JSON and send that to the client, where you use SpectrumChartD3.js to display it and handle all the user interaction (zoom in/out, log/lin, etc.). InterSpec should handle any number of users no problem, but right now it just uses a cookie to identify them between sessions. Each user is kept separate in the sqlite3 DB, but their isolation just depends on me not having screwed up. If you serve through FCGI you can grab the user authentication through the environment (so let Apache, nginx, or whatever handle authentication), but FCGI is a real pain and I haven't had the greatest luck with it. Another option would be to use the authentication provided by Wt. Anyway, send me an email if you want to talk about more options. |
|
@ckuethe - thanks for contributing, and jogging my memory about this! I only tested that it looked like things were binding to 0.0.0.0 - I couldn't actually verify that it was visible to an external network - I'll try to check with a proper Docker build at some point. |
|
When I did that diff and ran InterSpec in docker, it definitely was visible to external clients. Also, you can use different addresses in 127/8 if you need an arbitrary bind address. For example, in one terminal run:
$ echo hello | nc -v 127.0.0.1 6667
nc: connect to 127.0.0.1 port 6667 (tcp) failed: Connection refused
$ echo hello | nc -v 127.0.0.3 6667
Connection to 127.0.0.3 6667 port [tcp/ircd] succeeded! |
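(A rough sketch of checking this with a published port, assuming an image named interspec-web whose entrypoint passes its arguments through to the InterSpec executable, and assuming Wt's usual --http-port flag and port 8080; only --http-address is confirmed in this thread.)
$ docker run --rm -p 8080:8080 interspec-web --http-address 0.0.0.0 --http-port 8080
# then, from another machine on the network:
$ curl -sI http://<docker-host-ip>:8080/ | head -n 1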
|
I have some work to do merging the latest commits properly, as it now segfaults. I'll update when everything works again, and we can resume testing. |
|
Oh man, I just merged some stuff I worked on last night... sorry, I didn't see you had done more. You should probably ignore what I have for the moment; I'll try to pick up your changes and also look at things. |
|
A few things:
I have to switch over to some other tasks, but I'll keep going a little more on this stuff when I can. Thanks again for making all these fixes! |
Issue brought up as part of pull-request sandialabs#42
|
I too had to hack together a proper debug container. We could probably make this work with a different distribution, but now that I am in this deep, I need to know what the cause of this issue is. I usually try to work in dockerized environments for smaller server-type applications, as they tend to be relatively bulletproof and a breeze to replicate/reinstall. This could likely be made to "just work" if we switch to a Debian/Ubuntu container base, but the fact that it fails on Alpine means there is some issue in the code that isn't readily apparent. I am spinning up an Alpine debug environment presently. One question while I am knee-deep here: does anyone run this on anything but x86-64? If docker images are going to be published, it might make sense to produce an arm/arm64 build as well. |
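(A sketch of what a multi-arch image build could look like with docker buildx; the dockerfile path is the one updated in this PR, while the tag and the availability of a buildx builder with arm64 emulation are assumptions.)
$ docker buildx build --platform linux/amd64,linux/arm64 -f target/docker/alpine_web_container.dockerfile -t interspec-web:multiarch .
# add --push (with a registry-qualified tag) if the goal is to publish both architectures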
|
It would be nice to make a static build, and then run everything from a minimal base image. I'll admit I don't know enough about the subtleties of musl vs glibc to hazard a guess about the segfault. I'm unaware of anyone, besides possibly @ckuethe, who actively uses the Docker build - but my only insight into use is when people open issues or email questions. It would be nice if things ran on a Raspberry Pi, or some of these MCAs that also run Linux on an ARM SOM. |
|
If we get this working, it may make sense to just add building this to the CI - if for nothing else, to keep the build from rotting again in the future. |
|
The musl thing must have been it; everything runs fine with a Debian docker base. I didn't even know musl was a thing. I'll package this up with another PR tomorrow. The debian_build_debug.dockerfile in my repo works as expected. |
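(The kind of smoke test a CI job could run to keep this from rotting, using the debian_build_debug.dockerfile named above; the tag, the published port, and the assumption that the container serves HTTP on 8080 without extra arguments are illustrative.)
$ docker build -f target/docker/debian_build_debug.dockerfile -t interspec-web:debian .
$ docker run --rm -d --name interspec-ci -p 8080:8080 interspec-web:debian
$ sleep 5   # give the server a moment to start
$ curl -fsS -o /dev/null http://localhost:8080/ && echo "web build still serves"
$ docker rm -f interspec-ci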
|
I've just been looking at it, and from what I can tell, the crash is happening while parsing some XML, and I think it's related to the small default stack size of musl. I think I just found a workaround - although I like the idea of just running it in Debian better, since I don't know where else this issue may show up. |
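(For reference, one commonly cited workaround for musl's small default thread stack - not necessarily the exact fix that was pushed - is to request a larger PT_GNU_STACK size at link time, which recent musl versions use as their default thread stack size; the 8 MB value is just an example.)
$ cmake -DCMAKE_EXE_LINKER_FLAGS="-Wl,-z,stack-size=8388608" [other InterSpec CMake options] /path/to/InterSpec
# glibc threads get a much larger default stack, which is consistent with the Debian build not crashing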
|
I've pushed some fixes to the main branch that get the Alpine build working; it's now a static executable, so the container to run it is based on a minimal image. I think it makes sense to have your Debian-based container as the default option people would use, with the Alpine one as an option if they are brave. Sorry this ended up being so much hassle for you! Let me know how I can help get your files from the "cloud" into InterSpec. |
All my attempts to compile with the BUILD_FOR_WEB_DEPLOYMENT flag on Linux have failed, and I believe that the current implementations are out of date. I have updated /target/docker/alpine_web_container.dockerfile so that it now compiles and serves InterSpec. While this appears to work perfectly for my environment, it may need a few refinements to ensure it doesn't interfere with your more popular implementations.
One small change was needed to SpecUtils, and I have simply included a patch file (auto-applied in Docker) here before I open a PR against that repo. Note: the Dockerfile currently points to my fork of InterSpec and will need to be updated on merge.