Support docker file #838
Conversation
I suggest we split the 3 nebula services into 3 docker images, following the recommendation of one service per container. You may think the 3 nebula services share many common dependencies; for that we can create a base image, as described in this document, and build the 3 service images on top of it. For easy usage, we can use docker-compose. Best practices for writing Dockerfiles is an excellent reference document.
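A minimal sketch of that layout, assuming a shared base image that installs the common dependencies; the image name, paths, port, and flags are illustrative assumptions, not the final layout:

```dockerfile
# Sketch of one per-service image (graphd shown); metad and storaged would
# follow the same pattern on top of a shared base image (here assumed to be
# called vesoft/nebula-base) that carries the common runtime dependencies.
FROM vesoft/nebula-base:latest
COPY nebula-graphd.conf /usr/local/nebula/etc/
EXPOSE 3699
CMD ["/usr/local/nebula/bin/nebula-graphd", "--flagfile=/usr/local/nebula/etc/nebula-graphd.conf", "--daemonize=false"]
```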
Dockerfile
Outdated
RUN rpm -ivh nebula-1.0.0-beta.el7-5.x86_64.rpm
RUN rm nebula-1.0.0-beta.el7-5.x86_64.rpm
We should extract a NEBULA_VERSION env variable to avoid updating the package name each release.
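A sketch of that suggestion using a build-time ARG rather than ENV (the choice of ARG and its default value are assumptions taken from the package name above):

```dockerfile
# The package version lives in one place; bumping a release only means
# changing this default or passing --build-arg NEBULA_VERSION=... at build time.
ARG NEBULA_VERSION=1.0.0-beta.el7-5
RUN rpm -ivh nebula-${NEBULA_VERSION}.x86_64.rpm
RUN rm nebula-${NEBULA_VERSION}.x86_64.rpm
```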
Good point
Each 'RUN' command will create a new layer; refer to #416.
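For example, the install and cleanup steps above could be collapsed into a single RUN (a sketch, reusing the NEBULA_VERSION ARG suggested earlier):

```dockerfile
# One RUN = one layer: installing and deleting the .rpm in the same step keeps
# the package file out of the image; with two separate RUNs the .rpm stays
# baked into the intermediate layer even though a later RUN removes it.
ARG NEBULA_VERSION=1.0.0-beta.el7-5
RUN rpm -ivh nebula-${NEBULA_VERSION}.x86_64.rpm \
    && rm nebula-${NEBULA_VERSION}.x86_64.rpm
```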
Thanks for your comment. It's really a good suggestion. This PR should refer to Build docker image from Dockerfile.
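For reference, a typical build invocation might look like the following; the image tag is a hypothetical example, and the --build-arg only applies if the Dockerfile declares the NEBULA_VERSION ARG suggested above:

```bash
# Build the image from the Dockerfile in the current directory,
# overriding the packaged version at build time.
docker build --build-arg NEBULA_VERSION=1.0.0-beta.el7-5 -t vesoft/nebula-graphd:1.0.0-beta .
```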
@darionyaphet Given the following docker-compose file, we can bring the services up with:
$ docker-compose up -d
We can improve this further.
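A minimal docker-compose.yml for the three services might look like the sketch below; the image names, ports, and dependency order are assumptions based on Nebula Graph 1.0 defaults, not the final layout:

```yaml
version: '3'
services:
  metad:
    image: vesoft/nebula-metad:latest      # assumed image name
    ports:
      - "45500:45500"                      # assumed metad default port
  storaged:
    image: vesoft/nebula-storaged:latest   # assumed image name
    ports:
      - "44500:44500"                      # assumed storaged default port
    depends_on:
      - metad
  graphd:
    image: vesoft/nebula-graphd:latest     # assumed image name
    ports:
      - "3699:3699"                        # assumed graphd default port
    depends_on:
      - metad
```

With a file like this in place, `docker-compose up -d` starts all three services in the background, as noted above.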
## What type of PR is this?
- [ ] bug
- [ ] feature
- [X] enhancement

## What problem(s) does this PR solve?

#### Issue(s) number:

#### Description:

Let tag/edge inserts use atomic ops again.

1. Up to and including Nebula 2.0, we used atomicOp to handle atomic operations, e.g. changing a tag/edge and its index in one batch. It works, but because we implement it by sending raft logs synchronously (every atomic op has to be sent separately, even if they are disjoint), it is really slow.
2. In 2.6.x we use a memory lock for concurrency control. We check early (in the processor) whether a request can run. If it can, we do the get/put as a normal log, which we can handle in batches; if it can't, we return an error. However, some users complained that they hit so many "Conflict" errors that they had to retry, and they believe this slows down bulk inserts. We explained that those conflicts have to be retried either inside Nebula itself or in the client, but they didn't agree with us.
3. So now we implement a hybrid mode. We keep a memory lock in raft, just like solution 2. We check every log to see if it can be combined with the previous logs. If it can, we send them in a batch; if it can't, we handle it the atomicOp way (solution 1).

## How do you solve it?

## Special notes for your reviewer, ex. impact of this fix, design document, etc:

## Checklist:
Tests:
- [ ] Unit test (positive and negative cases)
- [ ] Function test
- [ ] Performance test
- [ ] N/A

Affects:
- [ ] Documentation affected (Please add the label if documentation needs to be modified.)
- [ ] Incompatibility (If it breaks the compatibility, please describe it and add the label.)
- [ ] If it's needed to cherry-pick (If cherry-pick to some branches is required, please label the destination version(s).)
- [X] Performance impacted: Consumes more CPU/Memory

## Release notes:
As described above, conflicting concurrent tag/edge inserts will no longer report "Data conflict"; they are executed in a queue instead.

Migrated from vesoft-inc#3926

Co-authored-by: lionel.liu@vesoft.com <52276794+liuyu85cn@users.noreply.github.com>