
Support docker file #838

Closed · darionyaphet wants to merge 1 commit

Conversation

darionyaphet (Contributor)

No description provided.

darionyaphet added the "do not review (PR: not ready for the code review yet)" label on Aug 22, 2019
yixinglu (Contributor) commented Aug 24, 2019

I suggest we split the 3 nebula services into 3 docker images, following the recommendation: one service per container.

You may think the 3 nebula services share many common dependencies. We can create a base image as described in this document, and have the 3 nebula service images build FROM that base image, so only one copy of the base image layer is stored on the user's disk.
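A minimal sketch of that layering (the `vesoft/nebula-base` image name and the dependency list are my assumptions for illustration, not actual project images):

```dockerfile
# Hypothetical shared base image, published once, e.g. as vesoft/nebula-base:latest
FROM centos:7.6.1810
RUN yum -y install wget openssh-clients net-tools && yum clean all
```

```dockerfile
# Each service image builds FROM the shared base, so the heavy base
# layer is downloaded and stored only once on the user's disk
FROM vesoft/nebula-base:latest
COPY nebula-metad /usr/local/nebula/bin/nebula-metad
ENTRYPOINT ["/usr/local/nebula/bin/nebula-metad", "--daemonize=false"]
```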

For easy usage, we can use docker-compose to deploy our nebula services with a docker-compose.yaml file.

Best practices for writing Dockerfiles is an excellent reference document.

Dockerfile (outdated):

```dockerfile
RUN rpm -ivh nebula-1.0.0-beta.el7-5.x86_64.rpm

RUN rm nebula-1.0.0-beta.el7-5.x86_64.rpm
```
Contributor:

We should extract a NEBULA_VERSION env variable to avoid updating the package name each release.
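A minimal sketch of that suggestion using a Docker build argument (the package-name pattern is taken from the snippet above):

```dockerfile
# Build-time argument: set once per release instead of editing the Dockerfile,
# e.g. docker build --build-arg NEBULA_VERSION=1.0.0-beta .
ARG NEBULA_VERSION

RUN rpm -ivh nebula-${NEBULA_VERSION}.el7-5.x86_64.rpm
RUN rm nebula-${NEBULA_VERSION}.el7-5.x86_64.rpm
```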

darionyaphet (Author):

Good point

Contributor:

Each RUN command will create a new layer.
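For illustration, chaining the two commands with `&&` keeps the image to a single layer, so the deleted rpm never survives in an intermediate layer (a sketch of the general pattern, not this PR's exact Dockerfile):

```dockerfile
# One RUN instruction -> one image layer; the rpm is installed and
# removed inside the same layer, so it adds no permanent image size
RUN rpm -ivh nebula-${NEBULA_VERSION}.el7-5.x86_64.rpm && \
    rm nebula-${NEBULA_VERSION}.el7-5.x86_64.rpm
```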

yixinglu (Contributor):

Refer to #416.

darionyaphet (Author):

> Refer to #416.

Thanks for your comment. It's really a good suggestion. Actually, we use Docker as a demo to show how to use Nebula.

This PR is about building a docker image from a Dockerfile, not about using Kubernetes to manage docker containers.

yixinglu (Contributor) commented Aug 28, 2019

@darionyaphet I think the 3 nebula services can share the same docker image for now; later we can build them into separate images. The Dockerfile would look like the following:

```dockerfile
FROM centos:7.6.1810

MAINTAINER darion.wang "darion.wang@vesoft.com"

ARG NEBULA_VERSION

RUN yum -y install wget openssh-clients openssh-server net-tools

RUN wget -O nebula.el7-5.x86_64.rpm \
    https://github.com/vesoft-inc/nebula/releases/download/v${NEBULA_VERSION}/nebula-${NEBULA_VERSION}.el7-5.x86_64.rpm && \
    rpm -ivh nebula.el7-5.x86_64.rpm && \
    rm nebula.el7-5.x86_64.rpm && \
    mkdir -p /usr/local/nebula/logs

WORKDIR /usr/local/nebula
```

We build the nebula-graph image for each release with:

```bash
$ docker build -t vesoft/nebula-graph:latest -f Dockerfile --build-arg NEBULA_VERSION=1.0.0-beta .
```

Given the following docker-compose.yaml, users can start nebula services with one shell command:

```bash
$ docker-compose up -d
```

```yaml
# docker-compose.yaml
version: '3.2'
services:
  metad:
    image: vesoft/nebula-graph:latest
    expose:
      - 45500
    entrypoint:
      - ./bin/nebula-metad
      - --flagfile=./etc/nebula-metad.conf
      - --daemonize=false
      - --meta_server_addrs=172.28.1.1:45500
      - --local_ip=172.28.1.1
      - --port=45500
    networks:
      nebula-net:
        ipv4_address: 172.28.1.1

  graphd:
    image: vesoft/nebula-graph:latest
    ports:
      - 6699:3699
    entrypoint:
      - ./bin/nebula-graphd
      - --flagfile=./etc/nebula-graphd.conf
      - --daemonize=false
      - --meta_server_addrs=172.28.1.1:45500
      - --port=3699
    networks:
      nebula-net:
        ipv4_address: 172.28.1.2

  storaged:
    image: vesoft/nebula-graph:latest
    expose:
      - 44500
    entrypoint:
      - ./bin/nebula-storaged
      - --flagfile=./etc/nebula-storaged.conf
      - --daemonize=false
      - --meta_server_addrs=172.28.1.1:45500
      - --local_ip=172.28.1.3
      - --port=44500
    networks:
      nebula-net:
        ipv4_address: 172.28.1.3

networks:
  nebula-net:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
```

We can improve the scripts/nebula.service bash script to accept parameters from outside, which would simplify the entrypoint options in the docker-compose file above. A rough sketch of that idea follows.
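This is only a sketch of such a wrapper; the argument names, defaults, and file layout are my assumptions for illustration, not the actual scripts/nebula.service interface:

```bash
#!/usr/bin/env bash
# Hypothetical wrapper: pick the daemon and pass addresses through,
# so a compose entrypoint could shrink to e.g. `./start.sh metad 172.28.1.1`
SERVICE=${1:?usage: $0 <metad|graphd|storaged> <local_ip>}
LOCAL_IP=${2:-127.0.0.1}
META_ADDRS=${META_ADDRS:-172.28.1.1:45500}

# Run in the foreground so the container stays alive with the daemon
exec "./bin/nebula-${SERVICE}" \
  --flagfile="./etc/nebula-${SERVICE}.conf" \
  --daemonize=false \
  --meta_server_addrs="${META_ADDRS}" \
  --local_ip="${LOCAL_IP}"
```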

dutor closed this in #923 on Sep 16, 2019
darionyaphet deleted the docker-file branch on September 16, 2019
liwenhui-soul pushed a commit to liwenhui-soul/nebula that referenced this pull request May 10, 2022

## What type of PR is this?
- [ ] bug
- [ ] feature
- [X] enhancement

## What problem(s) does this PR solve?
#### Issue(s) number: 

#### Description:

Let add tag / edge use atomic ops again.

1. In Nebula 2.0 and earlier, we used atomicOp to handle atomic operations,
    e.g. changing a tag/edge and its index in one batch.

    It works, but since we implement it by sending raft logs synchronously
    (every atomic op has to be sent separately, even when they are disjoint), it is really slow.

2. In 2.6.x we use a memory lock for concurrency control.
    We check early (in the processor) whether a request can run or not.
    If it can, we do the get/put as a normal log, which we can process in batches.
    If it can't, we return an error.

    However, some users complained that they hit so many "Conflict" errors
    that they had to retry, and they believed this slowed down their bulk inserts.
    We explained that those conflicts have to be retried either in Nebula itself or in the client,
    but it looks like they didn't agree with us.

3. So now we implement a hybrid mode.
    We keep a memory lock in raft, just like solution 2, and check every log to see if it can be combined with the previous logs.
    If it can, we send them in a batch;
    if it can't, we fall back to the atomicOp way (solution 1).

## How do you solve it?



## Special notes for your reviewer, ex. impact of this fix, design document, etc:



## Checklist:
Tests:
- [ ] Unit test(positive and negative cases)
- [ ] Function test
- [ ] Performance test
- [ ] N/A

Affects:
- [ ] Documentation affected (Please add the label if documentation needs to be modified.)
- [ ] Incompatibility (If it breaks the compatibility, please describe it and add the label.)
- [ ] If it's needed to cherry-pick (If cherry-pick to some branches is required, please label the destination version(s).)
- [X] Performance impacted: Consumes more CPU/Memory


## Release notes:

As described in the "Description" above, conflicting concurrent tag/edge inserts will no longer report "Data conflict", but will instead execute in a queue.

Migrated from vesoft-inc#3926

Co-authored-by: lionel.liu@vesoft.com <52276794+liuyu85cn@users.noreply.github.com>