
Containerd CRI Integration #286

Closed
Random-Liu opened this issue Apr 29, 2017 · 18 comments
Assignees
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.
  • stage/stable: Denotes an issue tracking an enhancement targeted for Stable/GA status.

Comments

@Random-Liu
Member

Feature Name

  • One-line feature description (can be used as a release note): Alpha containerd CRI integration with basic sandbox/container lifecycle management and image management.
  • Primary contact (assignee): @Random-Liu
  • Responsible SIGs: sig-node
  • Design proposal link (community repo): https://github.com/kubernetes-incubator/cri-containerd/blob/master/docs/proposal.md
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @mikebrow @yujuhong
  • Approver (likely from SIG/area to which feature belongs): @dchen1107
  • Feature target (which target equals to which milestone): alpha 1.7
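For context, pointing a kubelet at cri-containerd looked roughly like the following. This is a minimal sketch for illustration only; the socket path and endpoint format are assumptions, not something prescribed by this issue.

```
# Minimal sketch: run the kubelet against a remote CRI endpoint served by cri-containerd.
# The socket path below is an assumption; check the cri-containerd docs for the real value.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \
  --image-service-endpoint=unix:///var/run/cri-containerd.sock
```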
@Random-Liu Random-Liu self-assigned this Apr 29, 2017
@dchen1107 dchen1107 added the sig/node Categorizes an issue or PR as relevant to SIG Node. label May 1, 2017
@dchen1107 dchen1107 added this to the v1.7 milestone May 1, 2017
@idvoretskyi idvoretskyi added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label May 3, 2017
@idvoretskyi
Member

@Random-Liu this looks like a notable feature. Can any documentation be provided? /cc @dchen1107 @kubernetes/sig-node-feature-requests

@Random-Liu
Member Author

@idvoretskyi Hi, the documentation will live in the cri-containerd repo.

We'll add documentation to k8s.io in the next release, once the feature is complete.

@idvoretskyi
Member

idvoretskyi commented Jun 20, 2017 via email

@calebamiles calebamiles modified the milestones: 1.8, 1.7 Jul 25, 2017
@calebamiles calebamiles added stage/beta Denotes an issue tracking an enhancement targeted for Beta status and removed stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status labels Jul 25, 2017
@dchen1107 dchen1107 added stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status and removed stage/beta Denotes an issue tracking an enhancement targeted for Beta status labels Aug 1, 2017
@idvoretskyi
Member

@Random-Liu @kubernetes/sig-node-feature-requests any progress for 1.8?

If yes, please, update the features tracking board with the relevant data.

@calebamiles
Contributor

1.0-alpha planned for release at the end of September

@Random-Liu
Member Author

Random-Liu commented Sep 12, 2017

Status Update

@calebamiles
Contributor

thanks for the update, @Random-Liu

@idvoretskyi idvoretskyi added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Sep 19, 2017
@idvoretskyi
Member

@Random-Liu can you update the features tracking board with the relevant data?

Thanks.

@Random-Liu
Member Author

@idvoretskyi Will do. Thanks for the reminder!

@Random-Liu
Member Author

@idvoretskyi Done

k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Nov 3, 2017
Automatic merge from submit-queue (batch tested with PRs 54488, 54838, 54964). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Add support for alternative container runtimes in `kube-up.sh`

For kubernetes/enhancements#286.

This PR adds four new environment variables to `kube-up.sh` to support alternative container runtimes:
1) `KUBE_MASTER_EXTRA_METADATA` and `KUBE_NODE_EXTRA_METADATA`: add extra metadata to the master and node instances. With these we can specify a different cloud-init for a different container runtime, and also add extra metadata for the new cloud-init, e.g. [master.yaml](https://github.com/Random-Liu/cri-containerd/blob/7d739662141cc137f8b1e82a9824b18be2e5df21/test/e2e/master.yaml).
2) `KUBE_CONTAINER_RUNTIME_ENDPOINT`: specify a different socket for a different container runtime. It is only used when non-empty.
3) `KUBE_LOAD_IMAGE_COMMAND`: specify a different image-load command for a different container runtime.

An example for cri-containerd:
```
export KUBE_MASTER_EXTRA_METADATA="user-data=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/e2e/master.yaml,cri-containerd-configure-sh=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/configure.sh"
export KUBE_NODE_EXTRA_METADATA="user-data=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/e2e/node.yaml,cri-containerd-configure-sh=${GOPATH}/src/github.com/kubernetes-incubator/cri-containerd/test/configure.sh"
export KUBE_CONTAINER_RUNTIME="remote"
export KUBE_CONTAINER_RUNTIME_ENDPOINT="/var/run/cri-containerd.sock"
export KUBE_LOAD_IMAGE_COMMAND="/home/cri-containerd/usr/local/bin/cri-containerd load"
export NETWORK_POLICY_PROVIDER="calico"
```
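With those variables exported, the cluster is brought up as usual; a usage sketch follows (the checkout path assumes a conventional GOPATH layout):

```
# Sketch only: kube-up.sh picks the settings up from the environment.
cd "${GOPATH}/src/k8s.io/kubernetes"
./cluster/kube-up.sh
```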

Signed-off-by: Lantao Liu <lantaol@google.com>

```release-note
none
```
/cc @yujuhong @dchen1107 @feiskyer @mikebrow @abhi @mrunalp @runcom 
/cc @kubernetes/sig-node-pr-reviews
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Nov 3, 2017
Automatic merge from submit-queue (batch tested with PRs 54488, 54838, 54964). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Fix calico network policy for opensource.

For kubernetes/enhancements#286

This PR:
1) Adds a `NON_MASTER_NODE_LABELS` env variable, and applies the Calico node label only on non-master nodes (illustrative settings are sketched below).
2) Sets IP masquerade rules in cloud-init so that we don't need the ip-masq-agent, as discussed with @dchen1107 @dnardo.
3) Lets the master use `${NETWORK_PROVIDER}` instead of a fixed `cni`, because we won't run the Calico node agent on the master. The master network should be configured separately (kubenet by default).
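For illustration, the relevant settings might look like the sketch below; the node label value is an assumption, not taken from this PR:

```
# Sketch only: enable Calico network policy on GCE and label non-master nodes
# so the calico-node DaemonSet schedules there (label value is assumed).
export NETWORK_POLICY_PROVIDER="calico"
export NON_MASTER_NODE_LABELS="projectcalico.org/ds-ready=true"
./cluster/kube-up.sh
```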

With this PR, I can now bring up a cluster on GCE with `NETWORK_POLICY_PROVIDER=calico`:
```console
$ cluster/kubectl.sh get pods --all-namespaces
NAMESPACE     NAME                                                  READY     STATUS    RESTARTS   AGE
kube-system   calico-node-9bxbv                                     2/2       Running   0          13m
kube-system   calico-node-kjxtw                                     2/2       Running   0          13m
kube-system   calico-node-vertical-autoscaler-67fb4f45bd-hcjmw      1/1       Running   0          16m
kube-system   calico-node-xs2s2                                     2/2       Running   0          13m
kube-system   calico-typha-7c4d876ddf-d4dtx                         1/1       Running   0          15m
kube-system   calico-typha-horizontal-autoscaler-5f477cdc66-qwwph   1/1       Running   0          16m
kube-system   calico-typha-vertical-autoscaler-58f7d686f7-pn72s     1/1       Running   0          16m
kube-system   etcd-empty-dir-cleanup-e2e-test-lantaol-master        1/1       Running   0          16m
kube-system   etcd-server-e2e-test-lantaol-master                   1/1       Running   0          16m
kube-system   etcd-server-events-e2e-test-lantaol-master            1/1       Running   0          16m
kube-system   event-exporter-v0.1.7-9d4dbb69c-m76v5                 2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0.10-25dmf                             2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0.10-kgxsk                             2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0.10-p75xg                             2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0.10-xzh77                             2/2       Running   0          16m
kube-system   heapster-v1.5.0-beta.0-5cf4d9dff7-dmvm7               4/4       Running   0          13m
kube-system   kube-addon-manager-e2e-test-lantaol-master            1/1       Running   0          15m
kube-system   kube-apiserver-e2e-test-lantaol-master                1/1       Running   0          16m
kube-system   kube-controller-manager-e2e-test-lantaol-master       1/1       Running   0          16m
kube-system   kube-dns-79bdcb6c9f-2bpc8                             3/3       Running   0          15m
kube-system   kube-dns-79bdcb6c9f-gr686                             3/3       Running   0          16m
kube-system   kube-dns-autoscaler-996dcfc9d-pfs4s                   1/1       Running   0          16m
kube-system   kube-proxy-e2e-test-lantaol-minion-group-3khw         1/1       Running   0          16m
kube-system   kube-proxy-e2e-test-lantaol-minion-group-6878         1/1       Running   0          16m
kube-system   kube-proxy-e2e-test-lantaol-minion-group-j9rq         1/1       Running   0          16m
kube-system   kube-scheduler-e2e-test-lantaol-master                1/1       Running   0          16m
kube-system   kubernetes-dashboard-765c6f47bd-lsw5r                 1/1       Running   0          16m
kube-system   l7-default-backend-6d477bf555-x54zf                   1/1       Running   0          16m
kube-system   l7-lb-controller-v0.9.7-e2e-test-lantaol-master       1/1       Running   0          16m
kube-system   metrics-server-v0.2.0-9c4f8c48d-gkl79                 2/2       Running   0          13m
kube-system   monitoring-influxdb-grafana-v4-54df94856c-krkvb       2/2       Running   0          16m
kube-system   rescheduler-v0.3.1-e2e-test-lantaol-master            1/1       Running   0          16m
```

**Note that with this PR, the master node will use kubenet by default, and network policy will not apply to the master node.**

**We need this to unblock `cri-containerd` integration with `kube-up.sh`.**
/cc @dchen1107 @dnardo Please take a look.
@kubernetes/sig-network-misc @kubernetes/sig-cluster-lifecycle-misc 

Signed-off-by: Lantao Liu <lantaol@google.com>

```release-note
None
```
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Nov 17, 2017
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`.

For kubernetes/enhancements#286.

Renames `SYSTEMD_SERVICES` to `LOG_DUMP_SYSTEMD_SERVICES`. test-infra disables log dump in our e2e framework and uses its own log-dump logic (https://github.com/kubernetes/test-infra/blob/master/kubetest/e2e.go#L480-L497), so the flags we added in #55288 will not work in test-infra.

Fortunately, test-infra uses the same script, `cluster/log-dump/log-dump.sh`, so we can still configure systemd services by setting the environment variable globally.

The original environment variable name is too general to be set globally, so this change renames it to something more specific.
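As a usage sketch (the service names and output directory here are assumptions for illustration):

```
# Sketch only: have log-dump.sh also collect logs for containerd-related systemd units.
export LOG_DUMP_SYSTEMD_SERVICES="containerd,cri-containerd"
./cluster/log-dump/log-dump.sh /tmp/cluster-logs
```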

**Release note**:

```release-note
none
```
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2018
@mikebrow
Member

mikebrow commented Jan 7, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 7, 2018
dims pushed a commit to dims/kubernetes that referenced this issue Feb 27, 2018
Automatic merge from submit-queue (batch tested with PRs 60011, 59256, 59293, 60328, 60367). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Add CPU/Memory pod stats for CRI stats.

For kubernetes/enhancements#286.

Adds CPU and memory stats for pods.
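The new pod-level CPU and memory stats surface through the kubelet Summary API. A quick way to inspect them (the node name is a placeholder):

```
# Sketch: fetch the kubelet stats summary via the API server proxy and look for
# pod-level "cpu" and "memory" blocks. Requires `kubectl proxy` running locally.
kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/nodes/<node-name>/proxy/stats/summary" | less
```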

@kubernetes/sig-node-pr-reviews 
/cc @dashpole @yujuhong @abhi @yguo0905 
Signed-off-by: Lantao Liu <lantaol@google.com>



**Release note**:

```release-note
Summary API will include pod CPU and Memory stats for CRI container runtime.
```
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 7, 2018
@justaugustus
Member

@Random-Liu
Any plans for this in 1.11?

If so, can you please ensure the feature is up-to-date with the appropriate:

  • Description
  • Milestone
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

cc @idvoretskyi

@mikebrow
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 17, 2018
@justaugustus
Member

@Random-Liu @mikebrow pinging again for status.

Any plans for this in 1.11?

If so, can you please ensure the feature is up-to-date with the appropriate:

  • Description
  • Milestone
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

@justaugustus justaugustus added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 20, 2018
@justaugustus justaugustus modified the milestones: v1.8, next-milestone Apr 20, 2018
@Random-Liu Random-Liu added stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status and removed stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status labels Apr 20, 2018
@Random-Liu
Member Author

The feature is GA now.

Status Update

@justaugustus
Member

Thanks for the update, @Random-Liu!
I'll go ahead and close this out.

/remove-help
/close

@k8s-ci-robot k8s-ci-robot removed the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Apr 21, 2018
@justaugustus justaugustus removed their assignment Apr 21, 2018