This repository has been archived by the owner on Jun 28, 2024. It is now read-only.

ci: detect the existence of yq before using filter scheme on aarch64 #868

Merged
merged 1 commit into kata-containers:master on Nov 22, 2018

Conversation

Pennyzct
Contributor

Since the filter scheme on an aarch64 bare-metal machine would fail when yq is missing, we should detect whether yq exists before using it, and install it if it is really missing.

Fixes: #867

Signed-off-by: Penny Zheng penny.zheng@arm.com

# install yq if not exist
if ! command -v yq >/dev/null; then
install_yq
fi
Contributor

Could you fix the indentation here - you seem to have three levels of it :)

btw, you could simplify to something like the following:

[ -z "$(command -v yq)" ] && install_yq

Contributor Author

Right, I prefer the simplification. 😁

Contributor

@marcov marcov left a comment

hi @Pennyzct, this looks good, let me know what you think about the comments I left.
Also, there's a small typo in the commit body (schme).

.ci/lib.sh Outdated
@@ -118,7 +118,7 @@ function install_yq() {
yq_version=$(basename "${yq_latest_url}")

local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_${goos}_${goarch}"
curl -o "${yq_path}" -L ${yq_url}
curl -o "${yq_path}" -Ls ${yq_url}
Contributor

Can you also add -S (uppercase) to the flags so that errors could still be printed?

Contributor Author

That's right. Originally I just wanted to get rid of this download progress output,

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   116    0   116    0     0     87      0 --:--:--  0:00:01 --:--:--    87
100 64230    0 64230    0     0  26662      0 --:--:--  0:00:02 --:--:--  120k

which, for some reason, curl sends to stderr. But -s alone would also suppress real error messages.
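
For reference, a minimal sketch of the flag combination being discussed, assuming the same yq_path/yq_url variables used in install_yq(): -s hides the progress meter (which curl writes to stderr), while -S turns error messages back on when -s is in effect.

# Hedged sketch, not the exact final diff:
#   -L follow redirects, -s silence the progress meter, -S still show errors
curl -o "${yq_path}" -LsS "${yq_url}"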

@@ -33,6 +35,10 @@ filter_and_build()

main()
{
# install yq if not exist
if ! command -v yq >/dev/null; then
Contributor

yq is installed in the non-standard PATH ${GOPATH_LOCAL}/bin/, so maybe it's better to check for ${GOPATH_LOCAL}/bin/yq?

Contributor Author

But since the aarch64 CI runs on a bare-metal machine, yq may already be installed in some other PATH, such as /usr/bin or /usr/local/bin. ☹
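
A possible compromise, sketched here on the assumption that GOPATH_LOCAL is set as in the CI scripts: treat yq as present if it is either on PATH or already installed under ${GOPATH_LOCAL}/bin.

# Hedged sketch: only install yq when it is neither on PATH nor in the GOPATH bin dir.
if ! command -v yq >/dev/null && [ ! -x "${GOPATH_LOCAL}/bin/yq" ]; then
	install_yq
fi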

main()
{
# install yq if not exist
if ! command -v yq >/dev/null; then
Contributor

same comment as before

@chavafg
Contributor

chavafg commented Oct 29, 2018

btw, the ARM job for this repo can be triggered using: /test-arm
Let's see how it goes...
When this job gets stable, we can change it to use the same trigger as the other jobs.

@Pennyzct
Contributor Author

@chavafg
thanks for launching the ARM job. I found another bug: the arch value is not passed here.

# If CI running on bare-metal, a few clean-up work before walking into test repo
if [ "${BAREMETAL}" == true ]; then
	clean_up_script="${tests_repo_dir}/.ci/${arch}/clean_up_${arch}.sh"
	[ -f "${clean_up_script}" ] && source "${clean_up_script}"
fi

I will add a new commit to fix it.
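
A minimal sketch of the kind of fix meant here, assuming the script can derive the architecture from uname -m (the actual follow-up commit may differ):

# Hedged sketch: make sure ${arch} is set before building the clean-up script path.
arch="$(uname -m)"
clean_up_script="${tests_repo_dir}/.ci/${arch}/clean_up_${arch}.sh"
[ -f "${clean_up_script}" ] && source "${clean_up_script}"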

@marcov
Contributor

marcov commented Oct 30, 2018

/test-arm

@marcov
Contributor

marcov commented Oct 30, 2018

lgtm

Approved with PullApprove

@grahamwhaley
Contributor

The ARM CI seems to have hit a network timeout issue - I've nudged a rebuild.

@marcov
Contributor

marcov commented Oct 31, 2018

Getting the following error now, which seems to be related to time settings not being configured correctly on the machine:

# cd .; git clone https://github.com/golang/dep /go/src/github.com/golang/dep
Cloning into '/go/src/github.com/golang/dep'...
fatal: unable to access 'https://github.com/golang/dep/': SSL certificate problem: certificate is not yet valid

@Pennyzct could you try to fix this for ARM? You may need to do something like sudo ntpdate pool.ntp.org, and also make sure that ntpdate or similar is installed.
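
For reference, a sketch of that clock fix on an apt-based host (an assumption; the CI machine may use a different package manager):

# Hedged sketch: install ntpdate if missing, then sync the clock once.
command -v ntpdate >/dev/null || sudo apt-get install -y ntpdate
sudo ntpdate pool.ntp.org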

@Pennyzct
Contributor Author

Hi~ I will log into the ARM CI machine on packet.net later to try to fix this. However, when I tried to run the whole CI test suite on my local machine, I found that quite a few new docker integration tests have been added, and a large portion of them do not work well on ARM, such as the new memory hotplug test (memory hotplug is not supported on ARM for now). So maybe I will raise a new issue to configure the filter to skip these tests.

@marcov
Contributor

marcov commented Oct 31, 2018

thanks @Pennyzct, that's a good idea :)

@Pennyzct
Contributor Author

Pennyzct commented Nov 1, 2018

/test-arm

@Pennyzct
Contributor Author

Pennyzct commented Nov 1, 2018

Hi~ @marcov I have already run ntpdate on the ARM CI machine, and git clone https://github.com/golang/dep now works on it. But it seems that I don't have permission to trigger the ARM CI, so could you please /test-arm for me again? Thanks! 😃

@marcov
Contributor

marcov commented Nov 1, 2018

let's hope :)
/test-arm

@grahamwhaley
Contributor

I think the ARM CI hit a timeout:

INFO: Install kernel from sources
INFO: kernel path does not exist, will download kernel
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  255k  100  255k    0     0   315k      0 --:--:-- --:--:-- --:--:--  315k
INFO: Download kernel version 4.14.67
INFO: Download kernel
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   178  100   178    0     0    229      0 --:--:-- --:--:-- --:--:--   229

  0 96.3M    0 15982    0     0  10218      0  2:44:42  0:00:01  2:44:41 10218
  0 96.3M    0 48750    0     0  19072      0  1:28:14  0:00:02  1:28:12 33032
...

  7 96.3M    7 7391k    0     0  25188      0  1:06:49  0:05:00  1:01:49 16718
Build timed out (after 5 minutes). Marking the build as aborted.
Build was aborted

let's nudge it again - and see if that is repeatable etc.

/test-arm

@marcov
Contributor

marcov commented Nov 2, 2018

And now we are still getting this:

fatal: unable to access 'https://github.com/golang/dep/': SSL certificate problem: certificate is not yet valid

This happens in a container image, so I'd rule out anything related to the SSL certs installed on the host, and the image sha in the log is up to date.

Checking man openssl-verify:

       X509_V_ERR_CERT_NOT_YET_VALID
           The certificate is not yet valid: the notBefore date is after the current time.

@Pennyzct some things you could try doing (sketched below):

  • Login to the machine and run docker build --pull -t foobar -f stress/Dockerfile stress.
  • If that does not give errors, add a RUN date line at the beginning of stress/Dockerfile as a debug aid, and let's re-try running the CI.
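
Roughly what that debug run could look like, sketched on the assumption that stress/Dockerfile starts with a FROM line; only the RUN date line is a temporary debug addition:

# Hedged sketch: rebuild with a freshly pulled base image, then inspect the build output.
docker build --pull -t foobar -f stress/Dockerfile stress
# In stress/Dockerfile, temporarily add right after the FROM line:
#   RUN date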

@sboeuf

sboeuf commented Nov 5, 2018

/test

@Pennyzct
Contributor Author

Pennyzct commented Nov 6, 2018

Hi~ @marcov thanks for the detailed instructions! ;) The whole stress image build was successful on my local ARM machine, and I will try it on the packet.net machine asap.

@Pennyzct
Contributor Author

Pennyzct commented Nov 7, 2018

The time in the container is inconsistent with the time on the host, although I think they should be the same by default.

  • host
root@testing-1:~# date
Wed Nov  7 16:37:01 CST 2018
  • container
root@testing-1:~# docker run -it ubuntu
root@14834c3b0c89:/# date
Mon Mar  5 22:16:43 UTC 2018
 root@testing-1:~# docker run -it alpine
/ # date
Mon Mar  5 22:16:43 UTC 2018

I have tried multiple images, such as alpine and ubuntu, and they all share the same output.
Even when I tried to override /etc/localtime with the host's copy, it was not interpreted correctly.

root@testing-1:~# docker run -it -v /etc/localtime:/etc/localtime:ro ubuntu
root@f6d9f22653c3:/# date
Tue Mar  6 06:16:43 CST 2018

The above errors only occur on this machine on packet.net; they do not occur on my other local ARM machine.
I'm quite confused now. 😭 Any thoughts? @marcov @jodh-intel @grahamwhaley Maybe I should re-install docker to see if it repeats?

@marcov
Contributor

marcov commented Nov 7, 2018

hi @Pennyzct, thanks for the investigation!
This looks related to the way the time is set inside the guest VM, so I don't think reinstalling docker makes a difference.
Let's see what others here think, and if there are no obvious solutions you could open a new issue in kata-containers/runtime to gather feedback from more people.

@grahamwhaley
Contributor

Nice find, and yeah, that to me feels like it is to do with how the time (actually, I suspect the timezone) is mapped (or not) into the container.
afaik, we don't do anything specific in kata to handle this. It's a little hard for me to test locally, as my local timezone matches UTC right now :-). If you change the above date to date -u, does it at least show that the container's time is in sync with the host - just with a different timezone?
If it does, then I would expect that not to affect the certificate check :-)

More investigation needed. I agree with @marcov that I doubt a docker re-install will change this.
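
The quick check being suggested here, as a sketch: compare UTC time on the host and inside a throwaway container; if they match, only the timezone mapping differs.

# Hedged sketch: print wall-clock time in UTC on the host and in a container.
date -u
docker run --rm ubuntu date -u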

@Pennyzct
Contributor Author

Pennyzct commented Nov 8, 2018

We have resolved it! 🎊🎉🎊🎉 It's because the rtc device is missing in the VM. Wei will refine the kernel config to fix it asap.

@Weichen81

@Pennyzct kata-containers/packaging#239 - this one fixes it : )

@marcov
Contributor

marcov commented Nov 8, 2018

kata-containers/packaging#239 was just merged, so let's re-test
/test-arm

@marcov
Contributor

marcov commented Nov 8, 2018

@Pennyzct can you please check the docker installation on the ARM CI machine? It's not finding the docker binary:

+ .ci/jenkins_job_build.sh github.com/kata-containers/tests
Setup env for kata repository: github.com/kata-containers/tests
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/.ci/aarch64/clean_up_aarch64.sh: line 11: docker: command not found
Build step 'Execute shell' marked build as failure

@grahamwhaley
Contributor

Hmm, that error looks to be in the cleanup scripts - maybe that is not invalid (and/or should not be a fatal error).
Possibly related (I'll dig, and open a new Issue probably), but I'm seeing some docker install issues on my new metrics CI bare metal machine:

12:01:59 Install haveged
12:02:03 manage_ctr_mgr.sh - WARNING: docker is not installed on this system
12:02:03 manage_ctr_mgr.sh - WARNING: docker is not installed on this system
12:02:03 <13>Nov  8 12:02:03 manage_ctr_mgr.sh: Installing docker v18.06-ce
12:02:03 Reading package lists...
12:02:04 Building dependency tree...
12:02:04 Reading state information...
12:02:04 ca-certificates is already the newest version (20170717~16.04.1).
12:02:04 software-properties-common is already the newest version (0.96.20.7).
12:02:04 The following packages will be upgraded:
12:02:04   apt-transport-https
12:02:04 1 upgraded, 0 newly installed, 0 to remove and 27 not upgraded.
12:02:04 Need to get 26.2 kB of archives.
12:02:04 After this operation, 1,024 B of additional disk space will be used.
12:02:04 Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-transport-https amd64 1.2.29 [26.2 kB]
12:02:05 debconf: unable to initialize frontend: Dialog
12:02:05 debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
12:02:05 debconf: falling back to frontend: Readline
12:02:05 debconf: unable to initialize frontend: Readline
12:02:05 debconf: (This frontend requires a controlling tty.)
12:02:05 debconf: falling back to frontend: Teletype
12:02:05 dpkg-preconfigure: unable to re-open stdin: 
12:02:05 Fetched 26.2 kB in 0s (116 kB/s)
12:02:05 (Reading database ... 
(Reading database ... 5%
(Reading database ... 10%
(Reading database ... 15%
(Reading database ... 20%
(Reading database ... 25%
(Reading database ... 30%
(Reading database ... 35%
(Reading database ... 40%
(Reading database ... 45%
(Reading database ... 50%
(Reading database ... 55%
(Reading database ... 60%
(Reading database ... 65%
(Reading database ... 70%
(Reading database ... 75%
(Reading database ... 80%
(Reading database ... 85%
(Reading database ... 90%
(Reading database ... 95%
(Reading database ... 100%
(Reading database ... 60864 files and directories currently installed.)
12:02:05 Preparing to unpack .../apt-transport-https_1.2.29_amd64.deb ...
12:02:05 Unpacking apt-transport-https (1.2.29) over (1.2.27) ...
12:02:05 Setting up apt-transport-https (1.2.29) ...
12:02:06 OK
12:02:08 Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
12:02:08 Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
12:02:08 Get:3 https://download.docker.com/linux/ubuntu xenial InRelease [66.2 kB]
12:02:08 Hit:4 http://ppa.launchpad.net/alexlarsson/flatpak/ubuntu xenial InRelease
12:02:08 Ign:5 http://download.opensuse.org/repositories/home:/katacontainers:/release/xUbuntu_16.04  InRelease
12:02:08 Hit:6 http://download.opensuse.org/repositories/home:/katacontainers:/release/xUbuntu_16.04  Release
12:02:08 Get:7 http://archive.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
12:02:08 Get:8 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages [5,295 B]
12:02:09 Fetched 287 kB in 1s (271 kB/s)
12:02:11 Reading package lists...
12:02:11 E: read, still have 1 to read but none left
12:02:11 Reading package lists...
12:02:12 Building dependency tree...
12:02:12 Reading state information...
12:02:12 E: Version '' for 'docker-ce' was not found
12:02:12 Build step 'Execute shell' marked build as failure

I wonder if the docker repos are down maybe...

@chavafg
Contributor

chavafg commented Nov 8, 2018

@grahamwhaley, docker released the new 18.09, but it seems the client package is now docker-ce-cli, so manage_ctr_mgr.sh is currently broken.

Since that filter scheme on aarch64 bare-metal machine would fail if it
is lack of yq, we should detect the existence of yq before going into
and also install it if it is really missing.

Fixes: kata-containers#867

Jira: ENTOS-480
Change-Id: I361e55449df618e2129f4957aff537fd38995b9b
Signed-off-by: Penny Zheng penny.zheng@arm.com
@chavafg
Contributor

chavafg commented Nov 9, 2018

/test

@chavafg
Contributor

chavafg commented Nov 9, 2018

/test-arm

@chavafg
Contributor

chavafg commented Nov 9, 2018

still getting the docker: command not found error.
@Pennyzct can you please install it manually?

It seems we still have an issue on bare-metal machines where docker is not installed.
The cleanup scripts make use of docker and are executed at the very beginning of jenkins_job_build.sh, before the CI scripts have installed docker.

/cc @grahamwhaley

@grahamwhaley
Contributor

Ah, OK - we should probably either not have -e on the cleanup scripts, or guard the docker commands with a command -v docker type check.
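
A sketch of what such a guard could look like at the top of a clean-up script, assuming the script is sourced (as jenkins_job_build.sh does above), so return is valid:

# Hedged sketch: skip the docker clean-up quietly when docker is not installed yet.
if ! command -v docker >/dev/null 2>&1; then
	echo "docker not installed, skipping docker clean-up"
	return 0
fi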

@Pennyzct
Contributor Author

So sorry for the delay; I was offline for a few days due to an offsite meeting.
@chavafg docker has already been re-installed; let's have another try.
@grahamwhaley I think maybe we should do the command -v docker check before running the clean_env function.

@marcov
Contributor

marcov commented Nov 16, 2018

/test-arm

@Pennyzct
Contributor Author

I have allowed 10 minutes for the docker build of the vish/stress image, but it seems it still ran into the CI timeout???

Running command '/usr/bin/docker [docker build -t vish/stress /home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/stress]'
Build timed out (after 5 minutes). Marking the build as aborted.
Build was aborted

I have logged into the ARM CI machine and run all the docker integration tests. It turned out that the docker build of the vish/stress image passed, and the failures were all the expected memory tests, which will be addressed by #882.

Summarizing 10 Failures:

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] memory constraints run container using memory constraints [It] should have applied the constraints 
/root/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:179

Ran 160 of 220 Specs in 1812.211 seconds
FAIL! -- 150 Passed | 10 Failed | 0 Pending | 60 Skipped --- FAIL: TestIntegration (2365.78s)
FAIL

Ginkgo ran 1 suite in 39m40.887588811s
Test Suite Failed
Makefile:50: recipe for target 'docker' failed
make: *** [docker] Error 1

@Pennyzct
Contributor Author

Hi~ @jodh-intel the missing docker has already been mentioned and resolved here, and after that @marcov helped me re-trigger the ARM CI here. However, it still got stuck on a timeout.
I logged into the CI machine earlier and re-ran the whole docker integration test suite locally; the output is posted in my previous comment.
BTW, could I maybe get permission to trigger the ARM CI? It has been unstable recently and I have to constantly bother you all to trigger it for me again and again. 😥

@jodh-intel
Contributor

Hi @Pennyzct - I don't think I have those "super-powers" - @chavafg - could you help with this maybe?

@grahamwhaley
Contributor

I think the correct route to enable you to do this @Pennyzct is to add you to the kata github community (project) :-) To do that I think iirc you need to become a kata maintainer. First, let me ask, are you OK taking on the responsibility that brings? :-)
If so, then iirc we do an email to the dev list asking for some sponsors, and all being well, we then add you to the github community and that will then recognise your CI trigger action comments on Issues.
@chavafg , do the mechanics there sound right?
@kata-containers/architecture-committee - am I remembering the process correctly (I thought we might have slightly more docs on this, but I failed to find any in a quick surf).

@grahamwhaley
Contributor

@Pennyzct - heh, I'm now running into the 'docker not found' thing on my metrics machines - let me go do that 'cmd docker' checking PR to the cleanup scripts.... yeah, the cleanup scripts should never fail - I might also just || true them in the jenkins script as well.
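
The || true variant mentioned here would look roughly like this in the jenkins script, reusing the snippet quoted earlier in this thread:

# Hedged sketch: never let a failing clean-up script abort the whole job.
[ -f "${clean_up_script}" ] && source "${clean_up_script}" || true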

@chavafg
Contributor

chavafg commented Nov 20, 2018

@Pennyzct I increased the timeout from 5 to 10 minutes to see how this goes. I have also added you as an admin on this arm job, so you should be able to re-trigger using:

/test-arm

Let me know if you find an issue triggering the job.

@devimc devimc left a comment

lgtm

@Pennyzct
Contributor Author

Finally, we got this output 😭:

Summarizing 10 Failures:

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] run container and update its memory constraints should have applied the memory constraints [It] update memory constraints should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:247

[Fail] memory constraints run container using memory constraints [It] should have applied the constraints 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:179

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

[Fail] Hotplug memory when create containers Hotplug memory when create containers [It] hotplug memory when create containers should not fail 
/home/jenkins/workspace/kata-containers-tests-ARM-18.04-PR/go/src/github.com/kata-containers/tests/integration/docker/mem_test.go:74

Ran 160 of 220 Specs in 1808.729 seconds
FAIL! -- 150 Passed | 10 Failed | 0 Pending | 60 Skipped --- FAIL: TestIntegration (2356.05s)
FAIL

@chavafg thanks for the admin rights and for increasing the timeout.
As I said previously, all the expected failing tests are memory-related. Since the new memory-hotplug feature isn't working well on aarch64 (which is what I am working on at the moment), I opened #882 to skip those tests for now. So after merging #882 first, and then this, I think the ARM CI should work well again.

@Pennyzct
Contributor Author

@grahamwhaley thanks for the proposal, I'm very honored to take on the responsibility ;) and very happy to continue working on the ARM CI and related ARM issues, with great help from my mentor @Weichen81.

@gnawux
Member

gnawux commented Nov 21, 2018

@Pennyzct looking forward to getting more contributions from you and other ARM folks. 👍

@Pennyzct
Contributor Author

/test

@Pennyzct
Contributor Author

Hi~ all @marcov @grahamwhaley @jodh-intel this one also got a green check on the ARM CI ;).

@jodh-intel
Contributor

jodh-intel commented Nov 22, 2018

lgtm!

Approved with PullApprove

@jodh-intel jodh-intel merged commit ef00036 into kata-containers:master Nov 22, 2018