Ci busy no more #143

Merged Jun 28, 2019 (24 commits)

Changes from 7 commits

Commits:
74676e5 Save performance in ci (kevindweb, Jun 5, 2019)
3a97ae4 Add flask to install (kevindweb, Jun 5, 2019)
ef255d0 Allowed for public linting of unauthorized users (koolzz, Jun 5, 2019)
8ba65de Starting the queue (koolzz, Jun 6, 2019)
3a1874b Start to fixing the queue problem (koolzz, Jun 7, 2019)
bf5e2a8 Merge branch 'develop' of https://github.com/kevindweb/openNetVM into… (koolzz, Jun 8, 2019)
17882a6 Added run mode for future advancements (koolzz, Jun 8, 2019)
1c2f52d Major fixes to ci list system (koolzz, Jun 8, 2019)
29bbef6 Remove debug lines (koolzz, Jun 8, 2019)
a1749cb Clean up definitions (koolzz, Jun 8, 2019)
0c3a90b Remove exit statement (koolzz, Jun 8, 2019)
45fdf70 Merge branch 'develop' of https://github.com/sdnfv/openNetVM-dev into… (koolzz, Jun 9, 2019)
c10dbb4 Add error handling in run_ci and fix docs (kevindweb, Jun 9, 2019)
5b06b38 Create polling thread for requests (kevindweb, Jun 10, 2019)
12935e3 Delete new line (kevindweb, Jun 11, 2019)
b771745 Account for run mode from #140 (kevindweb, Jun 12, 2019)
04380b1 Fixes to merge run mode conflict (kevindweb, Jun 12, 2019)
e7639ef Merge branch 'ci_performance_update' into ci_queue (kevindweb, Jun 12, 2019)
ed65879 Errors are much more difficult in threads :( (kevindweb, Jun 12, 2019)
3b01660 Smarter than using tuples (kevindweb, Jun 12, 2019)
58a13ff Merge branch 'develop' of https://github.com/kevindweb/openNetVM into… (kevindweb, Jun 12, 2019)
4bef1aa Fix merge conflict (kevindweb, Jun 12, 2019)
05a89f8 Added event-based handling instead of polling (onvmstats, Jun 17, 2019)
24965be Let request handler clears event (onvmstats, Jun 19, 2019)
ci/webhook-receiver.py (54 changes: 47 additions, 7 deletions)
@@ -13,6 +13,8 @@
import os
import subprocess
import logging
import time
import threading

# Global vars
EVENT_URL = "/github-webhook"
@@ -23,6 +25,8 @@
secret_file_name = None
private_key_file = None
secret = None
queue_lock = None
ci_request_list = None

app = Flask(__name__)

@@ -158,6 +162,20 @@ def filter_to_prs_and_pr_comments(json):

    return None

# run the manager, and rest of CI process
def run_manager(request_ctx):
    log_access_granted(request_ctx, "Running CI")
    os.system("./manager.sh config {} \"{}\" \"{}\"".format(request_ctx['id'], request_ctx['repo'], request_ctx['body']))

def poll_request_list():
    while True:
        # continuously check for waiting requests
        if len(ci_request_list) > 0:
            run_manager(ci_request_list.pop(0))
        else:
            # no requests, sleep
            time.sleep(1)
Contributor Author: Figured we could just sleep for a second, but not sure how long is best.

Member: Might want a sleeping queue sort of implementation in the future, but good enough for now.

Contributor Author: What do you mean?
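The "sleeping queue" suggestion above is roughly where the later commits head ("Added event-based handling instead of polling", "Let request handler clears event"). A minimal sketch of that shape, assuming a module-level threading.Event that the webhook handler sets after enqueuing a request; request_event, handle_requests, and the stub run_manager below are illustrative names, not code from this PR:

```python
import threading

queue_lock = threading.Lock()
ci_request_list = []
request_event = threading.Event()  # assumed name: set by the webhook handler

def run_manager(request_ctx):
    # stand-in for the run_manager() defined in this diff
    print("would run ./manager.sh for", request_ctx['id'])

def handle_requests():
    while True:
        # block until the handler signals that a request was queued,
        # instead of waking up every second to check the list
        request_event.wait()
        with queue_lock:
            if ci_request_list:
                request_ctx = ci_request_list.pop(0)
            else:
                # queue drained; sleep until the next signal
                request_event.clear()
                continue
        run_manager(request_ctx)

# handler side (inside the webhook route, illustrative):
#     with queue_lock:
#         ci_request_list.append(request_ctx)
#     request_event.set()
```

With this shape the worker thread consumes no CPU while idle and reacts to a new request immediately, at the cost of the handler having to signal the event.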


@app.route(EVENT_URL, methods=['POST'])
def init_ci_pipeline():
    request_ctx = filter_to_prs_and_pr_comments(request.json)

@@ -196,13 +214,28 @@ def init_ci_pipeline():
    out, err = proc2.communicate()
Member: Technically, if CI has finished a run and is in the middle of spawning a call, this out value will be wrong anyway; why don't we use that plus the length of the CI request queue to properly determine whether we can run instantly?
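A minimal sketch of the combined check being suggested, assuming out is the output of the earlier process check (the command itself is outside this hunk) and reusing the queue_lock and ci_request_list names from the diff; ci_is_busy is a hypothetical helper:

```python
def ci_is_busy(out, ci_request_list, queue_lock):
    # busy if the process check saw a running manager,
    # or if anything is already waiting in the queue
    with queue_lock:
        return bool(out.strip()) or len(ci_request_list) > 0
```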


    if (out):
Member: I don't think there is any reason to separate this into two branches. Let's just always do the one that grabs the queue_lock and does whatever processing it needs.

Contributor Author: Even if we change the "out" logic to also check len(ci_request_list) > 0, how can we ignore the else condition? Why would we want to "nuke" that?
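Putting the two review suggestions together, a single code path might always take queue_lock, derive "busy" from the process check plus the queue length, and leave the actual run to the worker thread. A sketch under those assumptions; queue_ci_request is hypothetical, while log_access_granted, queue_lock, and ci_request_list are the names used elsewhere in this diff:

```python
def queue_ci_request(request_ctx, out):
    # One path that always grabs queue_lock: decide busy/duplicate and enqueue.
    with queue_lock:
        busy = bool(out.strip()) or len(ci_request_list) > 0
        duplicate = any(req['id'] == request_ctx['id'] and
                        req['repo'] == request_ctx['repo']
                        for req in ci_request_list)
        if not duplicate:
            ci_request_list.append(request_ctx)

    if duplicate:
        log_access_granted(request_ctx, "Duplicate request already waiting, ignoring message")
    elif busy:
        log_access_granted(request_ctx, "CI busy, placing request in queue")
    else:
        # nothing running and nothing queued: the worker thread
        # will pick this request up immediately
        log_access_granted(request_ctx, "Running CI")
```

The trade-off is that even an idle service goes through the queue, so the worker thread becomes the single place where manager.sh is launched.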

print("Can't run CI, another CI run in progress")
log_access_granted(request_ctx, "CI busy, posting busy msg")
os.system("./ci_busy.sh config {} \"{}\" \"{}\" \"Another CI run in progress, please try again in 15 minutes\""
.format(request_ctx['id'], request_ctx['repo'], request_ctx['body']))
print("Can't run CI, another CI run in progress, placing in the queue")
log_access_granted(request_ctx, "CI busy, placing request in queue")
duplicate_req = False
busy_msg = "Duplicate request already waiting, ignoring message"
# make sure we don't bypass thread safety
with queue_lock:
for req in ci_request_list:
# make sure this is the same PR
if req['id'] == request_ctx['id'] and req['repo'] == request_ctx['repo']:
duplicate_req = True
break
if not duplicate_req:
# don't have this request yet, put in queue
ci_request_list.append(request_ctx)
busy_msg = "Another CI run in progress, adding request to the end of the list"
# ending frees the lock
os.system("./ci_busy.sh config {} \"{}\" \"{}\" \"{}\""
                  .format(request_ctx['id'], request_ctx['repo'], request_ctx['body'], busy_msg))
    else:
        log_access_granted(request_ctx, "Running CI")
        os.system("./manager.sh config {} \"{}\" \"{}\"".format(request_ctx['id'], request_ctx['repo'], request_ctx['body']))
        # no manager running yet, put in the list (presumably the top)
        with queue_lock:
Member: Nuke this branch, combine with above.

            ci_request_list.append(request_ctx)

    return jsonify({"status": "ONLINE"})

@@ -251,8 +284,15 @@ def parse_config(cfg_name):
cfg_name = sys.argv[4]

parse_config(cfg_name)

secret = decrypt_secret()
queue_lock = threading.Lock()
ci_request_list = []

# dedicate thread to check for and run new requests
poll_request = threading.Thread(target=poll_request_list, args=[])
# run as daemon to catch ^C termination
poll_request.daemon = True
poll_request.start()

logging.info("Starting the CI service")
app.run(host=host, port=port)