This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

Maximum timeout for python functions #433

Merged (11 commits) on Nov 23, 2017

Conversation

andresmgot
Contributor

@andresmgot andresmgot commented Nov 16, 2017

Issue Ref: #365

Summary:

This PR adds a timeout handler for the Python runtime. If a function takes more than 3 seconds to execute, the subprocess gets killed. The timeout is configurable via the FUNC_TIMEOUT environment variable.
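The approach described above can be sketched as follows. This is a minimal illustration of a subprocess-based timeout, not the actual kubeless.py code; `run_with_timeout` and `DEFAULT_TIMEOUT` are hypothetical names.

```python
import multiprocessing
import os

# The default of 3 seconds can be overridden with FUNC_TIMEOUT
# (names here are illustrative, not the actual kubeless.py code).
DEFAULT_TIMEOUT = float(os.getenv('FUNC_TIMEOUT', 3))


def run_with_timeout(func, args=(), timeout=DEFAULT_TIMEOUT):
    """Run func(*args) in a subprocess; kill it if it exceeds timeout."""
    queue = multiprocessing.Queue()

    def worker():
        queue.put(func(*args))

    proc = multiprocessing.Process(target=worker)
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        # Unlike a thread, a subprocess can be terminated cleanly.
        proc.terminate()
        proc.join()
        raise TimeoutError('function exceeded %ss' % timeout)
    return queue.get()
```

Terminating a subprocess (rather than a thread) is what makes it possible to actually free the container's resources when a function hangs.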

Also, to avoid duplicating the kubeless.py files in the python-2.7 and python-3.4 folders, they are now hard links.

TODOs:

@murali-reddy
Contributor

The FUNC_TIMEOUT env is not being set in the deployment spec. How can the timeout be configured?

@andresmgot
Contributor Author

The idea is to execute kubeless function deploy --env FUNC_TIMEOUT=10 .... It is not specified in the deployment spec because the function uses a default value if the variable is not set.
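The default-value behaviour amounts to a simple environment lookup with a fallback. A hypothetical snippet, not the actual kubeless.py code:

```python
import os

# If FUNC_TIMEOUT is not set in the deployment, fall back to 3 seconds.
timeout = float(os.getenv('FUNC_TIMEOUT', 3))
```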

@murali-reddy
Contributor

I see, I was not aware of the --env flag. Changes look good to me. Please document the behaviour.

@anguslees
Contributor

Multiprocessing is a pretty heavyweight way to add a timeout: you're forking a new process and reserialising the request and response objects on every function call.

A. Do we care about performance?
B. Can we just do this with a thread? Or, if we have to handle the case where the entire process is deadlocked, perhaps we can run the watcher in a persistent separate process, so that we only pass the timeout, and not the much larger request/response bodies, across processes.

@murali-reddy
Contributor

murali-reddy commented Nov 17, 2017

With Python's Global Interpreter Lock and bottle, threading is of limited use anyway. Timeout aside, we do need a better way to achieve parallel execution.

@andresmgot
Contributor Author

I will try to do it with threads.

@murali-reddy
Contributor

Perhaps this is out of the context of this bug. I opened #435 to consider using Python's multiprocessing.

@andresmgot
Contributor Author

andresmgot commented Nov 20, 2017

@anguslees I tried with threads, but there is no official way of terminating a thread (and it seems that doing so is considered wrong: https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python). If we don't terminate the function, it will keep using the container's resources until it fails.

I also tried using a Timer, but it has the issue that the timeout callback is executed in a different thread, so the main thread keeps executing the function anyway.
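The Timer limitation can be illustrated with a small sketch: the callback fires in its own thread, while the main thread keeps running the (simulated) long function to completion regardless.

```python
import threading
import time

events = []


def on_timeout():
    # Runs in a separate thread; it cannot interrupt the work below.
    events.append('timeout fired')


timer = threading.Timer(0.1, on_timeout)
timer.start()
time.sleep(0.3)  # stands in for a long-running function
events.append('function still finished')
timer.cancel()   # no-op here, since the timer already fired
```

Both events are recorded: the timeout fires, yet the "function" still runs to the end, which is exactly why a Timer cannot enforce the limit.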

I am reverting to the previous approach with processes; it is the only way of properly terminating the function if it takes more than N seconds.

@andresmgot andresmgot merged commit 4c0c40c into vmware-archive:master Nov 23, 2017
sayanh pushed a commit to sayanh/kubeless that referenced this pull request Jan 17, 2018
* Maximum timeout for python functions

* Fix image reference

* Fix unit test

* Fix pubsub34 test and fix it

* Use threads instead of processes

* Adapt unit test

* Revert "Adapt unit test"

This reverts commit b2434c9.

* Revert "Use threads instead of processes"

This reverts commit 045dcdd.

* Increase timeout to 180s

* Fix unit test