proxy doesn't know if karma webserver runs on different port than originally configured #1476
I would suggest adding an event here https://github.com/karma-runner/karma/blob/canary/lib/server.js#L131 like this:

```js
webServer.on('listening', function () {
  self.emit('webserver_ready')
})
```

and then listening for this event on the emitter, so that the proxy is only started once it has been emitted.
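A self-contained sketch of how that wiring could look; `self` and `webServer` mirror the names in the suggestion above, and `startProxy` is a hypothetical hook, not a karma API:

```js
var http = require('http');
var EventEmitter = require('events').EventEmitter;

// Stand-ins: `self` plays the role of the karma Server (an EventEmitter),
// `webServer` is its internal http server.
var self = new EventEmitter();
var webServer = http.createServer(function (req, res) { res.end('ok'); });

webServer.on('listening', function () {
  self.emit('webserver_ready');
});

self.on('webserver_ready', function () {
  var port = webServer.address().port;   // the port that was actually bound
  console.log('safe to start the proxy against port', port);
  // startProxy(port);                    // hypothetical hook
});

webServer.listen(0);                      // 0: let the OS pick a free port
```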
Also experienced this just now; it took some time to figure out what went wrong, hence my follow-up question: is it possible to tell karma to fail if the port is in use?
@metamatt I know it's been a while, but did you end up using a workaround of some sort for this? Every time I get a failure it takes me a while to remember that it's the proxy failing because of the different port. It's a bugger.
Got the same problem and it's very annoying, especially if you run tests with async(), which will fail when proxied requests are denied.
If you're using gulp (or another task runner) to run karma, you can work around this by manually finding an open port and then specifying that in the config. What I'm doing:
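A minimal sketch of that kind of setup (not the commenter's original snippet); it assumes the portfinder package and karma's Node API, and the `/app/` proxy path is only an illustrative placeholder:

```js
// find-port-and-run.js — pick a free port first, then hand it to both karma
// and the proxy config so they can never disagree.
var portfinder = require('portfinder');
var Server = require('karma').Server;

portfinder.basePort = 9050;              // start searching from the usual port
portfinder.getPort(function (err, port) {
  if (err) throw err;
  new Server({
    configFile: __dirname + '/karma.conf.js',
    port: port,                          // karma's webserver listens here...
    proxies: {                           // ...and the proxies target the same port explicitly
      '/app/': 'http://localhost:' + port + '/base/app/'
    }
  }, function (exitCode) {
    process.exit(exitCode);
  }).start();
});
```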
Check this link to avoid 404 errors when karma uses a different port.
I encountered this bug when running tests concurrently (using different instances of Karma). An easy way to reproduce it is to launch your instance of karma (one that uses proxies) while another instance is already running on the same port.
I'm using karma to test a project loaded by systemjs. For lack of a better way, my karma config depends heavily on use of proxies to rewrite paths (but I'm proxying back to karma's internal webserver, not a different webserver).
I'm also running these tests via a parallel test runner that runs several copies of karma. The first karma starts up on the port configured in karma.conf.js (9050 for me). The second one gets EADDRINUSE, logs "port 9050 in use", and happily rolls onto port 9051.
The problem is that karma configures the proxy with the port specified via config (thanks to #1007), falling back to the default 443/80 if the config doesn't specify one. And karma configures the proxy library before it calls listen on its internal webserver, that is, before it discovers the port conflict and starts the webserver on a different port. And when it does discover the port conflict, it never tells the proxy library.
The result in my parallel test runner scenario is this: I start N copies of karma, they start their internal webservers on N different ports starting at 9050, but they've all configured their proxy to proxy to 9050. So all the proxied requests go to the webserver for the first karma instance to start. This actually works out better than you might expect, since they're all serving the same code... until the first karma test passes and exits (the batch tests run in single-run mode), and the rest of the tests keep trying to talk to the webserver that just exited.
This is easy to repro without the parallel scenario; any port conflict will do:
```
netcat -lp [the port the karma config uses]
```
You'll see karma itself log "port in use" and increment until it finds an open port, but continue throwing proxied requests at the originally configured port.
A brute force fix for this is to defer evaluation of the proxyConfig port until it's actually needed, by which time karma's listening on the right port. A cleaner fix that doesn't rely on getters would probably have the outer glue code that starts everything wait until the internal webserver owns a port and then tell the proxy code explicitly to use that port. The initialization order and data flow are not very explicit thanks to DI, so it's not immediately obvious to me how to accomplish that.
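Not karma's actual code, but a generic Node sketch of the getter idea; `proxyOptions` stands in for whatever options object karma hands to the proxy library:

```js
var http = require('http');

// Stand-ins for karma internals: the webserver and the parsed proxy options.
var webServer = http.createServer(function (req, res) { res.end('ok'); });
var proxyOptions = { host: 'localhost', port: 9050 };   // port as read from the config

// Defer evaluation: turn `port` into a getter, so it is only read when a proxied
// request actually needs it, by which time the server has bound its real port.
Object.defineProperty(proxyOptions, 'port', {
  get: function () { return webServer.address().port; }
});

webServer.listen(0, function () {        // 0: let the OS pick a port, like the fallback
  console.log('proxy would now target port', proxyOptions.port);
});
```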