When delegating tests (and ports for those tests) to workers, we start at BASE_PORT_START and increment by BASE_PORT_SPACING, checking that those ports are available.
Setting a smaller BASE_PORT_RANGE would accomplish a similar thing, but the port availability check would still move forward from the current port toward MAX_PORT. Always checking forward is the fastest way to get a free port range, since we can generally assume the next range is available.
However, since there is a chance that zombie child processes left over from previous worker(s) are still holding ports, we cannot enforce a hard limit of BASE_PORT_RANGE (which would ideally be [port range per worker] * [worker count]).
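For illustration, here is a minimal sketch of that forward-scan strategy. The constant names follow the ones used above, but the function names (isPortFree, nextFreeRange) and the spacing value are hypothetical and not magellan's actual implementation:

```typescript
// Sketch of forward-scan port range allocation (illustrative names, not magellan's API).
import * as net from "net";

const BASE_PORT_START = 12000;  // assumed default, per the report below
const BASE_PORT_SPACING = 3;    // hypothetical ports reserved per worker
const MAX_PORT = 65535;

// Resolves true if nothing is currently listening on the port.
function isPortFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const server = net.createServer();
    server.once("error", () => resolve(false));
    server.once("listening", () => server.close(() => resolve(true)));
    server.listen(port, "127.0.0.1");
  });
}

// Walk forward from `cursor` in steps of BASE_PORT_SPACING until a fully free
// range is found; the cursor only ever moves forward, so the next range is
// usually assumed to be available.
async function nextFreeRange(cursor: number): Promise<number> {
  for (let start = cursor; start + BASE_PORT_SPACING <= MAX_PORT; start += BASE_PORT_SPACING) {
    const checks = await Promise.all(
      Array.from({ length: BASE_PORT_SPACING }, (_, i) => isPortFree(start + i))
    );
    if (checks.every(Boolean)) return start;
  }
  throw new Error("No free port range below MAX_PORT");
}
```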
When re-checking the ports, we need to send an HTTP request to check them (unless there is a better way to handle this), because, as mentioned previously, a zombie process might still be holding them.
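As a rough sketch of what such a check could look like, the snippet below actively connects to the port instead of only trying to bind it. The function name and timeout are illustrative, and an HTTP GET could be substituted for the raw TCP connect:

```typescript
// Detect a port still held by a (possibly zombie) process by connecting to it.
import * as net from "net";

function isPortHeld(port: number, timeoutMs = 500): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ port, host: "127.0.0.1" });
    const done = (held: boolean) => { socket.destroy(); resolve(held); };
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => done(true));   // something answered: port is held
    socket.once("timeout", () => done(false));  // nothing responded in time
    socket.once("error", () => done(false));    // e.g. ECONNREFUSED: port looks free
  });
}
```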
One solution is to scan from BASE_PORT_START on every port check. Usually, when there is no zombie process, magellan releases ports range by range as each worker finishes, so once a freed range is located it can be assigned to the next worker directly. But when multiple workers are running, there is a chance this solution requires more port checks (more HTTP requests to be sent) than the current solution.
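A sketch of that alternative, reusing the hypothetical isPortFree() helper and constants from the earlier sketch: rescanning from BASE_PORT_START lets ranges released by finished workers be reused, at the cost of extra checks while earlier ranges are still occupied.

```typescript
// Alternative strategy: always rescan from BASE_PORT_START so released ranges
// are reused. Reuses isPortFree(), BASE_PORT_START, BASE_PORT_SPACING, MAX_PORT
// from the sketch above.
async function nextFreeRangeFromStart(): Promise<number> {
  for (let start = BASE_PORT_START; start + BASE_PORT_SPACING <= MAX_PORT; start += BASE_PORT_SPACING) {
    let free = true;
    for (let i = 0; i < BASE_PORT_SPACING && free; i++) {
      free = await isPortFree(start + i);  // one check (or HTTP request) per port
    }
    if (free) return start;                // first released or never-used range wins
  }
  throw new Error("No free port range below MAX_PORT");
}
```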
Did we test whether magellan works on a Windows machine? I don't have logs to share, as my machine was recently changed to a Mac and I am unable to reproduce the issue there. But just a few days ago, when I ran the nightwatch boilerplate, I saw a connection refused error: 127.0.0.1:12000. I believe 12000 is the default BASE_PORT_START.
This will minimize the need to open new ports per worker.