| Commit message | Author | Age | Files | Lines |
| |
This can happen if a stream is used exclusively in blocking mode (the FD is
never registered with the watcher, but is removed in the stream's destructor
just in case it ever was - doing this conditionally would require an
additional flag in streams). There may be no thread reading from the read
end of the notify pipe (e.g. in starter), causing the write to the notify
pipe to block once the pipe is full. In any case, doing a relatively
expensive FD update is unnecessary if nothing has changed.
Fixes #1453.
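As a rough sketch of the idea (names and data structures here are illustrative, not the actual stream/watcher code), remove() only writes to the notify pipe when the FD was really registered:

```c
/*
 * Illustrative sketch: remove() only writes to the notify pipe - and thus
 * triggers an FD set rebuild - if the FD was actually registered.
 * Locking is omitted for brevity.
 */
#include <stdlib.h>
#include <unistd.h>

struct entry {
	int fd;
	struct entry *next;
};

struct watcher {
	struct entry *entries;  /* registered FDs */
	int notify[2];          /* notify pipe: [0] read end, [1] write end */
};

void watcher_remove(struct watcher *w, int fd)
{
	struct entry **ep = &w->entries;
	int found = 0;

	while (*ep)
	{
		if ((*ep)->fd == fd)
		{
			struct entry *e = *ep;
			*ep = e->next;
			free(e);
			found = 1;
			break;
		}
		ep = &(*ep)->next;
	}
	if (found)
	{	/* only wake the watcher thread if something actually changed */
		char c = 1;
		(void)write(w->notify[1], &c, 1);
	}
}
```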
| |
Since the FD set may get rebuilt quite often, this change avoids having to
allocate memory just to enumerate the registered FDs.
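A minimal sketch of the approach, assuming the registered FDs live in an intrusive singly linked list (illustrative names, not the actual implementation): the pollfd array is rebuilt by walking the list directly, without allocating an enumerator:

```c
/*
 * Illustrative sketch: rebuild the pollfd array by walking an intrusive
 * linked list of registrations directly, without allocating an enumerator.
 */
#include <poll.h>
#include <stddef.h>

struct entry {
	int fd;
	short events;           /* POLLIN, POLLOUT, ... */
	struct entry *next;
};

size_t build_pollfds(struct entry *list, struct pollfd *pfd, size_t max)
{
	size_t count = 0;
	struct entry *e;

	for (e = list; e && count < max; e = e->next)
	{
		pfd[count].fd = e->fd;
		pfd[count].events = e->events;
		pfd[count].revents = 0;
		count++;
	}
	return count;
}
```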
| |
With LinuxThreads, poll() is unfortunately not a cancellation point. It seems
that poll() gets woken up after cancellation, but we must actively check for
cancellation before re-entering poll() to properly shut down the watcher
thread.
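A simplified illustration of such a loop, assuming a pthread-based watcher (not the actual strongSwan code): pthread_testcancel() provides the explicit cancellation point before poll() is entered again:

```c
/*
 * Illustrative watcher loop: where poll() is not a cancellation point
 * (e.g. LinuxThreads), test for cancellation explicitly before blocking
 * in poll() again.
 */
#include <poll.h>
#include <pthread.h>
#include <stddef.h>

void *watch(void *arg)
{
	struct pollfd *pfd = arg;   /* FDs registered with the watcher */

	while (1)
	{
		pthread_testcancel();   /* explicit cancellation point */

		if (poll(pfd, 1, -1) > 0)
		{
			/* dispatch pfd->revents to the registered callbacks */
		}
	}
	return NULL;
}
```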
| |
poll() may return POLLHUP or POLLNVAL for given file descriptors. To handle
these properly, we signal them to the EXCEPT watcher state, if registered. If
not, we call the read/write callbacks so they can fail properly when trying
to read from or write to the file descriptor.
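In terms of a plain poll() dispatcher, the handling could look roughly like this (the callback types and entry layout are illustrative assumptions, not the real watcher internals):

```c
/*
 * Illustrative dispatch, with simplified entry/callback types: error
 * conditions go to the EXCEPT callback if one is registered, otherwise the
 * read/write callbacks run and are expected to fail cleanly on the bad FD.
 */
#include <poll.h>
#include <stdbool.h>

typedef bool (*cb_t)(void *data, int fd);

struct entry {
	int fd;
	cb_t on_read, on_write, on_except;
	void *data;
};

static void dispatch(struct entry *e, short revents)
{
	if (revents & (POLLHUP | POLLNVAL | POLLERR))
	{
		if (e->on_except)
		{
			e->on_except(e->data, e->fd);
			return;
		}
		/* no EXCEPT handler: let read/write notice the failure */
		if (e->on_read)
		{
			e->on_read(e->data, e->fd);
		}
		if (e->on_write)
		{
			e->on_write(e->data, e->fd);
		}
		return;
	}
	if ((revents & POLLIN) && e->on_read)
	{
		e->on_read(e->data, e->fd);
	}
	if ((revents & POLLOUT) && e->on_write)
	{
		e->on_write(e->data, e->fd);
	}
}
```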
| |
This allows a user to check whether the watcher is actually running, and
potentially perform read operations directly instead of relying on the
watcher.
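A hypothetical caller-side sketch of how such a state query can be used; the enum values and function names below are placeholders, not necessarily the exact API:

```c
/*
 * Hypothetical caller-side use of such a state query; names are
 * placeholders, not necessarily the exact API.
 */
#include <unistd.h>

typedef enum {
	WATCHER_STOPPED,   /* no watcher thread queued or running */
	WATCHER_QUEUED,    /* watcher job queued, not yet running */
	WATCHER_RUNNING,   /* watcher thread is polling FDs */
} watcher_state_t;

static watcher_state_t watcher_get_state(void)
{
	return WATCHER_STOPPED;   /* stub: the real watcher tracks its state */
}

ssize_t read_with_fallback(int fd, void *buf, size_t len)
{
	if (watcher_get_state() == WATCHER_STOPPED)
	{
		/* no watcher available: perform a plain blocking read directly */
		return read(fd, buf, len);
	}
	/* otherwise register the FD with the watcher and wait for its callback */
	return -1;
}
```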
| |
If file descriptors get added and removed in rapid succession, the active
watcher thread might not take notice and simply continues running. However,
add() spawns a watcher thread whenever a file descriptor is added to an empty
set. This could result in multiple watcher threads, which is fixed by properly
checking whether a watcher is already running.
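The gist of the fix, sketched with plain pthreads (simplified, not the actual code): spawning is guarded by a flag protected by the same lock as the FD registrations.

```c
/*
 * Simplified sketch of the fix with plain pthreads: spawning the watcher is
 * guarded by a flag under the same lock that protects the FD registrations.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool watcher_active;   /* a watcher thread is queued or running */

static void *watch(void *arg)
{
	(void)arg;
	/* ... poll loop; clears watcher_active under the lock before exiting */
	return NULL;
}

void add_fd(int fd)
{
	pthread_mutex_lock(&lock);
	(void)fd;   /* ... insert fd into the registered set ... */
	if (!watcher_active)
	{	/* spawn exactly one watcher, even if FDs were added and removed
		 * in rapid succession before the previous thread noticed */
		pthread_t thread;

		if (pthread_create(&thread, NULL, watch, NULL) == 0)
		{
			watcher_active = true;
			pthread_detach(thread);
		}
	}
	pthread_mutex_unlock(&lock);
}
```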
| |
Instead of a pipe we use a TCP socket pair (a _pipe() can't be select()ed),
and the Winsock2 send()/recv() functions instead of read()/write().
Currently, only file descriptors provided by Winsock are supported (and
required); we might use a separate mechanism for traditional file handles (or
switch to Windows events and WaitForMultipleObjects) in a future version if
required.
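A sketch of how such a socket pair can be created with Winsock2 (illustrative, with minimal error handling; WSAStartup() is assumed to have been called already):

```c
/*
 * Winsock2 sketch of a connected TCP socket pair over loopback, usable as a
 * select()able notify channel. Minimal error handling, WSAStartup() assumed.
 */
#include <winsock2.h>
#include <string.h>

int tcp_socketpair(SOCKET sv[2])
{
	struct sockaddr_in addr;
	int len = sizeof(addr);
	SOCKET listener;

	listener = socket(AF_INET, SOCK_STREAM, 0);
	if (listener == INVALID_SOCKET)
	{
		return -1;
	}
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = 0;   /* let Winsock pick a free loopback port */

	if (bind(listener, (struct sockaddr*)&addr, sizeof(addr)) != 0 ||
		getsockname(listener, (struct sockaddr*)&addr, &len) != 0 ||
		listen(listener, 1) != 0)
	{
		closesocket(listener);
		return -1;
	}
	sv[0] = socket(AF_INET, SOCK_STREAM, 0);
	if (sv[0] == INVALID_SOCKET ||
		connect(sv[0], (struct sockaddr*)&addr, sizeof(addr)) != 0)
	{
		closesocket(listener);
		return -1;
	}
	sv[1] = accept(listener, NULL, NULL);
	closesocket(listener);
	return (sv[1] == INVALID_SOCKET) ? -1 : 0;
}
```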
| |
During shutdown, waiting for callbacks might never complete, as queued
callbacks might not get executed under certain conditions. Not a clean fix,
but it works well enough for now.
Seen on Windows in the vici tests.
| |
While we don't add FDs with an active callback to the watched FD set, we can
still get notifications for such FDs due to the asynchronous processing of
callbacks. To avoid queueing multiple callbacks for the same FD, we check for
already queued callbacks before activating new ones.
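Conceptually, the check looks like this (a simplified sketch with a hypothetical queue_callback_job() helper, not the actual code):

```c
/*
 * Conceptual sketch: a new callback job is only queued if none is queued or
 * currently running for that FD.
 */
#include <pthread.h>
#include <stdbool.h>

struct entry {
	int fd;
	bool in_callback;   /* callback job queued or currently executing */
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void queue_callback_job(struct entry *e)
{
	(void)e;   /* stub: the real code hands a job to the processor */
}

void notify_fd(struct entry *e)
{
	pthread_mutex_lock(&lock);
	if (!e->in_callback)
	{	/* at most one callback job per FD at any time */
		e->in_callback = true;
		queue_callback_job(e);
	}
	pthread_mutex_unlock(&lock);
	/* the job clears in_callback (under the lock) once it has completed */
}
```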
| |
This should make sure we refresh the FD set if a user closes an FD they just
removed. Some select() implementations seem to complain about the bad FD
before signaling the notification pipe.
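One way such a refresh can work - sketched here under the assumption that it is triggered from the select() error path, with hypothetical helper names - is to rebuild the FD set on EBADF instead of dispatching:

```c
/*
 * Illustrative loop with hypothetical helpers: if select() fails with EBADF
 * because a just-removed FD was closed before the notify pipe was read,
 * rebuild the FD set from the current registrations and retry.
 */
#include <sys/select.h>
#include <errno.h>

static int notify_rfd;   /* read end of the notify pipe, set up elsewhere */

static int rebuild_fdset(fd_set *fds)
{
	FD_ZERO(fds);
	FD_SET(notify_rfd, fds);
	/* a real implementation adds all currently registered FDs here */
	return notify_rfd;
}

void watch_loop(void)
{
	fd_set fds;
	int maxfd = rebuild_fdset(&fds);

	while (1)
	{
		fd_set readable = fds;

		if (select(maxfd + 1, &readable, NULL, NULL, NULL) == -1)
		{
			if (errno == EBADF || errno == EINTR)
			{	/* stale or closed FD: refresh the set and try again */
				maxfd = rebuild_fdset(&fds);
				continue;
			}
			return;
		}
		/* ... dispatch readable FDs, drain the notify pipe ... */
	}
}
```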
| |
Just queueing the job is problematic, as all threads might be busy waiting
for events that the queued (but never executed) job would deliver.
| |
Use non-blocking I/O on the read end of the notify pipe. This also makes sure
the read does not block if select() signals data while there actually is none.
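For example (a generic POSIX sketch, not the actual code), the read end can be switched to non-blocking right after creating the pipe:

```c
/*
 * Generic POSIX sketch: switch the read end of the notify pipe to
 * non-blocking right after creating it, so draining it can never block.
 */
#include <fcntl.h>
#include <unistd.h>

int create_notify_pipe(int notify[2])
{
	int flags;

	if (pipe(notify) != 0)
	{
		return -1;
	}
	flags = fcntl(notify[0], F_GETFL);
	if (flags == -1 ||
		fcntl(notify[0], F_SETFL, flags | O_NONBLOCK) == -1)
	{
		close(notify[0]);
		close(notify[1]);
		return -1;
	}
	return 0;
}
```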
| |
This is important during shutdown, where we might need to signal some FDs
while all idle threads are already gone.
| |
During daemon shutdown, users might call remove() after processor.set_threads(0)
has been called. This is problematic, as a watch event might be unable to
signal completion when no threads are available anymore. Work around this
issue by cancelling waiters once processor.cancel() has been called.
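A condensed sketch of the waiting logic (names are illustrative; the real code ties this to the watcher's internal condition variable and the processor):

```c
/*
 * Illustrative sketch: remove() waits for a pending callback only while the
 * processor can still run jobs; cancelling the processor releases waiters.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool callback_pending;     /* a callback job for the FD is in flight */
static bool processor_cancelled;  /* set once processor.cancel() was called */

void remove_fd(void)
{
	pthread_mutex_lock(&lock);
	/* ... unregister the FD ... */
	while (callback_pending && !processor_cancelled)
	{	/* wait for completion only while jobs can still be executed */
		pthread_cond_wait(&cond, &lock);
	}
	pthread_mutex_unlock(&lock);
}

void processor_cancel(void)
{
	pthread_mutex_lock(&lock);
	processor_cancelled = true;
	pthread_cond_broadcast(&cond);   /* release any waiting remove() calls */
	pthread_mutex_unlock(&lock);
}
```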