| Commit message |
|
With LinuxThreads, poll() is unfortunately not a cancellation point. It
seems that poll() gets woken up after cancellation, but we must actively
check for cancellation before re-entering poll() to properly shut down the
watcher thread.
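The fix amounts to an explicit cancellation point before each poll()
iteration. A minimal sketch of the idea (not the actual watcher code),
using plain POSIX pthread_testcancel() instead of strongSwan's threading
wrappers:

  #include <poll.h>
  #include <pthread.h>

  /* illustrative watcher loop: with LinuxThreads, poll() itself may not
   * act as a cancellation point, so test explicitly before blocking */
  static void *watch(void *arg)
  {
      struct pollfd *pfds = arg;

      while (1)
      {
          pthread_testcancel();
          if (poll(pfds, 1, -1) < 0)
          {
              break;
          }
          /* ... dispatch read/write/except callbacks ... */
      }
      return NULL;
  }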
|
References #840.
|
While they are usually not included in a normal strongSwan build, the XPC
header indirectly defines these Mach types. To build charon-xpc, which uses
both XPC and strongSwan includes, we have to redefine these types.
|
poll() may return POLLHUP or POLLNVAL for the given file descriptors. To
handle these properly, we signal them to the EXCEPT watcher state, if one is
registered. If not, we invoke the read/write callbacks, so they can fail
properly when trying to read from or write to the file descriptor.
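Roughly, the dispatch logic described above looks as follows; this is an
illustrative sketch, not the actual watcher_t implementation, and the
callback names are placeholders:

  #include <poll.h>

  typedef struct {
      struct pollfd pfd;
      void (*on_read)(int fd);
      void (*on_write)(int fd);
      void (*on_except)(int fd);   /* registered EXCEPT watcher, if any */
  } watched_fd_t;

  static void dispatch(watched_fd_t *w)
  {
      short rev = w->pfd.revents;

      if (rev & (POLLHUP | POLLNVAL))
      {
          if (w->on_except)
          {   /* signal the error condition to the EXCEPT state */
              w->on_except(w->pfd.fd);
          }
          else
          {   /* no EXCEPT callback: let read/write fail on the bad FD */
              if (w->on_read)  { w->on_read(w->pfd.fd); }
              if (w->on_write) { w->on_write(w->pfd.fd); }
          }
          return;
      }
      if ((rev & POLLIN) && w->on_read)
      {
          w->on_read(w->pfd.fd);
      }
      if ((rev & POLLOUT) && w->on_write)
      {
          w->on_write(w->pfd.fd);
      }
  }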
|
This allows a user to check if the watcher is actually running, and
potentially perform read operations directly instead of relying on the
watcher.
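For example, a reader could fall back to a direct blocking read while no
watcher thread is active; get_state() and the state names below are
illustrative stand-ins for the accessor added here:

  #include <unistd.h>

  /* illustrative stand-ins; the real accessor is a method on watcher_t */
  typedef enum { WATCHER_STOPPED, WATCHER_QUEUED, WATCHER_RUNNING } watcher_state_t;
  watcher_state_t get_state(void);

  ssize_t read_data(int fd, void *buf, size_t len)
  {
      if (get_state() == WATCHER_STOPPED)
      {
          /* no watcher running: read directly instead of registering fd */
          return read(fd, buf, len);
      }
      /* otherwise register fd with the watcher and read from its callback */
      return 0;
  }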
|
If file descriptors get added and removed in rapid succession, the active
watcher thread might not take notice of it and continue running. However,
add() spawns a new watcher thread whenever a file descriptor is added to an
empty set. This could result in multiple watcher threads, which is fixed by
properly checking for an already running watcher.
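Reduced to its core, the check looks roughly like this (mutex, flag and
function names are illustrative, not the actual watcher_t members):

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static bool watcher_running = false;

  void *watch(void *arg);   /* watcher thread main loop, defined elsewhere */

  /* called for every file descriptor added to the watched set */
  void add_fd(int fd)
  {
      pthread_mutex_lock(&lock);
      /* ... insert fd into the watched set ... */

      /* spawn a watcher only if none is running; without this check a
       * rapid add/remove/add sequence could start a second thread while
       * the first one is still alive */
      if (!watcher_running)
      {
          pthread_t thread;

          watcher_running = true;
          pthread_create(&thread, NULL, watch, NULL);
          pthread_detach(thread);
      }
      pthread_mutex_unlock(&lock);
  }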
|
Instead of a pipe we use a TCP socketpair (we can't select() a _pipe()),
and Winsock2 send/recv functions instead of read/write.
Currently supported (and required) are file descriptors provided by Winsock
only; we might use a separate mechanism for traditional file handles if
required (or switch to Windows events and WaitForMultipleObjects) in a
future version.
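The usual way to emulate socketpair() with Winsock is to connect two TCP
sockets over the loopback interface. A condensed sketch of that technique
(error handling trimmed, WSAStartup() assumed to have been called already;
this is not the actual strongSwan code):

  #include <winsock2.h>
  #include <string.h>

  /* create a connected pair of loopback TCP sockets usable with select()
   * and send()/recv(); returns 0 on success */
  int tcp_socketpair(SOCKET fds[2])
  {
      SOCKET listener = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      int len = sizeof(addr);

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
      addr.sin_port = 0;                    /* let the OS pick a port */

      bind(listener, (struct sockaddr*)&addr, sizeof(addr));
      listen(listener, 1);
      getsockname(listener, (struct sockaddr*)&addr, &len);

      fds[0] = socket(AF_INET, SOCK_STREAM, 0);
      connect(fds[0], (struct sockaddr*)&addr, sizeof(addr));
      fds[1] = accept(listener, NULL, NULL);
      closesocket(listener);

      return (fds[0] != INVALID_SOCKET && fds[1] != INVALID_SOCKET) ? 0 : -1;
  }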
|
During shutdown, waiting for callbacks might never complete, as queued
callbacks might not get executed under certain conditions. Not a clean fix,
but it works well enough for now.
Seen on Windows in vici tests.
|
While we don't add FDs with an active callback to the watched FD set, we
can still get notifications for active callbacks due to the asynchronous
processing of the same.
To avoid queueing multiple callbacks, we check for queued callbacks before
activating new ones.
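Conceptually the check looks like this (entry_t and the flag name are
illustrative, not the actual watcher entries):

  #include <stdbool.h>

  typedef struct {
      int fd;
      bool in_callback;   /* a callback job for this FD is queued or running */
  } entry_t;

  /* invoked when select() reports activity for an entry */
  static void activate(entry_t *entry)
  {
      if (entry->in_callback)
      {
          /* a callback is already queued for this FD; queueing another
           * one would deliver the same event twice */
          return;
      }
      entry->in_callback = true;
      /* ... queue the callback job for asynchronous processing ... */
  }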
|
During shutdown, cancel queued jobs earlier to avoid cleanup functions
accessing infrastructure that is not available anymore, for example the
watcher.
|
This should make sure we refresh the FD set if a user closes an FD they
just removed. Some select() implementations seem to complain about the bad
FD before signaling the notification pipe.
|
During daemon shutdown, some idle threads might be lingering around even
if set_threads(0) has already been called. To avoid any races, we enforce
synchronous execution of the job.
|
Partially based on an old patch by Adrian-Ken Rueegsegger.
|
Just queueing is problematic, as all threads might be busy waiting for
events that the queued (but never executed) job would deliver.
|
If all worker threads are busy and waiting for an event, we must ensure
that a job delivering that event gets executed. This new method has this
property for CRITICAL jobs: it uses a worker thread if one is available,
but executes the job directly if not.
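The gist of the new method, as a simplified sketch (the types and fields
are stand-ins loosely modeled on processor_t, not the real implementation):

  typedef struct job_t job_t;
  struct job_t {
      void (*execute)(job_t *this);
      void (*destroy)(job_t *this);
  };

  typedef struct {
      int idle_threads;
      int desired_threads;
      /* ... job queues, mutex ... */
  } processor_t;

  void queue_job(processor_t *this, job_t *job);   /* existing queueing path */

  /* sketch of execute_job(): hand the job to a worker if one is idle,
   * otherwise run it in the calling thread so a CRITICAL job delivering
   * an awaited event can never get stuck behind busy workers */
  void execute_job(processor_t *this, job_t *job)
  {
      if (this->idle_threads > 0 && this->desired_threads > 0)
      {
          queue_job(this, job);
      }
      else
      {
          job->execute(job);
          job->destroy(job);
      }
  }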
|
Use non-blocking I/O on the read end of the notify pipe. This also makes sure
the read does not block should select() signal data while there is none.
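For reference, putting only the read end into non-blocking mode looks
like this (illustrative helper, not the actual watcher code):

  #include <fcntl.h>
  #include <unistd.h>

  /* create the notification pipe and make its read end non-blocking, so
   * a spurious select() wakeup cannot block the watcher in read() */
  int create_notify_pipe(int fds[2])
  {
      int flags;

      if (pipe(fds) != 0)
      {
          return -1;
      }
      flags = fcntl(fds[0], F_GETFL);
      return fcntl(fds[0], F_SETFL, flags | O_NONBLOCK);
  }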
|
This is important during shutdown, where we might need to signal some FDs while
all idle threads are gone already.
|
During daemon shutdown, users might call remove() after
processor.set_threads(0) has been called. This is problematic, as a watch
event might be unable to signal completion when no threads are available
anymore. Work around this issue by cancelling waiters once
processor.cancel() has been called.
|
If a lock is held when queue_job() is called and the same lock is
required during the destruction of a job, holding the internal lock
in the processor while calling destroy() could result in a deadlock.
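The general pattern that avoids this kind of deadlock is to defer the
destroy() call until the internal lock has been released; a generic sketch,
not the actual processor_t code:

  #include <pthread.h>
  #include <stdbool.h>

  typedef struct job_t job_t;
  struct job_t {
      void (*destroy)(job_t *this);
  };

  static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  static bool shutting_down;

  void enqueue(job_t *job);   /* add the job to the internal queue */

  void queue_job(job_t *job)
  {
      job_t *reject = NULL;

      pthread_mutex_lock(&mutex);
      if (shutting_down)
      {   /* don't enqueue, but don't destroy under the internal lock either */
          reject = job;
      }
      else
      {
          enqueue(job);
      }
      pthread_mutex_unlock(&mutex);

      if (reject)
      {   /* destroy() may acquire caller-side locks, so call it unlocked */
          reject->destroy(reject);
      }
  }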
|
This avoids race conditions between calls to cancel() and jobs that want
to be rescheduled. If jobs were able to reschedule themselves, it would
theoretically be possible that two worker threads have the same job
assigned (the one currently executing the job and the one executing the
same but rescheduled job, if it is already time to execute it), which
means that cancel() could be called twice for that job.
Creating a new job based on the current one and rescheduling that one is
also OK, but rescheduling itself is more efficient for jobs that need to
be executed often.
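In this model a job asks to be rescheduled through its return value
instead of queueing a copy of itself; a simplified sketch, with the requeue
values as stand-ins for the job interface:

  /* simplified stand-ins for the job interface's requeue instruction */
  typedef enum {
      JOB_REQUEUE_NONE,     /* done; the processor destroys the job */
      JOB_REQUEUE_DIRECT,   /* execute this very job again */
  } job_requeue_t;

  /* a periodic job: instead of queueing a copy of itself (which races
   * with cancel()), it simply asks the processor to requeue it */
  static job_requeue_t send_keepalive(void *data)
  {
      /* ... do the actual work ... */
      return JOB_REQUEUE_DIRECT;
  }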
|
This ensures that no threads are active when plugins and the rest of the
daemon are unloaded.
callback_job_t was simplified a lot in the process as its main
functionality is now contained in processor_t. The parent-child
relationships were abandoned as these were only needed to simplify job
cancellation.
|
Jobs are now destroyed by the processor, but they are allowed to
reschedule themselves. That is, parts of the reschedule functionality
already provided by callback_job_t are moved to the processor. Not yet
fully supported are JOB_REQUEUE_DIRECT and cancelling jobs.
Note: job_t.destroy() is now called not only for queued jobs but also
after execution or cancellation of a job. job_t.status can be used to
decide what to do in that method.
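In practice a destroy() method can then look roughly like this (the status
values and fields are illustrative placeholders, not the real job_t API):

  #include <stdlib.h>

  /* illustrative status values; the real ones are part of job_t */
  typedef enum {
      JOB_STATUS_QUEUED,
      JOB_STATUS_EXECUTED,
      JOB_STATUS_CANCELED,
  } job_status_t;

  typedef struct {
      job_status_t status;   /* maintained by the processor */
      void *data;
  } private_job_t;

  /* destroy() is now invoked for queued, executed and cancelled jobs
   * alike, so it can branch on the status if cleanup differs */
  static void destroy(private_job_t *this)
  {
      if (this->status == JOB_STATUS_QUEUED)
      {
          /* never executed: release anything the job would have consumed */
      }
      free(this->data);
      free(this);
  }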
|
Warnings like
comparison of unsigned expression < 0 is always false
are reported with -Wextra when enum types that are compiled to an
unsigned type (which is up to the compiler) are checked for negativity.
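A minimal example of the pattern that triggers the warning, and one way to
keep an explicit check without it (illustrative, not necessarily the change
made here):

  typedef enum {
      SUCCESS,
      FAILED,
  } status_t;

  int check(status_t status)
  {
      /* if the compiler picks an unsigned underlying type for status_t,
       * "if (status < 0)" is always false and -Wextra warns about it;
       * comparing through a signed cast avoids the warning */
      if ((int)status < 0)
      {
          return -1;
      }
      return 0;
  }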
|
Mostly found by 'codespell'.
|
During destruction the main thread locks the mutex in processor_t and
waits on a condvar for the threads to terminate. Because the mutex also
has to be locked to decrement the thread count, the condvar cannot be
signaled before doing that, as otherwise the main thread might already
be waiting to join the threads while holding the mutex, thus causing
a deadlock.
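The resulting termination handshake, reduced to a self-contained sketch
with illustrative names:

  #include <pthread.h>

  static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t thread_terminated = PTHREAD_COND_INITIALIZER;
  static int total_threads;

  /* last thing a worker does: decrement the count and signal while still
   * holding the mutex */
  static void thread_exiting(void)
  {
      pthread_mutex_lock(&mutex);
      total_threads--;
      pthread_cond_signal(&thread_terminated);
      pthread_mutex_unlock(&mutex);
  }

  /* destruction path in the main thread: wait until all workers have
   * decremented the count, release the mutex, then join them */
  static void wait_for_termination(void)
  {
      pthread_mutex_lock(&mutex);
      while (total_threads > 0)
      {
          pthread_cond_wait(&thread_terminated, &mutex);
      }
      pthread_mutex_unlock(&mutex);
      /* only now is it safe to pthread_join() the worker threads */
  }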
|
none active