Event channel IO abstraction.
Goals: provide an IO abstraction layer that can be efficiently implemented using different techniques:
- select/poll/epoll
- aio_* (POSIX asynchronous IO)
- ec_*, the proposed new Linux event channel API.
- shared memory, IPC, etc.
The event channel is the central IO abstraction.
Async requests are posted to the channel as Events. When a request completes, its Event is returned from getEvent() with the data filled in.
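A rough sketch of what such an interface could look like - post() and getEvent() come from this design, the rest (class names, members) is illustrative only:

    #include <cstddef>

    // Sketch only: Event subclasses and members are hypothetical.
    class Event {
      public:
        virtual ~Event() {}
    };

    class ReadEvent : public Event {      // hypothetical async read request
      public:
        ReadEvent(int fd, void* buffer, size_t size);
        size_t bytesRead() const;         // valid once returned from getEvent()
    };

    class EventChannel {
      public:
        void post(Event& event);          // submit an async request
        Event* getEvent();                // block until some posted request completes
    };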
We provide synchronous APIs that wrap post(event) followed by a wait for getEvent(). On POSIX these APIs are implemented using user-level context switching, so we get a simple programming model with minimal blocking and kernel context switching.
Note: this means that code before and after an apparently synchronous call ''may execute in different threads''. Don't use thread-local
storage. The term "task" will denote the user-level execution context and we'll provide "task-local storage" that is carried with the user
context if we need it.
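For example, a synchronous read wrapper might look like the sketch below (ReadEvent is the hypothetical type above; dispatch of unrelated completed events is glossed over):

    // Sketch of a synchronous call built on post() + getEvent().
    size_t syncRead(EventChannel& channel, int fd, void* buffer, size_t size) {
        ReadEvent request(fd, buffer, size);
        channel.post(request);             // submit the async request
        // getEvent() suspends the current task until a request completes;
        // the task may resume on a different thread, so no thread-local state here.
        Event* done = channel.getEvent();  // simplification: assume it is our request
        (void)done;
        return request.bytesRead();
    }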
We can provide some simple in-process synchronization via the event channel to allow user-level tasks to block on application events.
Core concepts:
EventChannel:
- Worker threads loop getting and processing events.
- Async requests: post a request event; it is returned via getEvent() when the request completes.
- Notification: Threads can block on a notification event to be woken when some other thread posts that event.
Task:
- ucontext APIs for user-level context switch.
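A minimal, self-contained illustration of the raw ucontext mechanism (getcontext/makecontext/swapcontext); this is just the system API, not our task scheduler:

    #include <ucontext.h>
    #include <cstdio>

    static ucontext_t mainContext, taskContext;
    static char taskStack[64 * 1024];            // stack for the user-level task

    static void taskBody() {
        std::printf("running in the task context\n");
        // Returning from here resumes uc_link, i.e. mainContext.
    }

    int main() {
        getcontext(&taskContext);                // initialize from the current context
        taskContext.uc_stack.ss_sp = taskStack;
        taskContext.uc_stack.ss_size = sizeof(taskStack);
        taskContext.uc_link = &mainContext;      // where to go when taskBody returns
        makecontext(&taskContext, taskBody, 0);
        swapcontext(&mainContext, &taskContext); // switch to the task...
        std::printf("back in the main context\n"); // ...and back again
        return 0;
    }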
Linux ec_ + ucontext implementation:
- EventChannel is a thin facade over the native ec_* APIs.
- Tasks are scheduled onto threads.
- ideally our threads ''never block'' (but they can be preempted)
- when a thread hits a blocking point it suspends the current task and swaps to a ready task (see the sketch after this list).
- when the suspended task is unblocked (e.g. async IO completes) it becomes ready and will be picked up by another thread.
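A rough sketch of the blocking-point logic under these assumptions - Task, the ready queue and the function names are hypothetical, and locking around the queue is omitted:

    #include <ucontext.h>
    #include <deque>

    struct Task { ucontext_t context; };       // hypothetical task record

    static std::deque<Task*> readyQueue;       // runnable tasks (needs locking in reality)

    // Called on a worker thread when the running task hits a blocking point.
    void suspendAndSwitch(Task* blocked) {
        Task* next = readyQueue.front();       // pick any ready task
        readyQueue.pop_front();
        // Save the blocked task's registers/stack and jump into the ready task.
        // When the blocked task is resumed later it continues from this call,
        // possibly on a different worker thread.
        swapcontext(&blocked->context, &next->context);
    }

    // Called when the async request that blocked the task completes.
    void markReady(Task* task) {
        readyQueue.push_back(task);            // a worker thread will pick it up
    }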
Linux epoll + ucontext:
- Use traditional polling inside EventChannel.
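A minimal sketch of the kind of epoll loop the EventChannel could run internally (error handling and the mapping from file descriptors back to posted Events are omitted):

    #include <sys/epoll.h>

    // Register a file descriptor for read readiness.
    void addReadInterest(int epollFd, int fd) {
        epoll_event ev = {};
        ev.events = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(epollFd, EPOLL_CTL_ADD, fd, &ev);
    }

    // One pass of the polling loop: wait for readiness, then complete the
    // corresponding posted Events (lookup by fd not shown).
    void pollOnce(int epollFd) {
        epoll_event events[64];
        int n = epoll_wait(epollFd, events, 64, -1);   // block until something is ready
        for (int i = 0; i < n; ++i) {
            int readyFd = events[i].data.fd;
            // ... perform the non-blocking IO for the Event posted on readyFd
            // and hand the completed Event back through getEvent().
            (void)readyFd;
        }
    }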
APR portable implementation: only needs client support - simple blocking socket calls.
Computing thread pool size:
- Initial size based on available CPU parallelism.
- On Linux, /proc/cpuinfo? Any portable options?
- Thread pool grows automatically to avoid deadlock.
ThreadPool: Size should stay close to actual hardware parallelism + some delta due to pre-empted threads and thread-blocking
synchronization calls required in the event channel implementation itself.
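One fairly portable option for the initial size - sysconf(_SC_NPROCESSORS_ONLN) is available on Linux and most Unixes, though not strictly required by POSIX:

    #include <unistd.h>

    // Initial thread pool size: hardware parallelism plus a small delta to cover
    // threads that are pre-empted or blocked inside the event channel itself.
    int initialPoolSize(int delta) {
        long cpus = sysconf(_SC_NPROCESSORS_ONLN);   // online CPUs; -1 if unsupported
        if (cpus < 1) cpus = 1;                      // conservative fallback
        return static_cast<int>(cpus) + delta;
    }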
Questions:
- Does the thread pool need to shrink to reclaim resources?
- Is there a risk of unbounded growth? How to avoid without deadlock?