Requires: (import threading)
SISC provides a comprehensive library for executing Scheme code in multiple concurrent threads. This allows for simple handling of blocking I/O sources (such as network servers) and for parallel computation across multiple processors.
In addition, functions are provided to ensure mutually exclusive access to data (mutexes), to assign priorities to Scheme threads, and for inter-thread signaling and synchronization.
Care has been taken to ensure that Scheme code executing concurrently in two or more threads does not result in unpredictable behavior. Assuming the executing threads do not share data, executing code in multiple threads should behave just as executing the code in a single thread.
All threads executing in the system share some resources. The top-level and symbolic environments are shared by all threads. If a thread makes a change to these environments, the change will be visible in all other threads. Unless changed by the thread, all threads share the same console input and output ports. As threads originate from a thunk created in the primordial thread or a child thread, the lexical environment captured by the thunk may include some lexical variables from the parent thread. These variables will be visible by both the parent and child threads. The lexical environments created in an executing thread are visible by that thread only (unless that thread spawns a child thread whose thunk binds its parent's lexicals).
Some resources can be shared but may also be distinct from thread to thread. These resources are inherited from the parent thread, but may be changed by the child or parent without affecting the other. The dynamic environment (including the console input and output ports, parameters, etc.) is inherited from the parent, as is the dynamic-wind stack.
When a thread begins, it is considered to be isolated from its parent in terms of the dynamic-wind stack. If a parent spawns a thread in the during section of a dynamic-wind call, the spawned thread escapes the restrictions of the dynamic-wind call. This is logical, as the parent thread may then exit from the dynamic extent even as the child thread executes, or may remain there waiting for the child thread to finish, in which case the parent has not left the dynamic extent of the call. In short, the dynamic-wind is protecting only the parent thread.
Because it is possible for more than one thread to access the same memory location (be it a lexical variable or a named variable in the top-level or another symbolic environment), interactions on shared variables can have unpredictable results. As in any multi-threaded language, unprotected access to shared variables can result in race conditions and other concurrency mishaps. If the programmer anticipates concurrent access to a shared variable and any thread is to write to the variable, sections of code that access the variable should use a protection mechanism from the section called “ Synchronization Primitives ”.
A thread can complete in one of two ways. If the thunk that contains the thread's code exits, the thread will terminate and the thread handle will contain the return value of the thunk. This completion condition is called a clean exit. If during the execution of the thunk's body an error is raised and is not subsequently caught, the thread will terminate and the thread handle will trap the error. The error will be raised to any caller that attempts to retrieve the return value of the thread.
It is perfectly legal for a thunk to both capture and invoke continuations, even continuations created by other threads. When applying a continuation captured outside of the thread, the resources of the executing thread are used, though the thread may be accessing lexical environments created by other threads.
Once created, a thread can be in one of four states: ready, running, finished, or finished-with-error. The first two states indicate a newly created thread and a running thread, respectively. The last two represent the end stages of a thread, finished indicating a thread that has exited cleanly, and finished-with-error indicating exit with an error.
There are no guarantees that a Scheme thread will ever exit. It is perfectly valid for a thread to execute indefinitely. Furthermore, the SISC environment will not exit until all threads have completed, either cleanly or with a failure, unless all remaining threads are so-called daemon threads. Daemon threads are threads that may run indefinitely but will be forcibly terminated if no non-daemon threads (of which the primordial thread is one) are still running. Termination of a daemon thread when no non-daemon threads exist is the only instance where a thread can be forcibly terminated. There is no guaranteed thread stop or destroy operation.
This section describes the basic, low-level operations on threads, including how to create a thread, how to start it, how to wait for it to terminate, and how to retrieve its result. A thread is managed in Scheme by its thread handle, an opaque value that is used to identify the thread. A thread handle is present as an argument to most of the thread library functions.
Threads are created with
thread/new. This function
takes as its sole argument a thunk. The body of the thunk is
the code that the thread will execute when started.
(thread? value) => #t/#f
Returns true if and only if the provided value is a thread handle.
(thread/new thunk) => thread-handle
Creates a new thread handle whose code is defined in the provided thunk.
Once a thread-handle is created, the thread is in the ready
state, and can be started at any time by calling
thread/start. At this point one can also
set various thread parameters, such as the thread priority and
daemon status.
(thread/start thread-handle) => undefined
Starts the thread identified by
thread-handle. The thread must be in the ready state. It is an error to start a thread in any other state.
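As a sketch, creating a thread and then starting it looks like this (how the child's output interleaves with the parent's depends on the scheduler):

```scheme
;; Create a thread handle; the thunk body runs only once started.
(define worker
  (thread/new
   (lambda ()
     (display "hello from the child thread")
     (newline))))

(thread/state worker)  ; => ready
(thread/start worker)  ; the thunk now evaluates in parallel
```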
(thread/daemon! thread-handle boolean) => undefined
Sets the daemon status of a thread. It is an error to change the status of a thread that is not in the ready state.
(thread/daemon? thread-handle) => #t/#f
Returns #t if the given thread is a daemon thread.
It is a common pattern to create a thread and immediately start
it. The convenience function thread/spawn
encapsulates this operation.
(thread/spawn thunk) => thread-handle
Creates a new thread handle whose code is defined in the provided
thunk, and starts the thread.
Once started, the thread will be in the running state. The
body of the thunk is now being evaluated in parallel to the
parent thread. The thread will remain in the running state
until it completes and enters one of the two finished states.
The state can be read using thread/state.
(thread/state thread-handle) => symbol
Returns the state of the thread identified by
thread-handle. The state is one of ready, running, finished, or finished-with-error.
The parent thread may continue executing its own code, or may
attempt to join the child thread.
To join another thread is to wait until the other thread has
completed. The parent thread can join a child using
thread/join. The parent can
wait indefinitely or may specify a timeout, after which the
thread/join call will return #f.
(thread/join thread-handle [timeout]) => boolean
Attempts to join with the indicated thread. If the thread terminates, thread/join will return a non-false value. If a timeout is specified thread/join will only wait
timeout milliseconds for the thread to complete. If the thread does not terminate before the timeout,
#f will be returned.
It is possible to join on an already completed thread. In
such a case the join will immediately return #t.
The behavior of a join on a thread in the ready state is
unspecified, and may cause an error. Finally, it is possible
that a join may return
#f, even if no timeout is
specified. Though unlikely, programmers who wish to wait
indefinitely for a thread to complete should check the return
value and repeat the join until
#t is returned.
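Since a join may return #f even without a timeout, an indefinite wait can be written as a retry loop, as sketched here:

```scheme
;; Spawn a worker, then join it, retrying until #t is returned.
(define t (thread/spawn (lambda () (* 6 7))))

(let wait ()
  (if (not (thread/join t))
      (wait)))
```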
If enabled (using the corresponding
configuration parameter, see the section called “Configuration Parameters”), running threads may be
interrupted both at the host-language level and, when executing
Scheme code, with the thread/interrupt
function. If disabled (the default), only a host-language
interrupt signal can be sent.
(thread/interrupt thread-handle) => undefined
Sends an interrupt signal to the given thread. This will cause an error to be raised from Scheme code, and a thread interrupt in the host language.
When invoked, a signal is sent to the running thread which will
cause an error to be raised from some point in its execution.
If not caught, the thread will terminate in the
finished-with-error state, and the
raised error will be rethrown if
thread/result is called. The error-continuation
of the error thrown inside the thread, when invoked with no arguments,
will restart the computation exactly where it left off. A thread may
not properly resume if its code calls back into Scheme using any of the
mechanisms described in the section called “Calling Scheme from Java”.
If a thread is interrupted and later its computation is resumed by calling the error-continuation, the thread that hosts the resumed computation will exit when the computation terminates. As a consequence, if a computation that ran in a separate thread is resumed in the primordial thread (that usually hosts the REPL), the primordial thread will terminate as soon as the computation completes. This will circumvent the REPL entirely and cause SISC to exit if no non-daemon threads remain. In general, interrupted threads should be resumed in a newly created thread to avoid this scenario.
It is important to note that thread/interrupt does
not guarantee the termination of a thread.
A thread may still capture the error at the Scheme level with
with/fc, or catch the interrupt signal if
executing in the host language. In either case, the running
code is not required to rethrow the error.
Once a thread has completed, the parent thread may wish to
retrieve the result of the thread's thunk, be it an error or a
valid result. This can be done with the thread/result function.
(thread/result thread-handle) => value
Returns the return value from a completed thread. If the thread completed with error, that error is raised from this call.
An error will be raised if an attempt is made to retrieve the result of a thread before that thread has completed.
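Putting the pieces together, a parent can spawn a child, wait for it to complete, and then collect its result; a minimal sketch:

```scheme
(define t (thread/spawn (lambda () (+ 1 2))))
(thread/join t)    ; wait for the child to complete
(thread/result t)  ; => 3; would instead re-raise the child's error
```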
In addition to the basic thread operations, some higher-level syntax is provided to simplify common patterns of thread use.
(parallel thunk1 thunk2 [thunks] ...) => multiple values
Executes each thunk in its own thread in parallel. The call to parallel blocks until all the threads have finished. If all threads completed without error, the results of each thunk are returned as multiple values. If any thread raised an error, that error is raised from the call to parallel. The error is raised only after all other thunks have also completed. If more than one thunk raises an error, it is undefined which error will be raised to the caller.
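For example, two independent computations can be run side by side and their results collected with call-with-values:

```scheme
;; Each thunk runs in its own thread; parallel blocks until both finish.
(call-with-values
    (lambda ()
      (parallel (lambda () (* 2 3))
                (lambda () (* 4 5))))
  list)  ; => (6 20)
```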
All Scheme threads created in SISC are preemptive and managed by a scheduler. It is possible for a program to manage the priorities of threads in order to give execution preference to higher priority threads. It is also possible for threads to give up their execution time to other blocked threads.
The priority of a thread is represented by an integer. The range of priorities and the default priority of a thread is unspecified and may be platform specific. Larger integers represent higher priorities than smaller integers. If a higher or lower priority thread is desired, the recommended procedure is to get the current priority of a thread and increment or decrement it. Though unspecified, it is possible that an error will be raised if a priority level outside the platform specific range is selected.
Though not guaranteed, when two runnable threads have different priorities and only one processor is available, the scheduler will generally select the higher-priority thread. If only threads of equal priority are runnable, the scheduler may choose any of them. No guarantees are made about latency or fairness.
Thread priorities can be set by the parent thread or the thread itself. The behavior of a child thread attempting to set the parent's priority, or a sibling's priority is undefined.
(thread/priority thread-handle) => integer
Retrieves the current priority of the given thread.
(thread/priority! thread-handle new-priority) => undefined
Attempts to set the given thread's priority to the integer new-priority.
In addition to setting priority levels, a program may wish to yield its execution time temporarily to other threads. Performing a yield allows the scheduler to select a thread to run on the processor of the thread that just yielded control. It is possible that the yielding thread may be selected again, or another thread may be chosen.
(thread/yield) => undefined
Causes the currently executing thread to yield to other threads.
Finally, a thread (including the main thread) may sleep for a specified amount of time, allowing other threads to execute in the meantime. If no other threads are running, sleeping effectively pauses the program for the given time period.
SISC provides an implementation of mutexes and condition variables designed to support a wide range of synchronization models, including the monitor paradigm previously found in SISC. The implementation provides both a mutex and condition-variable first-class value for concurrency protection and inter-thread communication. Mutexes provide mutual exclusion locking, while condition variables facilitate synchronization of threads with the change in state of monitored data. SISC's mutexes are reentrant; that is, if a thread locks a mutex and then attempts to lock it again, the lock is granted immediately, without blocking. Correspondingly, the mutex is not unlocked until the owning thread performs as many unlock operations as previous lock operations.
SISC's synchronization primitives map very closely to SRFI-18, which is supported by SISC. In fact, it is recommended that programmers write to SRFI-18 if at all possible, as they will gain maximum portability with little or no loss in efficiency on SISC.
Throughout the following sections, a thread is often said to block because of some circumstance. While a thread is blocked on some resource, other threads are allowed to execute freely, at the discretion of the scheduler.
Several functions exist for performing low level operations
such as creating mutexes and condition variables. All require
the mutex or condition variable as the first parameter.
Mutexes are represented in Scheme as opaque values displayed as
#<mutex>, while condition variables
are likewise represented as opaque values.
It is important to understand that all the synchronization
operations depend on the same mutex and/or condition variable
being shared between any threads using the functionality.
In order to protect a segment of Scheme code from concurrent
access, one can create a mutex that is shared by all
threads that may access the segment. When entering the
contested region of code (the critical
section), a thread would call
mutex/lock!. Upon exiting the critical
section, it would call mutex/unlock!.
(mutex? value) => #t/#f
Returns true if and only if the provided value is a mutex.
(mutex/new) => mutex
Creates and returns a new mutex object.
(mutex-of value) => mutex
Returns a mutex that is uniquely associated to the given value. Subsequent (or concurrent) calls to this function are guaranteed to return the same mutex if given the same value.
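mutex-of is convenient when the shared data itself, rather than a separately managed mutex, should determine the lock; a sketch:

```scheme
;; Any thread that locks (mutex-of shared) contends for the same mutex.
(define shared (vector 0 0 0))

(mutex/lock! (mutex-of shared))
(vector-set! shared 0 42)
(mutex/unlock! (mutex-of shared))
```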
(mutex/lock! mutex) => undefined
Attempts to acquire the lock on the given mutex. Returns only when the lock has been successfully acquired.
(mutex/unlock! mutex) => undefined
Releases the lock on the given mutex. The behavior when unlocking a mutex when the running thread does not have the lock is undefined, and may raise an error.
The semantics of
mutex/lock! ensure that
only one thread can execute beyond the lock call at any one
time. The first thread that reaches the call
acquires the lock on the mutex. Any
later threads will block at the call to
mutex/lock! until the
thread that owns the lock releases the lock with mutex/unlock!.
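A typical critical section protected by a shared mutex might be sketched as:

```scheme
(define counter 0)
(define counter-mutex (mutex/new))

;; Only one thread at a time may execute between lock and unlock.
(define (increment!)
  (mutex/lock! counter-mutex)
  (set! counter (+ counter 1))
  (mutex/unlock! counter-mutex))
```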
A condition variable allows one thread to sleep until another wakes it. A common situation is for one thread to check the status of a variable, and sleep if the condition is not met. While the thread sleeps, one or more separate threads may execute and satisfy the condition (by changing the state of the variable) and then notify the sleeping thread via the condition variable. The thread then awakes, checks the state variable, and proceeds if the condition is met. If not, it sleeps again. This construct allows for cooperation between multiple threads on a computation.
To wait on a condition variable, the
mutex/unlock! function is again used, with
a condition variable as an additional argument.
This will cause the thread to unlock
the given mutex, then block until notified by another thread.
When it is notified, or the provided timeout expires, the lock
is not reacquired.
(condvar? value) => #t/#f
Returns true if the provided value is a condition variable.
(condvar/new) => condition variable
Creates a new condition variable.
(mutex/unlock! mutex condvar [timeout]) => #t/#f
Causes the thread to sleep until notified on the provided condition variable by another thread. This call will not return until notified, unless the optional timeout is specified and
timeout milliseconds have elapsed without a notification. Before blocking, the lock on
mutex is released. If the timeout is reached before notification,
#f is returned; otherwise
#t is returned.
Another thread may wake a single waiting thread with the
condvar/notify function.
When called, one thread waiting on the condition variable is
woken. If no threads are waiting this call has no effect.
If more than one thread is waiting, exactly one will
be woken; which one is unspecified. If a thread
wishes to wake all threads waiting on a
given condition variable, it may use the
condvar/notify-all function.
(condvar/notify condvar) => undefined
Wakes exactly one thread waiting on the condition variable, if any such threads exist. If the notifying thread holds the lock on
condvar, the waiting thread will not proceed until the notifying thread releases the lock.
(condvar/notify-all condvar) => undefined
Wakes all threads waiting on the condition variable, if any waiting threads exist. If the notifying thread holds the lock on
condvar, the waiting threads will not proceed until the notifying thread releases the lock.
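The operations above combine into the classic producer/consumer pattern. The sketch below relies on the semantics described here: unlocking with a condition variable blocks without reacquiring the lock, so the consumer must relock and recheck its condition after waking:

```scheme
(define queue '())
(define queue-mutex (mutex/new))
(define queue-condvar (condvar/new))

;; Producer: enqueue an item, then wake one waiting consumer.
(define (put! item)
  (mutex/lock! queue-mutex)
  (set! queue (append queue (list item)))
  (mutex/unlock! queue-mutex)
  (condvar/notify queue-condvar))

;; Consumer: sleep until an item is available, then dequeue it.
(define (take!)
  (mutex/lock! queue-mutex)
  (let loop ()
    (if (null? queue)
        (begin
          ;; Releases the mutex and blocks until notified.
          (mutex/unlock! queue-mutex queue-condvar)
          ;; The lock is not reacquired on wakeup, so relock and recheck.
          (mutex/lock! queue-mutex)
          (loop))
        (let ((item (car queue)))
          (set! queue (cdr queue))
          (mutex/unlock! queue-mutex)
          item))))
```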
In addition to the low-level operations on mutexes and condition variables, two library functions are provided to greatly ease the construction and readability of thread-safe code.
(mutex/synchronize mutex thunk) => value
Protects execution of thunk as a critical section by holding the mutex's lock during evaluation of the thunk. The result of the thunk's evaluation becomes the result of the mutex/synchronize call.
(mutex/synchronize-unsafe mutex thunk) => value
Behaves exactly as mutex/synchronize without automatic unlocking when an error is raised or a continuation escapes when executing thunk.
mutex/synchronize holds the lock on
the mutex while the thunk provided is being
executed. The lock is automatically released when the
expression has completed. Also, if an error is raised or a
continuation is invoked that escapes the call to
mutex/synchronize, the
lock is automatically released.
The added safety provided by
mutex/synchronize may
slow the execution of code that repeatedly calls a critical
section. If the programmer is absolutely sure that no error
can be raised and that no continuations will be applied to
escape the call,
mutex/synchronize-unsafe
may be used. It provides no safety guarantees in those
situations. If an error is raised or an escaping
continuation invoked, the mutex will remain locked, which
could cause a deadlock if another thread attempts to acquire
it.
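For example, the error-safety of mutex/synchronize means a failing thunk cannot leave the mutex locked; a sketch:

```scheme
(define balance 100)
(define balance-mutex (mutex/new))

;; The lock is released even if the thunk raises an error.
(define (withdraw! amount)
  (mutex/synchronize balance-mutex
    (lambda ()
      (if (> amount balance)
          (error "insufficient funds")
          (set! balance (- balance amount))))))
```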