Parlib

Easing Cross-Platform Development of Next Generation Parallel Systems


Parlib has been designed in conjunction with the Akaros OS project in the AMP Lab at UC Berkeley. It defines a set of low-level interfaces that help in building highly parallel systems across multiple operating systems and hardware platforms. Ports currently exist for Linux/i686, Linux/x86_64, Akaros/i686, and Akaros/RISC-V.

The most common use case of Parlib is to build custom user-level schedulers, such as Upthread or Lithe.

An online reference to the current Parlib API can be found here.
Most notably, Parlib currently provides abstractions for the following:
  • Virtual cores (vcores) and user-level threads (uthreads)
  • Synchronization primitives for user-level threads
  • Memory pools and slab allocators
  • Dynamic thread-local storage (dtls)

Parlib is made available under the GNU Lesser General Public License.

Its source code is currently spread across two different repositories, depending on which operating system you are building it for.

Akaros

The Akaros port of Parlib is contained in the user-level portion of the main Akaros repository, and can be seen here.

On Akaros, Parlib is a required library and is therefore built and installed as part of the Akaros gcc cross-compiler (much as glibc is). Like glibc, Parlib is linked in automatically by the Akaros cross-compiler, so you never need to include it as an external library.
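
For example, a program built with the Akaros cross-compiler picks up Parlib with no extra flags. The compiler name below is only a placeholder; substitute the actual prefix your Akaros toolchain installs (e.g. the i686 or RISC-V triple):

<akaros-triple>-gcc -O2 -o hello hello.c    # Parlib is linked in automatically, like glibc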

Instructions for building and running Akaros can be found here.

Linux

A tarball of the latest Parlib release on Linux can be found here.

If you prefer to work directly from the development branch, we also provide read-only access to our shared GitHub repo, with a tag for the current release:

git clone git://github.com/klueska/parlib.git
git checkout release_1_0_0

Note: The Linux version of Parlib must be configured with one of two options related to the use of global TLS variables:

  • --with-uthread-tls=no (default)
  • --with-uthread-tls=yes
Choosing --with-uthread-tls=no allows your user-level threads to context switch four times faster than a traditional Linux pthread. However, it prevents your user-level threads (including any libraries they call) from defining global __thread variables and using them for TLS (as described here). Instead, all user-level threads must use the dtls support provided by Parlib for their thread-local storage needs. If __thread variables do get defined, they are treated as vcore-local storage, but using them in this way is discouraged.
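
As a rough sketch of what this looks like in practice, the snippet below keeps a per-thread counter through the dtls interface instead of a global __thread variable. The header path and the exact names of the dtls calls (dtls_key_create, set_dtls, get_dtls) may vary slightly across Parlib versions; see dtls.h in your installation for the current interface.

/* Minimal sketch: a per-thread counter stored through dtls instead of a
 * global __thread variable.  See dtls.h for the exact interface. */
#include <stdio.h>
#include <stdlib.h>
#include <parlib/dtls.h>

static dtls_key_t count_key;

/* Call once, before any user-level threads run. */
void counters_init(void)
{
    count_key = dtls_key_create(free);   /* free() reclaims each thread's slot */
}

/* Call from within a user-level thread. */
void bump_count(void)
{
    long *count = get_dtls(count_key);
    if (!count) {
        count = calloc(1, sizeof(*count));
        set_dtls(count_key, count);
    }
    (*count)++;
    printf("this thread's count: %ld\n", *count);
}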

Choosing --with-uthread-tls=yes allows your user-level threads to define __thread variables, but makes your context switches twice as slow as those of traditional Linux pthreads. This option exists mainly because future versions of Linux will support the WRFSBASE instruction, which will eliminate this overhead; at that point, it will become the default option.

Once you've decided which configuration option is appropriate for your use case, follow the standard GNU installation procedure to compile and install Parlib. If you install into a non-standard location, don't forget to set your LD_LIBRARY_PATH when running executables that link against Parlib as a shared library.

cd parlib
./bootstrap
mkdir build
cd build
../configure --prefix=<install_dir> --with-uthread-tls=<yes|no>
make
<sudo> make install
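
For example, assuming an install prefix of $HOME/parlib-install and that your program links against the library as -lparlib (the usual library name; adjust paths and extra libraries to match your setup):

gcc -I$HOME/parlib-install/include -L$HOME/parlib-install/lib -o my_app my_app.c -lparlib -lpthread
export LD_LIBRARY_PATH=$HOME/parlib-install/lib:$LD_LIBRARY_PATH
./my_app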

A number of test applications are included with Parlib and can be compiled from the Parlib build directory as follows:

cd parlib/build
make check
The current tests are:
  • lock_test
  • vcore_test
  • pool_test
  • slab_test
To run them, simply execute them from the current directory.
Note: The lock_test and vcore_test applications run in an infinite loop and need to be killed with Ctrl-C.
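
For example (assuming make check leaves the test binaries directly in the build directory, as above):

cd parlib/build
./pool_test
./slab_test
./lock_test    # runs until killed with Ctrl-C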

An online reference to the current Parlib API can be found here.

A PDF version is also available for download.

On Linux, if you installed Parlib into a standard location, you should also be able to access the API reference via one (or both) of the following two commands:

man parlib
info parlib

We have created a parlib-users Google group.
Feel free to join the list and post any questions you have there.

Current Contributors: