ScaLAPACK is a software package provided by Univ. of Tennessee; Univ. of California, Berkeley; Univ. of Colorado Denver; and NAG Ltd.

Presentation

ScaLAPACK is a library of high-performance linear algebra routines for parallel distributed-memory machines. ScaLAPACK solves dense and banded linear systems, least squares problems, eigenvalue problems, and singular value problems. The key ideas incorporated into ScaLAPACK include the use of

  1. a block cyclic data distribution for dense matrices and a block data distribution for banded matrices, parametrizable at runtime;

  2. block-partitioned algorithms to ensure high levels of data reuse;

  3. well-designed low-level modular components that simplify the task of parallelizing the high level routines by making their source code the same as in the sequential case.
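The block cyclic distribution in point 1 can be illustrated with a short sketch. The mapping below mirrors, in spirit, the index conversions performed by ScaLAPACK's TOOLS routines (INDXG2P/INDXG2L) along a single dimension; function and variable names here are illustrative, not part of the library, and the first block is assumed to reside on process 0.

```python
def owner_and_local(i, nb, p):
    """Map a global index i (0-based) to (owning process, local index)
    for a 1-D block cyclic distribution with block size nb over p
    processes, assuming block 0 lives on process 0.

    Applying this independently to row and column indices (over the
    rows and columns of the process grid) yields the 2-D block cyclic
    distribution ScaLAPACK uses for dense matrices.
    """
    block = i // nb               # which global block the index falls in
    proc = block % p              # blocks are dealt out cyclically
    local_block = block // p      # position of that block on its owner
    local = local_block * nb + i % nb
    return proc, local

# With block size 2 on 2 processes, global indices 0..7 map as:
# indices 0,1,4,5 -> process 0 (local 0,1,2,3)
# indices 2,3,6,7 -> process 1 (local 0,1,2,3)
for i in range(8):
    print(i, owner_and_local(i, nb=2, p=2))
```

Because the blocks are dealt out cyclically, load stays balanced as computations sweep across the matrix, while the block size nb (chosen at runtime) keeps enough data per process for the BLAS to run efficiently.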

The goals of the ScaLAPACK project are the same as those of LAPACK.

Many of these goals, particularly portability, are aided by developing and promoting standards, especially for low-level communication and computation routines. We have been successful in attaining these goals, limiting most machine dependencies to three standard libraries: the BLAS (Basic Linear Algebra Subprograms), LAPACK, and the BLACS (Basic Linear Algebra Communication Subprograms). LAPACK will run on any machine where the BLAS are available, and ScaLAPACK will run on any machine where the BLAS, LAPACK, and the BLACS are available.

The library is currently written in Fortran (with the exception of a few symmetric eigenproblem auxiliary routines written in C). The name ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK. The most recent version of ScaLAPACK is 2.0.0, released on November 11, 2011.

Acknowledgments

Since 2010, this material has been based upon work supported by the National Science Foundation under Grant No. NSF-OCI-1032861. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). Until 2006, this material was based upon work supported by the National Science Foundation under Grant Nos. ASC-9313958 and NSF-0444486 and DOE Grant No. DE-FG03-94ER25219. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF) or the Department of Energy (DOE).

Software

Licensing

ScaLAPACK is a freely available software package. It is available from netlib via anonymous ftp and the World Wide Web at http://www.netlib.org/scalapack . Thus, it can be included in commercial software packages (and has been). We only ask that proper credit be given to the authors.

The software is distributed under the modified BSD license.

Like all software, it is copyrighted. It is not trademarked.

ScaLAPACK, version 2.0.0

Errata

ScaLAPACK Installer [for Linux]

A Python-based installer for ScaLAPACK. It downloads, configures, compiles, and installs all libraries needed for ScaLAPACK (reference BLAS, LAPACK, BLACS, and ScaLAPACK itself).

ScaLAPACK for Windows

Please see: http://icl.cs.utk.edu/lapack-for-windows/scalapack

SVN Access

The ScaLAPACK SVN repository is open for read-only access so that users can obtain the latest bug fixes.

svn co http://icl.cs.utk.edu/svn/scalapack-dev/scalapack/trunk

Support

Contributors

ScaLAPACK is a community-wide effort that relies on many contributors.

If you wish to contribute, please have a look at the LAPACK Program Style. This document was written to facilitate contributions to LAPACK/ScaLAPACK by documenting their design and implementation guidelines.

LAPACK/ScaLAPACK Project Software Grant and Corporate Contributor License Agreement (“Agreement”) [Download]

Contributions are always welcome and can be sent to the ScaLAPACK team.

Documentation

Improvements

ScaLAPACK is an active project; we strive to bring new improvements and new algorithms on a regular basis.

If you feel some functionality or algorithms are missing, please contribute to our wishlist by emailing the ScaLAPACK team.

FAQ

Consult ScaLAPACK Frequently Asked Questions.

If you feel some questions are missing, please contribute to our FAQ by emailing the ScaLAPACK team.

The LAPACK Users' Forum is also a good source to find answers.

Users' Guide

HTML version of the ScaLAPACK Users' Guide

LAWNS: LAPACK/ScaLAPACK Working Notes

LAWNS

Release History

  • Version 1.0 : February 28, 1995

  • Version 1.1 : March 20, 1995

  • Version 1.2 : May 10, 1996

  • Version 1.3 : June 5, 1996

  • Version 1.4 : November 17, 1996

  • Version 1.5 : May 1, 1997

  • Version 1.6 : November 15, 1997

  • Version 1.7 : August 31, 2001

  • Version 1.8 : April 5, 2007 (scalapack-1.8.tgz) (release notes)

  • Version 2.0 : November 11, 2011 (scalapack-2.0.tgz) (release notes)

LAPACK

LAPACK website

PLASMA

The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project aims to address the critical and highly disruptive situation that is facing the Linear Algebra and High Performance Computing community due to the introduction of multi-core architectures.

PLASMA’s ultimate goal is to create software frameworks that enable programmers to simplify the process of developing applications that can achieve both high performance and portability across a range of new architectures.

The development of programming models that enforce asynchronous, out of order scheduling of operations is the concept used as the basis for the definition of a scalable yet highly efficient software framework for Computational Linear Algebra applications.

PLASMA website

MAGMA

The MAGMA (Matrix Algebra on GPU and Multicore Architectures) project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current “Multicore+GPU” systems.

The MAGMA research is based on the idea that, to address the complex challenges of the emerging hybrid environments, optimal software solutions will themselves have to hybridize, combining the strengths of different algorithms within a single framework. Building on this idea, we aim to design linear algebra algorithms and frameworks for hybrid manycore/GPU systems that enable applications to fully exploit the power that each of the hybrid components offers.

MAGMA website