LLAMA – Low-Level Abstraction of Memory Access

LLAMA is a cross-platform C++17/C++20 header-only template library for the abstraction of data layout and memory access. It separates the algorithm's view of the memory from the real data layout in the background. This allows for performance portability in applications running on heterogeneous hardware with the very same code.
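
As an illustration, here is a minimal sketch (assuming a recent LLAMA release; the record dimension, field names, and element count are made up for this example) where the same loop runs on top of an SoA or AoS layout, depending only on the chosen mapping:

#include <llama/llama.hpp>

#include <cstddef>

// Tag types naming the fields of the record dimension.
struct X{}; struct Y{}; struct Z{};
using Vec3 = llama::Record<
    llama::Field<X, float>,
    llama::Field<Y, float>,
    llama::Field<Z, float>>;

int main()
{
    constexpr std::size_t n = 1024;
    using Extents = llama::ArrayExtentsDynamic<std::size_t, 1>;

    // Swap llama::mapping::SoA for llama::mapping::AoS to change the memory
    // layout; the loop below stays untouched.
    const auto mapping = llama::mapping::SoA<Extents, Vec3>{Extents{n}};
    auto view = llama::allocView(mapping);

    for(std::size_t i = 0; i < n; i++)
    {
        view(i)(X{}) = 1.0f;
        view(i)(Y{}) = 2.0f;
        view(i)(Z{}) = 3.0f;
    }
}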

Documentation

Our extensive user documentation is available on Read the Docs. It includes:

  • Installation instructions
  • Motivation and goals
  • Overview of concepts and ideas
  • Descriptions of LLAMA's constructs

API documentation is generated by Doxygen from the C++ source. Please read the documentation on Read the Docs first!

Supported compilers

LLAMA tries to stay close to recent developments in C++ and so requires fairly up-to-date compilers. The following compilers are supported by LLAMA and tested as part of our CI:

  • Linux: g++ 10 - 13, clang++ 12 - 17, icpx (latest), nvc++ 23.5, nvcc 11.6 - 12.3
  • Windows: Visual Studio 2022 (latest on GitHub Actions)
  • macOS: clang++ (latest from brew)

Single header

We create a single-header version of LLAMA on each commit, which you can find on the single-header branch.

This is also useful if you would like to play with LLAMA on Compiler Explorer:

#include <https://raw.githubusercontent.com/alpaka-group/llama/single-header/llama.hpp>
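
For reference, a complete Compiler Explorer snippet could look like the following sketch (the record dimension and extents are made up, and the default SoA configuration with one blob per field is assumed):

#include <https://raw.githubusercontent.com/alpaka-group/llama/single-header/llama.hpp>

struct X{}; struct Y{};
using Point = llama::Record<llama::Field<X, float>, llama::Field<Y, float>>;

int main()
{
    using Extents = llama::ArrayExtentsDynamic<std::size_t, 1>;
    using Mapping = llama::mapping::SoA<Extents, Point>;
    // SoA stores each field in its own blob, so we expect one blob per field.
    static_assert(Mapping::blobCount == 2);

    auto view = llama::allocView(Mapping{Extents{8}});
    view(0)(Y{}) = 3.0f;
    return static_cast<int>(view(0)(Y{}));
}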

Contributing

We greatly welcome contributions to LLAMA. Rules for contributions can be found in CONTRIBUTING.md.

Scientific publications

We published an article on LLAMA in the journal Software: Practice and Experience. We gave a talk on LLAMA at CERN's Compute Accelerator Forum on 2021-05-12. The video recording (starting at 40:00) and slides are available here on CERN's Indico. Note that some of the presented LLAMA APIs have been renamed or redesigned in the meantime.

We presented recently added features to LLAMA at the ACAT22 workshop as a poster and a contribution to the proceedings. Additionally, we gave a talk at ACAT22 on LLAMA's instrumentation capabilities during a case study on AdePT, again, with a contribution to the proceedings.

Attribution

If you use LLAMA for scientific work, please consider citing this project. We upload all releases to Zenodo, where you can export a citation in your preferred format. We provide a DOI for each release of LLAMA. Additionally, consider citing the LLAMA paper.

License

LLAMA is licensed under the MPL-2.0.
