
Table of Contents:

  1. Introduction
  2. Requirements
  3. Vanguard Project Ideas (New or Revamped Ideas for GSoC 2019)
  4. Classic Project Ideas (Legacy Projects That Still Need Doing)

Introduction

Welcome to the HPX home page for Google Summer of Code (GSoC). This page provides information about student projects, proposal submission templates, advice on writing good proposals, and links to information on getting started with HPX. It is also used to collect project ideas for Google Summer of Code 2019. The STE||AR Group will apply as an organization, and our goal is to get at least five students funded.

We are looking to fund work on a number of different kinds of proposals (for more details about concrete project ideas, see below):

  • Extensions to existing library features
  • New distributed data structures and algorithms
  • Multiple competing proposals for the same project

Requirements

Students must submit a proposal. A template for the proposal can be found here. Hints for writing a good proposal can be found here.

We strongly suggest that students interested in developing a proposal for HPX discuss their ideas on the mailing list in order to help refine the requirements and goals. Students who actively discuss projects on the mailing list are also ranked above those who do not.

If the descriptions of these projects seem a little vague... Well, that is intentional. We are looking for students to develop requirements for their proposals by doing initial background research on the topic and interacting with the community on the HPX mailing list to help identify expectations.

Optimizer Implementations for Phylanx

  • Abstract: Phylanx is a platform for computations on distributed arrays for applied statistics and machine learning on commodity cloud systems. This project will require the student to implement at least three of the following optimizers for Phylanx: stochastic gradient descent, RMSProp, Adagrad, Adadelta, Adam, Adamax, and Nesterov Adam.

  • Difficulty: Medium-Hard

  • Expected result: Performance results with the implemented optimizers.

  • Knowledge Prerequisite: C++, Python

  • Mentor: Bita Hasheminezhad and Bibek Wagle

Test Framework for Phylanx Algorithms

  • Abstract: Phylanx is a platform for computations on distributed arrays for applied statistics and machine learning on cloud systems. Currently, it has optimized implementations for some algorithms. The idea of this project is to apply one of the ALS (Alternating Least Squares), LDA (Latent Dirichlet Allocation), or k-means clustering algorithms to a real-world dataset to build a real-world application (one that has not been implemented before). You can find a dataset on Kaggle or use any desired open dataset.
  • Difficulty: Easy-Medium
  • Expected result: An integrated Phylanx test for one of the mentioned algorithms on a real-world dataset
  • Knowledge Prerequisite: Basic knowledge of Data Science, Python, and C++
  • Mentor: Bita Hasheminezhad

pip package for Phylanx

  • Abstract: Phylanx relies on many external libraries, which makes the building process tedious and error prone, especially for the software's target audience: domain scientists. The goal of this project is to automate the build process of Phylanx by creating a distribution package. The distribution package should build and install Phylanx and its requirements through Python's pip package manager.
  • Difficulty: Easy-Medium
  • Expected result: Phylanx can be installed with pip
  • Knowledge Prerequisite: Python, CMake
  • Mentor: Patrick Diehl

Domain decomposition and load balancing for crack and fracture mechanics code

  • Abstract: Peridynamics is used to model cracks and fractures in materials. In recent years several numerical approximations for peridynamics have been proposed. Peridynamics is a nonlocal model, i.e. to compute the force/strain/energy at a point, one has to look at the neighbors of that point within a finite radius. A code utilizing the parallel algorithms and futurization within HPX for a single shared-memory node is available. This code should be extended with a domain decomposition for computations on several nodes. Here, the challenge is to perform efficient load balancing of the domain partitions: when damage occurs, the computational cost in the part of the domain where the material is damaged decreases. In this project, an efficient algorithm should be developed which detects where the computational costs are decreasing and redistributes the domains such that the load stays balanced.
  • Difficulty: Medium
  • Expected result:
  1. Extend the existing implementation with domain decomposition for multiple nodes
  2. Provide an efficient load balancing algorithm which redistributes the domains after damage.
  • Knowledge Prerequisite: C++, STL
  • Mentor: Patrick Diehl, Prashant Jha, and Robert Lipton

Conflict (Range-Based) Locks

  • Abstract: Some multi-threaded algorithms may require resources that must be held using a lock, but the locking mechanism may be range-based rather than absolute. Consider a large array of N items where one task requires some small subset of the items to be locked whilst a second task requires a second range. If these tasks are placed into a DAG so that task2 can only run once task1 has completed, this is inefficient when the range of items used by task2 does not overlap the range from task1. When many tasks operate on the range, with randomly overlapping or non-overlapping regions, DAG-based task scheduling leads to a highly inefficient strategy. We need a range-based lock that can be templated over <items> and that can then be locked/unlocked on ranges (of those items) and interact with our future<> based scheduling, so that items become ready when the range they need has no locks outstanding, and so that when a task releases a lock, any other tasks that overlap the range are in turn signaled as possibly ready. (For an example of how this is used in conventional HPC programming, look up byte-range locks in MPI for parallel I/O to a single file.) A successful implementation can be extended to multi-dimensional locking (2D/3D etc., ideally templated over dimensions and types). A minimal sketch of a blocking range lock is shown after this list.
  • Difficulty: Medium/Hard
  • Expected result: A test application that creates arrays of items and randomly assigns tasks to operate on regions of those items with locking and schedules the tasks to operate in a non-conflicting way.
  • Knowledge Prerequisite: Thread safe programming. Futures.
  • Mentor: John Biddiscombe
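
As a starting point, here is a minimal, illustrative sketch of a blocking range lock over half-open intervals. The class name and interface are hypothetical; a real solution would integrate with HPX futures (signaling waiting tasks via future<> continuations) rather than blocking on a condition variable.

```cpp
// Illustrative only: a blocking range lock over half-open intervals [first, last).
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

class range_lock {
public:
    // Block until no currently held range overlaps [first, last), then hold it.
    void lock(std::size_t first, std::size_t last) {
        std::unique_lock<std::mutex> guard(mtx_);
        cv_.wait(guard, [&] { return !overlaps(first, last); });
        held_.push_back({first, last});
    }

    // Release a previously held range and wake tasks that may now proceed.
    void unlock(std::size_t first, std::size_t last) {
        {
            std::lock_guard<std::mutex> guard(mtx_);
            for (auto it = held_.begin(); it != held_.end(); ++it) {
                if (it->first == first && it->second == last) {
                    held_.erase(it);
                    break;
                }
            }
        }
        cv_.notify_all();
    }

private:
    // True if [first, last) intersects any currently held range.
    bool overlaps(std::size_t first, std::size_t last) const {
        for (auto const& r : held_)
            if (first < r.second && r.first < last)
                return true;
        return false;
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::vector<std::pair<std::size_t, std::size_t>> held_;
};
```

With this sketch, tasks holding [0,10) and [20,30) proceed concurrently, while a third task requesting [5,25) waits until both are released; the project's future-based version would instead return a future that becomes ready at that point.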

Concurrent Data Structure Support

  • Abstract: STL containers such as vectors/maps/sets/etc. are not thread safe. One cannot safely add or remove elements from one of these containers in one thread whilst iterating or adding/removing in another thread without potentially catastrophic consequences (usually segmentation faults leading to eventual program failure). Some work has begun on implementing concurrent structures in HPX: a concurrent unordered map with a reader/writer lock and a partial implementation of a concurrent vector exist, but they have not all been completed, do not have unit tests, and need to be unified into an hpx::concurrent namespace. A number of libraries implementing thread safe (sometimes lock-free) containers already exist that can be used for ideas, and where their code uses a Boost-compatible license it can be integrated into HPX. The aim of the project is to collect as much information and as many implementations of thread safe containers as possible and to create or integrate them into the HPX library. A sketch of the reader/writer-lock approach is shown after this list.
  • Difficulty: Medium/Hard
  • Expected result: A contribution of an hpx::concurrent namespace including as many STL compatible containers (and/or helper structures) as possible, with unit testing and examples that use them.
  • Knowledge Prerequisite: Thread safe programming.
  • Mentor: John Biddiscombe, Hartmut Kaiser and Marcin Copik
  • See issue #2235 on HPX bug tracker
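
For illustration only, a minimal reader/writer-locked map in standard C++17. This is not the existing HPX concurrent unordered map; the class and method names below are hypothetical.

```cpp
// A minimal sketch of the reader/writer-lock approach mentioned above;
// the hpx::concurrent containers would expose a richer, STL-compatible API.
#include <optional>
#include <shared_mutex>
#include <unordered_map>
#include <utility>

template <typename Key, typename Value>
class concurrent_unordered_map {
public:
    // Writers take the mutex exclusively.
    void insert_or_assign(Key const& key, Value value) {
        std::unique_lock<std::shared_mutex> guard(mtx_);
        map_[key] = std::move(value);
    }

    // Readers share the mutex, so concurrent lookups do not block each other.
    std::optional<Value> find(Key const& key) const {
        std::shared_lock<std::shared_mutex> guard(mtx_);
        auto it = map_.find(key);
        if (it == map_.end())
            return std::nullopt;
        return it->second;
    }

private:
    mutable std::shared_mutex mtx_;
    std::unordered_map<Key, Value> map_;
};
```

The trade-off in this design is that lookups scale across threads while insertions serialize behind the exclusive lock; lock-free designs avoid even that, at the cost of considerably more complexity.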

Create Generic Histogram Performance Counter

  • Abstract: HPX supports performance counters that return a set of values for each invocation. We have used this to implement performance counters collecting histograms for various characteristics related to parcel coalescing (such as the histogram of the time intervals between parcels). The idea of this project is to create a general purpose performance counter which collects the value of any other given performance counter at given time intervals and calculates a histogram of those values. This project could be combined with Add More Arithmetic Performance Counters. A sketch of the histogram step is shown after this list.
  • Difficulty: Medium
  • Expected result: Implement a functioning performance counter which returns the histogram for any other given performance counter as collected at given time intervals.
  • Knowledge Prerequisite: Minimal knowledge of statistical analysis is required.
  • Mentor: Hartmut Kaiser and Mikael Simberg
  • See issue #2237 on HPX bug tracker
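
The counter plumbing aside, the core computation is just binning sampled values. A small sketch, with hypothetical function and parameter names:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Bucket sampled counter values into `bins` equal-width bins over [lo, hi].
std::vector<std::size_t> histogram(
    std::vector<double> const& samples, double lo, double hi, std::size_t bins)
{
    std::vector<std::size_t> counts(bins, 0);
    double const width = (hi - lo) / static_cast<double>(bins);
    for (double v : samples) {
        double const clamped = std::clamp(v, lo, hi);   // out-of-range samples go to the edge bins
        auto bin = static_cast<std::size_t>((clamped - lo) / width);
        counts[std::min(bin, bins - 1)] += 1;           // the value hi itself lands in the last bin
    }
    return counts;
}
```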

Add More Arithmetic Performance Counters

  • Abstract: HPX already supports performance counters that can be used to dynamically calculate the result of addition, subtraction, multiplication, and division of values gathered from a set of other given performance counters. The idea of this project is to create more performance counters which are very similar to the existing ones, except that they calculate various other statistical results, such as the minimum/maximum, mean, and median value (more are possible). This project could be combined with Create Generic Histogram Performance Counter. The statistical reductions themselves are simple (see the sketch after this list).
  • Difficulty: Easy/Medium
  • Expected result: Implement a set of functioning performance counters which return the result of various statistical operations for a set of other given performance counters.
  • Knowledge Prerequisite: Minimal knowledge of statistical analysis is required.
  • Mentor: Hartmut Kaiser and Marcin Copik
  • See issue #2455 on HPX bug tracker
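
The reductions themselves are short free functions over the gathered values; the actual work of the project is wiring them into the performance counter framework. Illustrative sketch:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Mean of the values gathered from the underlying counters.
double mean(std::vector<double> const& v)
{
    return std::accumulate(v.begin(), v.end(), 0.0) / static_cast<double>(v.size());
}

// Median via partial sort; for even-sized inputs this returns the upper median.
double median(std::vector<double> v)
{
    auto mid = v.begin() + static_cast<std::ptrdiff_t>(v.size() / 2);
    std::nth_element(v.begin(), mid, v.end());
    return *mid;
}
```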

Add Vectorization to par_unseq Implementations of Parallel Algorithms

  • Abstract: Our parallel algorithms currently don't support the par_unseq execution policy. This project is centered around implementing this execution policy for at least some of the existing algorithms (such as for_each and similar). The standard-library analogue of the intended behavior is shown after this list.
  • Difficulty: Medium/Hard
  • Expected result: The result should be functioning parallel algorithms when used with the par_unseq execution policy. The loop body should end up being vectorized.
  • Knowledge Prerequisite: Vectorization, parallel algorithms.
  • Mentor: Marcin Copik
  • See issue #2271 on HPX bug tracker
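
For reference, this is what the equivalent C++17 standard-library call looks like; the project is about making the HPX counterparts accept the same policy and actually vectorize the loop body.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

int main()
{
    std::vector<double> v(1'000'000, 1.0);

    // With par_unseq the implementation may both parallelize across threads
    // and vectorize within each thread; the body must therefore be free of
    // locks and other vectorization-unsafe operations.
    std::for_each(std::execution::par_unseq, v.begin(), v.end(),
        [](double& x) { x = 2.0 * x + 1.0; });
}
```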

ROCm backend for HPX.Compute

  • Abstract: HPX.Compute is a layer on top of HPX which provides a way to distribute work and data for parallel algorithms on accelerators. The existing implementation supports execution on CUDA-enabled GPUs. In this project a ROCm backend for AMD GPUs should be implemented based on the existing CUDA backend. The work could involve either implementing a completely new backend optimized for AMD GPUs, or the existing CUDA backend could be ported to use HIP, which would allow a single implementation to be used for both AMD and NVIDIA GPUs. Other tasks involve implementing and testing additional parallel algorithms, implementing a concurrent executor, supporting work dispatch to multiple devices, or optimizing and comparing the performance of different backends. A bare HIP example is sketched after this list.
  • Difficulty: Medium-Hard
  • Expected result: The backend is comparable with CUDA in terms of supported features and can schedule at least a few algorithms, including the index-based parallel for-loop.
  • Knowledge Prerequisite: Basic knowledge in CUDA or ROCm, good knowledge in C++.
  • Mentor: Mikael Simberg and Thomas Heller
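
To give a feel for the programming model, here is a bare HIP SAXPY, assuming a standard ROCm installation; the backend's job is to hide this kind of boilerplate behind HPX.Compute targets, allocators, and executors. Error checking is omitted; this is a sketch, not backend code.

```cpp
#include <hip/hip_runtime.h>
#include <vector>

// y[i] = a * x[i] + y[i], one element per GPU thread.
__global__ void saxpy(int n, float a, float const* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    int const n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float* dx = nullptr;
    float* dy = nullptr;
    hipMalloc(reinterpret_cast<void**>(&dx), n * sizeof(float));
    hipMalloc(reinterpret_cast<void**>(&dy), n * sizeof(float));
    hipMemcpy(dx, x.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, y.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch with 256 threads per block and enough blocks to cover n elements.
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0, n, 2.0f, dx, dy);

    hipMemcpy(y.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dx);
    hipFree(dy);
}
```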

Charting Support for HPX OTF2 Trace Visualization

  • Abstract: HPX traces are collected with APEX and written as OTF2 files with extensions. These trace files are typically visualized using a Gantt chart or a collection of timelines. The Gantt chart visualization provides detailed information but is quite large, making it difficult to navigate. Summary charts, such as histograms showing the distribution of thread types, status, or counter information, could be integrated with this visualization to help users understand the global picture of the trace. Through linked (Javascript) interactions, they could help users further filter the trace data to a more manageable, less cluttered size. Collecting information for these charts will require manipulating the trace file itself (C++). This project will also require some iterations of interface design.
  • Difficulty: Medium
  • Expected result: The interactive Gantt chart is complemented with interactive histograms which can be queried over dynamic time ranges.
  • Knowledge Prerequisite: C++, Javascript.
  • Mentor: Kate Isaacs

Large File Support for HPX OTF2 Trace Visualization

  • Abstract: HPX traces are collected with APEX and written as OTF2 files with extensions. These trace files are typically visualized using a Gantt chart or a collection of timelines. The present implementation reads the entirety of the trace file before generating the visualization. However, the OTF2 interface has support for partial reading of the file as well as for using a parallel backend. This project would modify the Gantt chart backend (C++) to utilize these features, thus supporting larger files. The project could also modify the front end to use WebGL (Javascript) when the number of data items is large.
  • Difficulty: Medium
  • Expected result: Files requiring more memory than a single machine provides can still be visualized from that machine. The time from program start to visualization is decreased due to the use of the large-file features.
  • Knowledge Prerequisite: C++, Javascript.
  • Mentor: Kate Isaacs

Add a Lustre Backend to hpxio

  • Abstract: hpxio is a side project to HPX which uses HPX's facilities to provide asynchronous I/O on top of POSIX libraries as well as the OrangeFS distributed file system. Lustre is a parallel distributed file system that is used in many clusters. Adding a Lustre backend to hpxio would be a great addition, since many clusters already use the Lustre file system.
  • Difficulty: Easy/Medium
  • Expected result: The hpxio library will be able to use the Lustre file system as a backend.
  • Knowledge Prerequisite: Good C++ knowledge
  • Mentor: Alireza Kheirkhahan and Hartmut Kaiser

Port HPX to iOS

  • Abstract: HPX has already proven to run efficiently on ARM-based systems. This has been demonstrated with an application written for Android tablet devices. A port to handheld devices running iOS would be the next logical step! In order to run HPX efficiently there, we need to adapt our build system to be able to cross compile for iOS and add code to interface with the iOS GUI and other system services.
  • Difficulty: Easy-Medium
  • Expected result: Provide a prototype HPX application running on an iPhone or iPad
  • Knowledge Prerequisite: C++, Objective-C, iOS
  • Mentor: Hartmut Kaiser and Thomas Heller

Augment CSV Files

  • Abstract: HPX can write performance counter values to a destination .csv file whose header contains short counter labels, so that results from multiple samples with multiple input parameters can be logged. This currently does not work correctly when counters are queried for multiple OS threads; that should be a simple fix, and it should be extended to multiple localities as well. Make additions to HPX that add user-defined parameters to the counter destination file. This would enable users to keep their own information along with the HPX counter info in one CSV file, including input parameters as well as outputs such as execution time or other pertinent output from the application or the runtime (e.g. number of threads). Then write some Python/pandas or R code to do statistical processing and/or plots. Database ideas are welcome; alternatively, get familiar with the APEX and TAU interfaces to HPX, do some data processing and visualization using them, and write up examples for users.
  • Difficulty: Easy
  • Expected result: CSV files augmented with user-defined parameters, a fix for HPX counters queried across multiple OS threads and intervals, plus statistics and plotting capabilities.
  • Knowledge Prerequisite: familiarity and willingness to work with C++, Python and pandas
  • Mentors: Bibek Wagle, Hartmut Kaiser and Mikael Simberg

Create A Parcelport Based on WebSockets

  • Abstract: Create a new parcelport which is based on WebSockets. The WebSocket++ library seems to be a perfect starting point to avoid having to dig into the WebSocket protocol too deeply.
  • Difficulty: Medium-Hard
  • Expected result: A proof of concept parcelport based on WebSockets with benchmark results
  • Knowledge Prerequisite: C++, knowing WebSockets is a plus
  • Mentor: Hartmut Kaiser and Thomas Heller

Script Language Bindings

  • Abstract: Design and implement Python bindings for HPX exposing all or parts of the HPX functionality with a 'Pythonic' API. This should be possible as Python has a much more dynamic type system than C++. Using Boost.Python seems to be a good choice for this.
  • Difficulty: Medium
  • Expected result: Demonstrate functioning bindings by implementing small example scripts for different simple use cases
  • Knowledge Prerequisite: C++, Python
  • Mentor: Hartmut Kaiser and Adrian Serio

All to All Communications

  • Abstract: Design and implement efficient all-to-all communication LCOs. While MPI provides mechanisms for broadcasting, scattering and gathering across all MPI processes inside a communicator, HPX currently lacks this feature. It should be possible to exploit the Active Global Address Space to mimic global all-to-all communications without the need to actually communicate with every participating locality. Different strategies should be implemented and tested. A first and very basic implementation of broadcast already exists which tries to tackle the problem described above; however, more strategies for granularity control and locality exploitation need to be investigated and implemented. We also have a first version of a gather utility implemented. A plain-C++ sketch of a tree-based fan-out is shown after this list.
  • Difficulty: Medium-Hard
  • Expected result: Implement benchmarks and provide performance results for the implemented algorithms
  • Knowledge Prerequisite: C++
  • Mentor: Thomas Heller and Andreas Schäfer
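
Not HPX code, but a plain-C++ illustration of the recursive fan-out strategy such a broadcast LCO could use, so that no single locality sends to every other one. The function and variable names are made up; a real implementation would use hpx::async and actions on remote localities instead of std::async over a shared vector.

```cpp
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Deliver `value` to the representatives of [first, last) by recursively
// splitting the range; the fan-out depth is O(log n) instead of one sender
// contacting every receiver directly.
void broadcast(std::vector<int>& localities, std::size_t first, std::size_t last, int value)
{
    if (first >= last)
        return;
    std::size_t const mid = first + (last - first) / 2;
    localities[mid] = value;    // "send" to the representative of this subrange

    auto left = std::async(std::launch::async,
        broadcast, std::ref(localities), first, mid, value);
    auto right = std::async(std::launch::async,
        broadcast, std::ref(localities), mid + 1, last, value);
    left.get();
    right.get();
}

int main()
{
    std::vector<int> localities(16, 0);
    broadcast(localities, 0, localities.size(), 42);    // every entry ends up as 42
}
```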

Distributed Component Placement

  • Abstract: Implement an EDSL to specify placement policies for components. This could be done similar to [Chapel's Domain Maps](http://chapel.cray.com/tutorials/SC12/SC12-6-DomainMaps.pdf). In addition, allocators can be built on top of those domain maps for use with C++ standard library containers. This is one of the key features to allow users to efficiently write parallel algorithms without having to worry too much about the initial placement of their distributed objects in the Global Address Space. A toy mapping policy is sketched after this list.
  • Difficulty: Medium-Hard
  • Expected result: Provide at least one policy which automatically creates components in the global address space
  • Knowledge Prerequisite: C++
  • Mentor: Thomas Heller and Hartmut Kaiser
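
Purely for illustration, this is the kind of index-to-locality mapping such a policy boils down to; the struct below is hypothetical, and a real domain map would also drive allocation and redistribution.

```cpp
#include <cstddef>

// Block-cyclic mapping from a global element index to a locality id, the kind
// of policy a domain-map-style EDSL would let users compose and pass to
// distributed containers and allocators.
struct block_cyclic_policy {
    std::size_t block_size;
    std::size_t num_localities;

    // Which locality should own global element `i`?
    std::size_t owner(std::size_t i) const {
        return (i / block_size) % num_localities;
    }
};

// Example: with block_size 4 and 3 localities, elements 0..3 live on locality 0,
// 4..7 on locality 1, 8..11 on locality 2, 12..15 back on locality 0, and so on.
```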

Resumable Function Implementation

  • Abstract: Implement resumable functions in either GNU g++ or Clang. This should be based on the corresponding proposal to the C++ standardization committee (see N4286). While this is not a project directly related to HPX, having resumable functions available and integrated with hpx::future would allow us to improve the performance and readability of asynchronous code. This project sounds huge - but it actually should not be too difficult to realize.
  • Difficulty: Medium-Hard
  • Expected result: Demonstrating the await functionality with appropriate tests
  • Knowledge Prerequisite: C++, knowledge of how to extend clang or gcc is clearly advantageous
  • Mentor: Hartmut Kaiser

Coroutine-like Interface

  • Abstract: HPX is an excellent runtime system for doing task based parallelism. In its current form, however, results of tasks can only be expressed in terms of returning from a function. There are scenarios where this is not sufficient. One example would be lazy ranges of integers (for example Fibonacci, 0 to n, etc.). For those, a generator/yield construct would be perfect! A minimal coroutine-based sketch of such a generator is shown after this list.
  • Difficulty: Easy-Medium
  • Expected result: Implement yield and demonstrate on at least one example
  • Knowledge Prerequisite: C++
  • Mentor: Hartmut Kaiser and Thomas Heller
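
To show the target interface, here is a minimal generator written with standard C++20 coroutines; the generator type and its member names are illustrative only, and an HPX version would integrate with HPX's own thread and future machinery rather than raw coroutine handles.

```cpp
#include <coroutine>
#include <cstdint>
#include <cstdio>
#include <exception>
#include <utility>

// Minimal, illustrative generator type.
template <typename T>
struct generator {
    struct promise_type {
        T current;
        generator get_return_object() {
            return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T value) noexcept {
            current = value;    // store the yielded value for the consumer
            return {};
        }
        void return_void() noexcept {}
        void unhandled_exception() { std::terminate(); }
    };

    std::coroutine_handle<promise_type> handle;

    explicit generator(std::coroutine_handle<promise_type> h) : handle(h) {}
    generator(generator&& other) noexcept : handle(std::exchange(other.handle, {})) {}
    ~generator() { if (handle) handle.destroy(); }

    // Advance to the next co_yield; returns false once the coroutine finishes.
    bool next() {
        handle.resume();
        return !handle.done();
    }
    T value() const { return handle.promise().current; }
};

// Lazy Fibonacci sequence, the kind of range mentioned in the abstract.
generator<std::uint64_t> fibonacci() {
    std::uint64_t a = 0, b = 1;
    while (true) {
        co_yield a;
        a = std::exchange(b, a + b);
    }
}

int main() {
    auto fib = fibonacci();
    for (int i = 0; i < 10 && fib.next(); ++i)
        std::printf("%llu ", static_cast<unsigned long long>(fib.value()));    // 0 1 1 2 3 5 ...
}
```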

Bug Hunter

  • Abstract: In addition to our extensive ideas list, there are several active tickets listed in our issue tracker which are worth tackling as a separate project. Feel free to talk to us if you find something which is interesting to you. A prospective student should pick at least one ticket of medium to hard difficulty and discuss how it could be solved.
  • Difficulty: Medium-Hard
  • Expected result: The selected issues need to be fixed
  • Knowledge Prerequisite: C++
  • Mentor: Thomas Heller

Graphical and Terminal User Interface for Scimitar

  • Abstract: Scimitar, the HPX debugger, is a distributed front-end for GDB with HPX support. In its current form it is a command-line application, and in order to make the tool easier to use it needs a graphical interface and a terminal interface. This is not a difficult task but is expected to be very time consuming.
  • Difficulty: Easy-Medium
  • Expected result: A GUI and terminal interface (ncurses, etc) for Scimitar.
  • Knowledge Prerequisite: Python, C++, Qt or a comparable library, and possibly x86 Assembly
  • Mentor: Parsa Amini

Port Graph500 to HPX

  • Abstract: Implement Graph500 using the HPX Runtime System. Graph500 is the benchmark used by the HPC industry to model important factors of many modern parallel analytical workloads. The Graph500 list is a performance list of systems using the benchmark and was designed to augment the Top 500 list. The current Graph500 benchmarks are implemented using OpenMP and MPI. HPX is well suited for the fine-grained and irregular workloads of graph applications. Porting Graph500 to HPX would require replacing the inherent barrier synchronization with the asynchronous communications of HPX, producing a new benchmark for the HPC community as well as an addition to the HPX benchmark suite. See http://www.graph500.org/ for information on the present Graph500 implementations.
  • Difficulty: Medium
  • Expected result: New implementation of the Graph500 benchmark.
  • Knowledge Prerequisite: C++
  • Mentor: Patricia Grubel and Thomas Heller

Port Mantevo MiniApps to HPX

  • Abstract: Implement a version of one or more mini apps from the Mantevo project (http://mantevo.org/) using the HPX Runtime System. We are interested in mini applications ported to HPX that have irregular workloads. Some of these are under development, and we will have access to them in addition to those listed on the site. Of those on the site, MiniFE and phdMESH would be good additions to include in the HPX benchmark suite. Porting the mini apps would require porting the apps from C to C++ and replacing the inherent barrier synchronization with HPX's asynchronous communication. This project would be a great addition to the HPX benchmark suite and the HPC community.
  • Difficulty: Medium
  • Expected result: New implementation of a Mantevo mini app or apps.
  • Knowledge Prerequisite: C, C++
  • Mentor: Patricia Grubel and Thomas Heller

Create An HPX Communicator for Trilinos Project Teuchos Subpackage

  • Abstract: The Trilinos project (http://trilinos.org/) consists of many libraries for HPC applications in several capability areas (http://trilinos.org/capability-areas/). Communication between parallel processes is handled by an abstract communication API (http://trilinos.org/docs/dev/packages/teuchos/doc/html/index.html#TeuchosComm_src) which currently has implementations for MPI and serial only. Extending the implementation with an HPX backend would permit any of the Teuchos-enabled Trilinos libraries to run in parallel using HPX in place of MPI. Of particular interest is the mesh partitioning library Zoltan2 (http://trilinos.org/packages/zoltan2/), which would be used as a test case for the new communications interface. Note that some new collective HPX algorithms may be required to fulfill the API requirements (see the all-to-all communications project above).
  • Difficulty: Medium-Hard
  • Expected result: A demo application for partitioning meshes using HPX and Zoltan.
  • Knowledge Prerequisite: C, C++, (MPI)
  • Mentor: John Biddiscombe and Thomas Heller

Extension/Evaluation of LibGeoDecomp::Region As An Alternative Adjacency Container to Boost Graph Library

  • Abstract: The Boost Graph Library (BGL) offers a set of data structures to store various kinds of graphs, together with generic algorithms to operate on these. For certain classes of graphs, which are relevant to high performance computing (HPC), the adjacency information could be stored more efficiently via a data structure we have developed for LibGeoDecomp: the Region class. Region stores basically a set of 1D/2D/3D coordinates with run-length compression. A set of 2D coordinates is equivalent to a set of directed edges of a graph. For this project we'd be interested in an adaptation of the Region interface to make it usable within BGL. The expected interface of adjacency classes is well defined within BGL.
  • Difficulty: Medium
  • Expected result: Adapter class or extended Region class for use in BGL, evaluation via set of relevant benchmarks
  • Knowledge Prerequisite: basic C++ and basic graph theory
  • Mentor: Andreas Schaefer (gentryx)

Add Mask Move/Assign Wrappers for Vectorization Intrinsics

  • Abstract: Vectorization is a key technique to leverage the full potential of modern CPUs. LibFlatArray is a C++ library which helps with transitioning scalar numerical algorithms on objects to vectorized implementations. It comes with expression templates that enable the user to write code which encapsulates vector intrinsics but appears to the user like standard mathematical datatypes and operations. These templates (dubbed short_vec in LibFlatArray) currently lack a mechanism to selectively set certain lanes of the vector registers via conditional masks. If we had this functionality we would be able to represent if/then/else constructs much more idiomatically. Intrinsics for mask generation/application are readily available in all current vector instruction sets (Intel/ARM/IBM); we simply lack convenient/efficient wrappers to utilize them. An intrinsics-level example of what the wrappers should hide is shown after this list.
  • Difficulty: Medium
  • Expected result: Wrapper functions for comparison (to generate masks) and conditional assignment (using masks)
  • Knowledge Prerequisite: basic C++, vectorization via SSE, AVX/AVX2/AVX512
  • Mentor: Andreas Schaefer (gentryx)
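
For context, here is what the raw AVX version of a conditional assignment looks like; the wrappers should let users express the same thing through short_vec comparisons and assignment. The function below is illustrative and not part of LibFlatArray.

```cpp
#include <immintrin.h>

// y[i] = (x[i] > 0) ? a[i] : b[i], eight floats at a time using AVX.
void select_positive(float const* x, float const* a, float const* b, float* y, int n)
{
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        // The comparison produces an all-ones/all-zeros mask per lane ...
        __m256 mask = _mm256_cmp_ps(vx, _mm256_setzero_ps(), _CMP_GT_OQ);
        // ... which blendv uses to pick va where the mask is set, vb elsewhere.
        _mm256_storeu_ps(y + i, _mm256_blendv_ps(vb, va, mask));
    }
    for (int i = n - n % 8; i < n; ++i)    // scalar tail
        y[i] = x[i] > 0.0f ? a[i] : b[i];
}
```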

Implement Your Favorite Parcelport Backend

  • Abstract: The HPX runtime system uses a module called the parcelport to deliver parcels (network packages) over the network. An efficient implementation of this layer is indispensable, and we are searching for new backend implementations based on CCI, UCX or libfabric. All of these abstractions over various network transport layers offer the ability to do fast, one-sided RDMA transfers. The purpose of this project is to explore one of these and implement a parcelport using it.
  • Difficulty: Medium-Hard
  • Expected result: A proof of concept for a chosen backend implementation with performance results
  • Knowledge Prerequisite: C++, Basic understanding of Network transports
  • Mentor: Thomas Heller

Implement a Faster Associative Container for GIDs

  • Abstract: The HPX runtime system uses the Active Global Address Space (AGAS) to address global objects. Objects in HPX are identified by a 128-bit unique global identifier, abbreviated as GID. The performance of HPX relies on fast lookups of GIDs in associative containers. We have experimented with binary search trees (std::map) and hash maps (std::unordered_map). However, we believe that we can implement a search data structure based on n-ary trees, tries or radix trees that exploits the structure of GIDs such that it allows faster lookup and insertion. A toy example of exploiting that structure is sketched after this list.
  • Difficulty: Medium-Hard
  • Expected result: Various different container approaches to choose from together with realistic benchmarks to show the performance properties
  • Knowledge Prerequisite: C++, Algorithms
  • Mentor: Thomas Heller
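
A toy illustration of the idea, not a proposed design: split each 128-bit GID into its upper and lower 64-bit halves and index a first level on the upper bits before hashing the lower bits. All names below are hypothetical; the real project would evaluate tries and radix trees with proper benchmarks.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct gid {
    std::uint64_t msb;    // upper 64 bits: locality id, type, flags
    std::uint64_t lsb;    // lower 64 bits: object identifier
};

template <typename Value>
class gid_map {
public:
    explicit gid_map(std::size_t buckets = 256) : first_level_(buckets) {}

    void insert(gid const& id, Value value) {
        bucket(id)[id.lsb] = std::move(value);
    }

    Value* find(gid const& id) {
        auto& b = bucket(id);
        auto it = b.find(id.lsb);
        return it == b.end() ? nullptr : &it->second;
    }

private:
    // Use the upper bits to spread GIDs from different localities/types
    // across independent second-level maps.
    std::unordered_map<std::uint64_t, Value>& bucket(gid const& id) {
        return first_level_[id.msb % first_level_.size()];
    }

    std::vector<std::unordered_map<std::uint64_t, Value>> first_level_;
};
```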

HPX in the Cloud

  • Abstract: The HPX runtime system permits the dynamic (re-)allocation of computing resources during execution. This feature is especially interesting for dynamic execution environments, i.e., Cloud Computing, as the availability of resources is not necessarily statically defined before the deployment of a computational task. Yet, the implementation of a demonstrator is not merely restricted to writing source code: as there is currently little experience with dynamic resizing and redistribution of tasks at runtime, we also want to develop a set of heuristics for HPX on dynamic execution environments.
  • Difficulty: Medium-Hard
  • Expected result: Sample Benchmarks and Implementations for dynamic scaling using HPX
  • Knowledge Prerequisite: C++, MPI, virtualization technologies (VMs and containers)
  • Mentor: Alexander Ditter

Hybrid Cloud-Batch-Scheduling

  • Abstract: Currently HPX is mostly executed on conventional batch-scheduled HPC systems, using tools like SLURM, TORQUE or PBS for the deployment of jobs. Along with the trend towards more dynamic execution environments, i.e., Cloud Computing, grows the need to supply available resources from existing cluster systems in order to deploy software in virtual machines or containers. For this reason, we want to provide an intermediate scheduler that allows the concurrent use of batch scheduling and cloud middleware on the same physical infrastructure. This meta scheduler receives requests for the deployment of virtual machines and maps them onto the batch scheduling system that manages the cluster infrastructure. An existing approach may be extended, refactored or just considered for inspirational purposes.
  • Difficulty: Medium-Hard
  • Expected result: An extensible open source meta scheduler for the concurrent use of cloud middleware (e.g., OpenNebula or OpenStack) on top of batch systems.
  • Knowledge Prerequisite: Basic knowledge on virtualization (e.g. libvirt), software engineering and scripting language(s) - e.g., Python
  • Mentor: Alexander Ditter

Modularization of HPX

  • Abstract: HPX is currently built (mostly) as a single, monolithic library with strong dependencies between modules. For the long-term sustainability of HPX it would be important to untangle those dependencies to ensure that modules don't have unnecessary dependencies. This would allow users to more selectively use parts of the HPX project, and allow developers to more easily test and develop new functionality. Work has been started to separate HPX into smaller modules (https://github.com/STEllAR-GROUP/hpx/issues/3636), and this project would continue from there. This project can be tuned from easy (simply separating parts of HPX into separate modules) to difficult (restructuring HPX to be easier to manage).
  • Difficulty: Easy-Hard
  • Expected result: Ideally all of HPX split into suitable modules, but even partial progress toward this goal is valuable.
  • Knowledge Prerequisite: CMake and C++
  • Mentor: Mikael Simberg and Thomas Heller

Project: Template

  • Abstract:
  • Difficulty:
  • Expected result:
  • Knowledge Prerequisite:
  • Mentor: