
Checkpoint/Resume #2081

Merged

73 commits
1145f0c
Add basic serialization infrastructure, some simple tests.
thorstenhater Jan 17, 2023
25005b7
Add the tests.
thorstenhater Jan 18, 2023
6d3030f
Polish the tests a bit.
thorstenhater Jan 18, 2023
5a9fd9b
SerDes for Simulation w/o cell_groups!
thorstenhater Jan 18, 2023
1e5c3e3
SerDes for Simulation w/o cell_groups!
thorstenhater Jan 18, 2023
60c918d
Simplify and enhance serdes!
thorstenhater Jan 19, 2023
99bd50f
Drill down into cable cell
thorstenhater Jan 19, 2023
837c1fe
Squash the code size in SERDES. Not necessarily the complexity.
thorstenhater Jan 19, 2023
304cf48
Proceed serialization; intermediate checkpoint (!) before refactoring…
thorstenhater Jan 19, 2023
90b882c
Enable SERDES on events and streams.
thorstenhater Jan 23, 2023
3794cd9
Get single cell simulation SERDES in order.
thorstenhater Jan 23, 2023
97aa29d
Get single cell simulation SERDES in order.
thorstenhater Jan 23, 2023
930e90a
Tweak the network.
thorstenhater Jan 23, 2023
ab6100e
Make network larger.
thorstenhater Jan 23, 2023
61bcbb6
CMaaaaaaaake.
thorstenhater Feb 1, 2023
97c5b92
Black.
thorstenhater Feb 1, 2023
adecbff
CMake fussing.
thorstenhater Feb 2, 2023
f4e0c2b
Merge remote-tracking branch 'origin/master' into feat/check-point-ch…
thorstenhater Feb 2, 2023
a31d318
Split out writer into Arborio.
thorstenhater Feb 2, 2023
7385dc7
Add missing file.
thorstenhater Feb 3, 2023
43b1538
Fix CMake?
thorstenhater Feb 3, 2023
e5257f9
Fix CMake?
thorstenhater Feb 3, 2023
600c398
Snapshot
thorstenhater Feb 7, 2023
fe90ae1
Serdes is now freestanding
thorstenhater Feb 8, 2023
eb7b854
Add docs
thorstenhater Feb 8, 2023
5a403df
Fix-up our namespacing.
thorstenhater Feb 9, 2023
70443fd
Warnings fixed.
thorstenhater Feb 9, 2023
5aec9f4
merge.
thorstenhater Feb 9, 2023
75b3a61
Appease the linker.
thorstenhater Feb 10, 2023
b611178
Docs.
thorstenhater Feb 10, 2023
3ccd4d2
Re-enable mc_cell_group serdes.
thorstenhater Feb 11, 2023
48527f8
Get namespaces "right".
thorstenhater Feb 11, 2023
feda5eb
Get namespaces "right"?.
thorstenhater Feb 13, 2023
7275eaf
Schedules.
thorstenhater Feb 14, 2023
6a1fb50
GPU tests. Includes.
thorstenhater Feb 14, 2023
9b7353a
GPU tests. Includes.
thorstenhater Feb 14, 2023
05e7423
Revert changes in crazytown.
thorstenhater Feb 14, 2023
5ed83e7
Add Python.
thorstenhater Feb 14, 2023
2f74882
Add a Test .
thorstenhater Feb 14, 2023
2754ac2
Add a Deserialize test.
thorstenhater Feb 14, 2023
48850c9
Linters.
thorstenhater Feb 14, 2023
d6e0376
Typo.
thorstenhater Feb 14, 2023
88450cf
Linters.
thorstenhater Feb 14, 2023
a5c766e
Stupid mistake.
thorstenhater Feb 15, 2023
c41b8aa
No string equality.
thorstenhater Feb 15, 2023
8676056
Add Python example.
thorstenhater Feb 15, 2023
524d581
Linter.
thorstenhater Feb 15, 2023
1b44eff
Merge.
thorstenhater Apr 18, 2023
c972bd0
clarify scope and docs.
thorstenhater Apr 18, 2023
c3ed29e
Merge remote-tracking branch 'origin/master' into feat/check-point-ch…
thorstenhater Apr 19, 2023
1a45b8c
Fix typos and style.
thorstenhater Apr 24, 2023
324cbe9
Polish docs.
thorstenhater Apr 25, 2023
782f66e
Apply suggestions from code review
thorstenhater Jun 7, 2023
2c7b7b1
Fix review comments: Namespaces, and more.
thorstenhater Jun 7, 2023
7ee0d7c
Merged and deconflicted
thorstenhater Jun 26, 2023
6d44618
Merge and deconflict II.
thorstenhater Jun 26, 2023
a36851b
Merge remote-tracking branch 'origin/master' into feat/check-point-ch…
thorstenhater Jul 27, 2023
19ef79e
Start fixing GPU problems
thorstenhater Jul 27, 2023
1cd59b0
Merge remote-tracking branch 'refs/remotes/hater/feat/check-point-cha…
thorstenhater Jul 27, 2023
36e3762
Snapshot
thorstenhater Jul 27, 2023
b5a47b4
Hoist ser/des from namespace
thorstenhater Jul 28, 2023
8b7c9c8
Fix namespaces?
thorstenhater Jul 31, 2023
4f90020
Bump CUDA to 17, GPU compiles now.
thorstenhater Jul 31, 2023
125a90a
More namespace fun.
thorstenhater Jul 31, 2023
99b1883
Minor tweak.
thorstenhater Jul 31, 2023
06849da
GPU back to working
thorstenhater Aug 1, 2023
d090082
Bump CUDA, allow Hopper.
thorstenhater Aug 2, 2023
b23487c
Merge remote-tracking branch 'hater/feat/check-point-charlie' into fe…
thorstenhater Aug 2, 2023
9313422
Add virtual dtor to silence warning, remove redundant overload
thorstenhater Aug 4, 2023
6c49379
Merge remote-tracking branch 'origin/master' into feat/check-point-ch…
thorstenhater Aug 4, 2023
3b1be79
Clean-up.
thorstenhater Aug 4, 2023
1ca974a
While we are at it, reformat wrappers.
thorstenhater Aug 4, 2023
31617e9
Merge remote-tracking branch 'origin/master' into feat/check-point-ch…
thorstenhater Aug 7, 2023
28 changes: 19 additions & 9 deletions CMakeLists.txt
@@ -136,9 +136,20 @@ if(ARB_GPU STREQUAL "cuda")
set(CMAKE_CUDA_HOST_COMPILER ${CMAKE_CXX_COMPILER})
enable_language(CUDA)
find_package(CUDAToolkit)
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
set(CMAKE_CUDA_ARCHITECTURES 60 70 80)
if(${CUDAToolkit_VERSION_MAJOR} GREATER_EQUAL 12)
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
# Pascal, Volta, Ampere, Hopper
set(CMAKE_CUDA_ARCHITECTURES 60 70 80 90)
endif()
elseif(${CUDAToolkit_VERSION_MAJOR} GREATER_EQUAL 11)
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
# Pascal, Volta, Ampere
set(CMAKE_CUDA_ARCHITECTURES 60 70 80)
endif()
else()
message(FATAL_ERROR "Need at least CUDA 11, got ${CUDAToolkit_VERSION_MAJOR}")
endif()

# We _still_ need this, otherwise CUDA symbols will not be exported
# from libarbor.a, leading to linker errors when linking external clients.
# Unit tests are NOT external enough. Re-review this somewhere in the
@@ -181,7 +192,7 @@ include("CheckCompilerXLC")
include("CompilerOptions")
add_compile_options("$<$<COMPILE_LANGUAGE:CXX>:${CXXOPT_WALL}>")
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CUDA_STANDARD 14)
set(CMAKE_CUDA_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

@@ -241,19 +252,18 @@ install(TARGETS arborio-public-deps EXPORT arborio-targets)

install(PROGRAMS scripts/arbor-build-catalogue DESTINATION ${CMAKE_INSTALL_BINDIR})
install(FILES mechanisms/BuildModules.cmake DESTINATION ${ARB_INSTALL_DATADIR})
# External libraries in `ext` sub-directory: json, tinyopt and randon123.
# Creates interface libraries `ext-json`, `ext-tinyopt` and `ext-random123`

# External libraries in `ext` sub-directory: json, tinyopt and random123.
# Creates interface libraries `ext-tinyopt` and `ext-random123`

cmake_dependent_option(ARB_USE_BUNDLED_FMT "Use bundled FMT lib." ON "ARB_USE_BUNDLED_LIBS" OFF)
cmake_dependent_option(ARB_USE_BUNDLED_PUGIXML "Use bundled XML lib." ON "ARB_USE_BUNDLED_LIBS" OFF)
cmake_dependent_option(ARB_USE_BUNDLED_GTEST "Use bundled GoogleTest." ON "ARB_USE_BUNDLED_LIBS" OFF)

cmake_dependent_option(ARB_USE_BUNDLED_JSON "Use bundled Niels Lohmann's json library." ON "ARB_USE_BUNDLED_LIBS" OFF)
if(NOT ARB_USE_BUNDLED_JSON)
find_package(nlohmann_json)
set(json_library_name nlohmann_json::nlohmann_json)
else()
unset(nlohmann_json_DIR CACHE)
find_package(nlohmann_json 3.11.2 CONFIG REQUIRED)
message(STATUS "Using external JSON = ${nlohmann_json_VERSION}")
endif()

cmake_dependent_option(ARB_USE_BUNDLED_RANDOM123 "Use bundled Random123 lib." ON "ARB_USE_BUNDLED_LIBS" OFF)
23 changes: 23 additions & 0 deletions arbor/backends/event.hpp
@@ -2,6 +2,7 @@

#include <arbor/common_types.hpp>
#include <arbor/fvm_types.hpp>
#include <arbor/serdes.hpp>
#include <arbor/mechanism_abi.h>
#include <arbor/generic_event.hpp>

@@ -17,10 +18,22 @@ struct target_handle {
cell_local_size_type mech_index; // instance of the mechanism

target_handle() = default;

target_handle(cell_local_size_type mech_id, cell_local_size_type mech_index):
mech_id(mech_id), mech_index(mech_index) {}

ARB_SERDES_ENABLE(target_handle, mech_id, mech_index);
};

}

template<typename K>
void serialize(arb::serializer &ser, const K &k, const arb::target_handle&);
template<typename K>
void deserialize(arb::serializer &ser, const K &k, arb::target_handle&);

namespace arb {

struct deliverable_event {
time_type time = 0;
float weight = 0;
@@ -29,11 +42,21 @@ struct deliverable_event {
deliverable_event() = default;
deliverable_event(time_type time, target_handle handle, float weight):
time(time), weight(weight), handle(handle) {}

ARB_SERDES_ENABLE(deliverable_event, time, weight, handle);
};

template<>
struct has_event_index<deliverable_event> : public std::true_type {};

// Subset of event information required for mechanism delivery.
struct deliverable_event_data {
cell_local_size_type mech_id; // same as target_handle::mech_id
cell_local_size_type mech_index; // same as target_handle::mech_index
float weight;
ARB_SERDES_ENABLE(deliverable_event_data, mech_id, mech_index, weight);
};

// Stream index accessor function for multi_event_stream:
inline cell_local_size_type event_index(const arb_deliverable_event_data& ed) {
return ed.mech_index;
1 change: 0 additions & 1 deletion arbor/backends/event_stream_base.hpp
@@ -58,7 +58,6 @@ class event_stream_base {
ev_spans_.clear();
index_ = 0;
}

};

} // namespace arb
60 changes: 55 additions & 5 deletions arbor/backends/gpu/event_stream.hpp
@@ -9,12 +9,17 @@
#include "util/rangeutil.hpp"
#include "util/transform.hpp"
#include "threading/threading.hpp"
#include <arbor/mechanism_abi.h>

ARB_SERDES_ENABLE_EXT(arb_deliverable_event_data, mech_index, weight);

namespace arb {
namespace gpu {

template <typename Event>
class event_stream : public event_stream_base<Event, typename memory::device_vector<::arb::event_data_type<Event>>::view_type> {
class event_stream :
public event_stream_base<Event,
typename memory::device_vector<::arb::event_data_type<Event>>::view_type> {
public:
using base = event_stream_base<Event, typename memory::device_vector<::arb::event_data_type<Event>>::view_type>;
using size_type = typename base::size_type;
@@ -70,12 +75,14 @@ class event_stream : public event_stream_base<Event, typename memory::device_vec
// host span
auto host_span = memory::make_view(base::ev_data_)(offset, offset + size);
// make event data and copy
std::copy_n(util::transform_view(staged[i], [](const auto& x) {
return event_data(x);}).begin(), size, host_span.begin());
std::copy_n(util::transform_view(staged[i],
[](const auto& x) { return event_data(x); }).begin(),
size,
host_span.begin());
// sort if necessary
if constexpr (has_event_index<Event>::value) {
util::stable_sort_by(host_span, [](const event_data_type& ed) {
return event_index(ed); });
util::stable_sort_by(host_span,
[](const event_data_type& ed) { return event_index(ed); });
}
// copy to device
memory::copy_async(host_span, base::ev_spans_[i]);
@@ -84,6 +91,49 @@
arb_assert(num_events == base::ev_data_.size());
}

friend void serialize(serializer& ser, const std::string& k, const event_stream<Event>& t) {
Review comment (Contributor): Wouldn't it be less error-prone if the base class template event_stream_base had its own serialization hooks which could be called from here?

Reply (thorstenhater, Aug 4, 2023): Hmm. It's a tradeoff. In the current way I can use the ENABLE macro in the multicore subclass and need a custom serializer in the gpu case. Your proposal is slightly more code to write:

  1. ENABLE in base
  2. custom one-liner in multicore
  3. custom serializer in gpu minus the base members plus one line for serializing base

The problem arises when writing out the event spans. One is a GPU memory view aka (ptr, length) and the multicore is a range aka (ptr_beg, ptr_end). It seems prudent to maybe merge representations first?

Reply (Contributor): yeah, I see your point. Let's leave it the way it is.

ser.begin_write_map(::arb::to_serdes_key(k));
ARB_SERDES_WRITE(ev_data_);
ser.begin_write_map("ev_spans_");
auto base_ptr = t.device_ev_data_.data();
for (size_t ix = 0; ix < t.ev_spans_.size(); ++ix) {
ser.begin_write_map(std::to_string(ix));
const auto& span = t.ev_spans_[ix];
ser.write("offset", static_cast<unsigned long long>(span.begin() - base_ptr));
ser.write("size", static_cast<unsigned long long>(span.size()));
ser.end_write_map();
}
ser.end_write_map();
ARB_SERDES_WRITE(index_);
ARB_SERDES_WRITE(device_ev_data_);
ARB_SERDES_WRITE(offsets_);
ser.end_write_map();
}

friend void deserialize(serializer& ser, const std::string& k, event_stream<Event>& t) {
ser.begin_read_map(::arb::to_serdes_key(k));
ARB_SERDES_READ(ev_data_);
ser.begin_read_map("ev_spans_");
for (size_t ix = 0; ser.next_key(); ++ix) {
ser.begin_read_map(std::to_string(ix));
unsigned long long offset = 0, size = 0;
ser.read("offset", offset);
ser.read("size", size);
typename base::span_type span{t.ev_data_.data() + offset, size};
if (ix < t.ev_spans_.size()) {
t.ev_spans_[ix] = span;
} else {
t.ev_spans_.emplace_back(span);
}
ser.end_read_map();
}
ser.end_read_map();
ARB_SERDES_READ(index_);
ARB_SERDES_READ(device_ev_data_);
ARB_SERDES_READ(offsets_);
ser.end_read_map();
}

private:
template<typename D>
static void resize(D& d, std::size_t size) {
3 changes: 3 additions & 0 deletions arbor/backends/gpu/rand.hpp
@@ -4,6 +4,7 @@
#include <vector>

#include <arbor/mechanism.hpp>
#include <arbor/serdes.hpp>

#include <util/pimpl.hpp>
#include <backends/rand_fwd.hpp>
@@ -19,6 +20,8 @@ class random_numbers {

void update(mechanism& m);

ARB_SERDES_ENABLE(random_numbers, data_, random_number_update_counter_);

private:
// random number device storage
array data_;
16 changes: 7 additions & 9 deletions arbor/backends/gpu/shared_state.cpp
@@ -22,6 +22,8 @@
#include "util/range.hpp"
#include "util/strprintf.hpp"

#include <iostream>

using arb::memory::make_const_view;

namespace arb {
@@ -175,11 +177,11 @@ shared_state::shared_state(task_system_handle tp,
const std::vector<arb_value_type>& diam,
const std::vector<arb_value_type>& area,
const std::vector<arb_index_type>& src_to_spike_,
const fvm_detector_info& detector,
const fvm_detector_info& detector_info,
unsigned, // align parameter ignored
arb_seed_type cbprng_seed_):
thread_pool(tp),
n_detector(detector.count),
n_detector(detector_info.count),
n_cv(n_cv_),
cv_to_cell(make_const_view(cv_to_cell_vec)),
voltage(n_cv_),
Expand All @@ -193,11 +195,7 @@ shared_state::shared_state(task_system_handle tp,
src_to_spike(make_const_view(src_to_spike_)),
cbprng_seed(cbprng_seed_),
sample_events(thread_pool),
watcher{n_cv_,
src_to_spike.data(),
detector.cv,
detector.threshold,
detector.ctx}
watcher{n_cv_, src_to_spike.data(), detector_info}
{
memory::fill(time_since_spike, -1.0);
add_scalar(temperature_degC.size(), temperature_degC.data(), -273.15);
@@ -242,8 +240,8 @@ void shared_state::instantiate(mechanism& m,
m.ppack_.time_since_spike = time_since_spike.data();
m.ppack_.n_detectors = n_detector;

if (storage.find(id) != storage.end()) throw arb::arbor_internal_error("Duplicate mech id in shared state");
auto& store = storage.emplace(id, thread_pool).first->second;
if (storage.count(id)) throw arb::arbor_internal_error("Duplicate mech id in shared state");
auto& store = storage.emplace(id, mech_storage{thread_pool}).first->second;

// Allocate view pointers
store.state_vars_ = std::vector<arb_value_type*>(m.mech_.n_state_vars);
62 changes: 38 additions & 24 deletions arbor/backends/gpu/shared_state.hpp
@@ -50,7 +50,7 @@ struct ARB_ARBOR_API ion_state {
array Xi_; // (mM) internal concentration
array Xd_; // (mM) diffusive concentration
array Xo_; // (mM) external concentration
array gX_; // (kS/m²) per-species conductivity
array gX_; // (kS/m²) per-species conductivity

array init_Xi_; // (mM) area-weighted initial internal concentration
array init_Xo_; // (mM) area-weighted initial external concentration
@@ -64,9 +64,7 @@

ion_state() = default;

ion_state(const fvm_ion_config& ion_data,
unsigned align,
solver_ptr ptr);
ion_state(const fvm_ion_config& ion_data, unsigned align, solver_ptr ptr);

// Set ion concentrations to weighted proportion of default concentrations.
void init_concentration();
@@ -116,24 +114,23 @@ struct ARB_ARBOR_API istim_state {
istim_state() = default;
};

struct ARB_ARBOR_API shared_state: shared_state_base<shared_state, array, ion_state> {
struct mech_storage {
mech_storage() = default;
mech_storage(task_system_handle tp) : deliverable_events_(tp) {}

array data_;
iarray indices_;
std::vector<arb_value_type> globals_;
std::vector<arb_value_type*> parameters_;
std::vector<arb_value_type*> state_vars_;
std::vector<arb_ion_state> ion_states_;
memory::device_vector<arb_value_type*> parameters_d_;
memory::device_vector<arb_value_type*> state_vars_d_;
memory::device_vector<arb_ion_state> ion_states_d_;
random_numbers random_numbers_;
deliverable_event_stream deliverable_events_;
};
struct mech_storage {
mech_storage() = default;
mech_storage(task_system_handle tp) : deliverable_events_(tp) {}
array data_;
iarray indices_;
std::vector<arb_value_type> globals_;
std::vector<arb_value_type*> parameters_;
std::vector<arb_value_type*> state_vars_;
std::vector<arb_ion_state> ion_states_;
memory::device_vector<arb_value_type*> parameters_d_;
memory::device_vector<arb_value_type*> state_vars_d_;
memory::device_vector<arb_ion_state> ion_states_d_;
random_numbers random_numbers_;
deliverable_event_stream deliverable_events_;
};

struct ARB_ARBOR_API shared_state: shared_state_base<shared_state, array, ion_state> {
task_system_handle thread_pool;

using cable_solver = arb::gpu::matrix_state_fine<arb_value_type, arb_index_type>;
@@ -183,7 +180,7 @@ struct ARB_ARBOR_API shared_state: shared_state_base<shared_state, array, ion_st
const std::vector<arb_index_type>& cv_to_cell_vec,
const fvm_cv_discretization& D,
const std::vector<arb_index_type>& src_to_spike,
const fvm_detector_info& detector,
const fvm_detector_info& detector_info,
const std::unordered_map<std::string, fvm_ion_config>& ions,
const fvm_stimulus_config& stims,
unsigned align,
@@ -197,7 +194,7 @@
D.diam_um,
D.cv_area,
src_to_spike,
detector,
detector_info,
align,
cbprng_seed_}
{
@@ -216,7 +213,7 @@
const std::vector<arb_value_type>& diam,
const std::vector<arb_value_type>& area,
const std::vector<arb_index_type>& src_to_spike,
const fvm_detector_info& detector,
const fvm_detector_info& detector_info,
unsigned, // align parameter ignored
arb_seed_type cbprng_seed_ = 0u);

@@ -251,4 +248,21 @@ struct ARB_ARBOR_API shared_state: shared_state_base<shared_state, array, ion_st
ARB_ARBOR_API std::ostream& operator<<(std::ostream& o, shared_state& s);

} // namespace gpu

ARB_SERDES_ENABLE_EXT(gpu::ion_state, Xd_, gX_);
ARB_SERDES_ENABLE_EXT(gpu::mech_storage,
data_,
// NOTE(serdes) ion_states_, this is just a bunch of pointers
random_numbers_,
deliverable_events_);
ARB_SERDES_ENABLE_EXT(gpu::shared_state,
cbprng_seed,
ion_data,
storage,
voltage,
current_density,
conductivity,
time_since_spike,
time, time_to,
dt);
} // namespace arb
4 changes: 2 additions & 2 deletions arbor/backends/gpu/stimulus.cu
@@ -1,11 +1,11 @@
#include <cmath>

#include <arbor/fvm_types.hpp>
#include <arbor/gpu/gpu_api.hpp>
#include <arbor/gpu/gpu_common.hpp>
#include <arbor/gpu/math_cu.hpp>

#include "backends/gpu/stimulus.hpp"
#include "stimulus.hpp"


namespace arb {
namespace gpu {