runtime#

group Runtime and library contexts

Runtime and library contexts for managing and launching tasks.

Enums

enum class ExceptionMode : std::uint8_t#

Enum for exception handling modes.

Values:

enumerator IMMEDIATE#

Handles exceptions immediately. Any throwable task blocks until completion.

enumerator DEFERRED#

Defers all exceptions until the current scope exits.

enumerator IGNORED#

All exceptions are ignored.

Functions

std::int32_t start(std::int32_t argc, char *argv[])#

Starts the Legate runtime.

Deprecated:

Use the argument-less version of this function instead: start()

See also

start()

Parameters:
  • argc – Argument is ignored.

  • argv – Argument is ignored.

Returns:

Always returns 0

void start()#

Starts the Legate runtime.

This makes the runtime ready to accept requests made via its APIs. It may be called any number of times; only the first call has any effect.

Throws:
  • ConfigurationError – If runtime configuration fails.

  • AutoConfigurationError – If the automatic configuration heuristics fail.
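
A minimal lifecycle sketch (assuming the umbrella legate.h include):

#include <legate.h>

int main()
{
  legate::start();  // idempotent: only the first call has any effect
  // ... create libraries and stores, launch tasks ...
  return legate::finish();  // non-zero if the runtime encountered a failure
}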

bool has_started()#

Checks if the runtime has started.

Returns:

true if the runtime has started, false if the runtime has not started yet or after finish() is called.

bool has_finished()#

Checks if the runtime has finished.

Returns:

true if finish() has been called, false otherwise.

std::int32_t finish()#

Waits for the runtime to finish.

Client code must call this to make sure all Legate tasks run to completion.

Returns:

Non-zero value when the runtime encountered a failure, 0 otherwise

void destroy()#
template<typename T>
void register_shutdown_callback(T &&callback)#

Registers a callback that should be invoked during the runtime shutdown.

Any callbacks will be invoked before the core library and the runtime are destroyed. Callbacks must not throw. Multiple registrations of the same callback are not deduplicated, so clients that want a callback invoked only once are responsible for registering it only once. Callbacks are invoked in FIFO order; any callback registered by another callback is appended to the end of the list. Callbacks may launch tasks, and the runtime ensures their completion before initiating its shutdown.

Parameters:

callback – A shutdown callback
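
For example, a sketch of registering a non-throwing callback (my_flush_logs is a hypothetical user function):

// Invoked in FIFO order before the core library and the runtime are destroyed.
legate::register_shutdown_callback([]() noexcept { my_flush_logs(); });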

mapping::Machine get_machine()#

Returns the machine for the current scope.

Returns:

Machine object

bool is_running_in_task()#

Checks if the code is running in a task.

Returns:

true if the code is running in a task, false otherwise

class Library
#include <legate/runtime/library.h>

A library class that provides APIs for registering components.

Public Functions

std::string_view get_library_name() const

Returns the name of the library.

Returns:

Library name

std::string_view get_task_name(LocalTaskID local_task_id) const

Returns the name of a task.

Parameters:

local_task_id – Task id

Returns:

Name of the task

Scalar get_tunable(std::int64_t tunable_id, const Type &type)

Retrieves a tunable parameter.

Parameters:
  • tunable_id – ID of the tunable parameter

  • type – Type of the tunable value

Returns:

The value of the tunable parameter in a Scalar

template<typename REDOP>
GlobalRedopID register_reduction_operator(
LocalRedopID redop_id
)

Registers a library specific reduction operator.

The type parameter REDOP points to a class that implements a reduction operator. Each reduction operator class has the following structure:

struct RedOp {
  using LHS = ...; // Type of the LHS values
  using RHS = ...; // Type of the RHS values

  static const RHS identity = ...; // Identity of the reduction operator

  template <bool EXCLUSIVE>
  LEGATE_HOST_DEVICE inline static void apply(LHS& lhs, RHS rhs)
  {
    ...
  }
  template <bool EXCLUSIVE>
  LEGATE_HOST_DEVICE inline static void fold(RHS& rhs1, RHS rhs2)
  {
    ...
  }
};

Semantically, Legate performs reductions of values V0, …, Vn to element E in the following way:

RHS T = RedOp::identity;
RedOp::fold(T, V0);
...
RedOp::fold(T, Vn);
RedOp::apply(E, T);

I.e., Legate gathers all reduction contributions using fold and applies the accumulator to the element using apply.

Oftentimes, the LHS and RHS of a reduction operator are the same type and fold and apply perform the same computation, but that’s not mandatory. For example, one may implement a reduction operator for subtraction, where the fold would sum up all RHS values whereas the apply would subtract the aggregate value from the LHS.

The reduction operator ID (redop_id) can be local to the library but must be unique for each operator within the library.

Finally, the contract for apply and fold is that they must update the reference atomically when EXCLUSIVE is false.

Warning

Because the runtime can capture the reduction operator and wrap it with the CUDA boilerplate only at compile time, the registration call should be made in a .cu file compiled by NVCC. Otherwise, the runtime registers the reduction operator in CPU-only mode, which can degrade performance when the program performs reductions on non-scalar stores.

Template Parameters:

REDOP – Reduction operator to register

Parameters:

redop_id – Library-local reduction operator ID

Returns:

Global reduction operator ID
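
For concreteness, a sketch of a sum reduction operator and its registration follows. SumRedOp and the library-local ID 0 are illustrative; the non-exclusive path uses C++20 std::atomic_ref for the host side only (a real operator would also need a device-capable atomic path for GPU variants):

#include <atomic>
#include <cstdint>

struct SumRedOp {
  using LHS = std::int64_t;
  using RHS = std::int64_t;

  static constexpr RHS identity = 0;

  template <bool EXCLUSIVE>
  static void apply(LHS& lhs, RHS rhs)
  {
    if constexpr (EXCLUSIVE) {
      lhs += rhs;  // exclusive access: plain update
    } else {
      // Non-exclusive access: update atomically
      std::atomic_ref<LHS>{lhs}.fetch_add(rhs, std::memory_order_relaxed);
    }
  }

  template <bool EXCLUSIVE>
  static void fold(RHS& rhs1, RHS rhs2)
  {
    if constexpr (EXCLUSIVE) {
      rhs1 += rhs2;
    } else {
      std::atomic_ref<RHS>{rhs1}.fetch_add(rhs2, std::memory_order_relaxed);
    }
  }
};

// Registration; returns the global ID under which the operator is known.
const legate::GlobalRedopID global_id =
  library.register_reduction_operator<SumRedOp>(legate::LocalRedopID{0});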

void register_task(
LocalTaskID local_task_id,
const TaskInfo &task_info
)

Register a task with the library.

See also

find_task()

Parameters:
  • local_task_id – The library-local task ID to assign for this task.

  • task_info – The TaskInfo object describing the task.

Throws:
  • std::out_of_range – If the chosen local task ID exceeds the maximum local task ID for the library.

  • std::invalid_argument – If the task (or another task with the same local_task_id) has already been registered with the library.
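
In practice, tasks are usually declared through the legate::LegateTask CRTP helper, whose register_variants() performs this registration internally. A sketch (the task name, ID 0, and variant body are illustrative):

struct HelloTask : public legate::LegateTask<HelloTask> {
  static inline const auto TASK_CONFIG = legate::TaskConfig{legate::LocalTaskID{0}};

  static void cpu_variant(legate::TaskContext context)
  {
    // ... task body ...
  }
};

// Registers every variant defined on HelloTask with the library.
HelloTask::register_variants(library);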

TaskInfo find_task(LocalTaskID local_task_id) const

Look up a task registered with the library.

See also

register_task()

Parameters:

local_task_id – The task ID to find.

Throws:

std::out_of_range – If the task could not be found.

Returns:

The TaskInfo object describing the task.

struct ResourceConfig#
#include <legate/runtime/resource.h>

POD for library configuration.

Public Members

std::int64_t max_tasks = {1024}#

Maximum number of tasks that the library can register.

std::int64_t max_dyn_tasks = {0}#

Maximum number of dynamic tasks that the library can register (cannot exceed max_tasks)

std::int64_t max_reduction_ops = {}#

Maximum number of custom reduction operators that the library can register.

class Runtime
#include <legate/runtime/runtime.h>

Class that implements the Legate runtime.

The Legate runtime provides common services, including library registration, store creation, operation creation and submission, resource management and scoping, and communicator management. Legate libraries are thus freed from these details of distributed programming and can focus on their domain logic.

Public Functions

Library create_library(
std::string_view library_name,
const ResourceConfig &config = ResourceConfig{},
std::unique_ptr<mapping::Mapper> mapper = nullptr,
std::map<VariantCode, VariantOptions> default_options = {}
)

Creates a library.

A library is a collection of tasks and custom reduction operators. The maximum number of tasks and reduction operators can be optionally specified with a ResourceConfig object. Each library can optionally have a mapper that specifies mapping policies for its tasks. When no mapper is given, the default mapper is used.

Parameters:
  • library_name – Library name. Must be unique to this library

  • config – Optional configuration object

  • mapper – Optional mapper object

  • default_options – Optional default task variant options

Throws:

std::invalid_argument – If a library already exists for a given name

Returns:

Library object
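
For example (the library name and task limit are arbitrary):

legate::ResourceConfig config{};
config.max_tasks = 8;  // reserve room for up to 8 task IDs

auto* runtime = legate::Runtime::get_runtime();
legate::Library library = runtime->create_library("my_library", config);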

Library find_library(std::string_view library_name) const

Finds a library.

Parameters:

library_name – Library name

Throws:

std::out_of_range – If no library is found for a given name

Returns:

Library object

std::optional<Library> maybe_find_library(
std::string_view library_name
) const

Attempts to find a library.

If no library exists for a given name, a null value will be returned

Parameters:

library_name – Library name

Returns:

Library object if a library exists for a given name, a null object otherwise
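
For example:

// Look up a library without throwing when it is absent.
if (std::optional<legate::Library> lib = runtime->maybe_find_library("my_library")) {
  // use *lib
}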

Library find_or_create_library(
std::string_view library_name,
const ResourceConfig &config = ResourceConfig{},
std::unique_ptr<mapping::Mapper> mapper = nullptr,
const std::map<VariantCode, VariantOptions> &default_options = {},
bool *created = nullptr
)

Finds or creates a library.

The optional configuration and mapper objects are picked up only when the library is created.

Parameters:
  • library_name – Library name. Must be unique to this library

  • config – Optional configuration object

  • mapper – Optional mapper object

  • default_options – Optional default task variant options

  • created – Optional pointer to a boolean flag indicating whether the library has been created because of this call

Returns:

Context object for the library

AutoTask create_task(Library library, LocalTaskID task_id)

Creates an AutoTask.

Parameters:
  • library – Library to query the task

  • task_id – Library-local Task ID

Returns:

Task object

ManualTask create_task(
Library library,
LocalTaskID task_id,
const tuple<std::uint64_t> &launch_shape
)

Creates a ManualTask.

Parameters:
  • library – Library to query the task

  • task_id – Library-local Task ID

  • launch_shape – Launch domain for the task

Returns:

Task object

ManualTask create_task(
Library library,
LocalTaskID task_id,
const Domain &launch_domain
)

Creates a ManualTask.

This overload should be used when the lower bounds of the task’s launch domain need to be non-zero. Note that the upper bounds of the launch domain are inclusive (whereas the launch_shape in the other overload is exclusive).

Parameters:
  • library – Library to query the task

  • task_id – Library-local Task ID

  • launch_domain – Launch domain for the task

Returns:

Task object
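
A sketch of a manual launch over a 2x2 color grid (task ID 0 is illustrative):

auto* runtime = legate::Runtime::get_runtime();
// Launch shape {2, 2} creates 4 point tasks; upper bounds are exclusive.
auto task = runtime->create_task(library, legate::LocalTaskID{0},
                                 legate::tuple<std::uint64_t>{{2, 2}});
runtime->submit(std::move(task));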

void issue_copy(
LogicalStore &target,
const LogicalStore &source,
std::optional<ReductionOpKind> redop_kind = std::nullopt
)

Issues a copy between stores.

The source and target stores must have the same shape.

Parameters:
  • target – Copy target

  • source – Copy source

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator
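
For example (a sketch; the shapes and types are arbitrary):

auto* runtime = legate::Runtime::get_runtime();
auto src = runtime->create_store(legate::Shape{100}, legate::int64());
auto dst = runtime->create_store(legate::Shape{100}, legate::int64());
runtime->issue_fill(src, legate::Scalar{std::int64_t{1}});  // initialize the source
runtime->issue_copy(dst, src);  // same shape: plain element-wise copy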

void issue_copy(
LogicalStore &target,
const LogicalStore &source,
std::optional<std::int32_t> redop_kind
)

Issues a copy between stores.

The source and target stores must have the same shape.

Parameters:
  • target – Copy target

  • source – Copy source

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator

void issue_gather(
LogicalStore &target,
const LogicalStore &source,
const LogicalStore &source_indirect,
std::optional<ReductionOpKind> redop_kind = std::nullopt
)

Issues a gather copy between stores.

The indirection store and the target store must have the same shape.

Parameters:
  • target – Copy target

  • source – Copy source

  • source_indirect – Store for source indirection

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator
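
A sketch of a gather, where target[i] = source[indices[i]] (names are illustrative; the indirection store holds 1-D points):

auto* runtime = legate::Runtime::get_runtime();
auto source  = runtime->create_store(legate::Shape{100}, legate::int64());
auto target  = runtime->create_store(legate::Shape{10}, legate::int64());
auto indices = runtime->create_store(legate::Shape{10}, legate::point_type(1));
// ... initialize source and indices (points must lie within source's shape) ...
runtime->issue_gather(target, source, indices);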

void issue_gather(
LogicalStore &target,
const LogicalStore &source,
const LogicalStore &source_indirect,
std::optional<std::int32_t> redop_kind
)

Issues a gather copy between stores.

The indirection store and the target store must have the same shape.

Parameters:
  • target – Copy target

  • source – Copy source

  • source_indirect – Store for source indirection

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator

void issue_scatter(
LogicalStore &target,
const LogicalStore &target_indirect,
const LogicalStore &source,
std::optional<ReductionOpKind> redop_kind = std::nullopt
)

Issues a scatter copy between stores.

The indirection store and the source store must have the same shape.

Parameters:
  • target – Copy target

  • target_indirect – Store for target indirection

  • source – Copy source

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator

void issue_scatter(
LogicalStore &target,
const LogicalStore &target_indirect,
const LogicalStore &source,
std::optional<std::int32_t> redop_kind
)

Issues a scatter copy between stores.

The indirection store and the source store must have the same shape.

Parameters:
  • target – Copy target

  • target_indirect – Store for target indirection

  • source – Copy source

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator

void issue_scatter_gather(
LogicalStore &target,
const LogicalStore &target_indirect,
const LogicalStore &source,
const LogicalStore &source_indirect,
std::optional<ReductionOpKind> redop_kind = std::nullopt
)

Issues a scatter-gather copy between stores.

The indirection stores must have the same shape.

Parameters:
  • target – Copy target

  • target_indirect – Store for target indirection

  • source – Copy source

  • source_indirect – Store for source indirection

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator

void issue_scatter_gather(
LogicalStore &target,
const LogicalStore &target_indirect,
const LogicalStore &source,
const LogicalStore &source_indirect,
std::optional<std::int32_t> redop_kind
)

Issues a scatter-gather copy between stores.

The indirection stores must have the same shape.

Parameters:
  • target – Copy target

  • target_indirect – Store for target indirection

  • source – Copy source

  • source_indirect – Store for source indirection

  • redop_kind – ID of the reduction operator to use (optional). The store’s type must support the operator.

Throws:

std::invalid_argument – If the store’s type doesn’t support the reduction operator

void issue_fill(const LogicalArray &lhs, const LogicalStore &value)

Fills a given array with a constant.

Parameters:
  • lhs – Logical array to fill

  • value – Logical store that contains the constant value to fill the array with

void issue_fill(const LogicalArray &lhs, const Scalar &value)

Fills a given array with a constant.

Parameters:
  • lhs – Logical array to fill

  • value – Value to fill the array with
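
For example:

// Fill a 1-D array of 10 int32 elements with the value 42.
auto* runtime = legate::Runtime::get_runtime();
auto arr = runtime->create_array(legate::Shape{10}, legate::int32());
runtime->issue_fill(arr, legate::Scalar{std::int32_t{42}});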

LogicalStore tree_reduce(
Library library,
LocalTaskID task_id,
const LogicalStore &store,
std::int32_t radix = 4
)

Performs reduction on a given store via a task.

Parameters:
  • library – The library for the reducer task

  • task_id – Reduction task ID

  • store – Logical store to reduce

  • radix – Optional radix value that determines the maximum number of input stores to the task at each reduction step
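
A sketch (assuming a reducer task with library-local ID 0 has been registered with the library):

// Combines up to 'radix' input stores per reduction step.
legate::LogicalStore result =
  runtime->tree_reduce(library, legate::LocalTaskID{0}, store, /*radix=*/4);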

void submit(AutoTask &&task)

Submits an AutoTask for execution.

Each submitted operation goes through multiple pipeline steps to eventually get scheduled for execution. It’s not guaranteed that the submitted operation starts executing immediately.

The runtime takes ownership of the submitted task. Once submitted, the task becomes invalid and is not reusable.

Parameters:

task – An AutoTask to execute
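
A sketch of the create/submit flow (task ID 0 and the output store are illustrative):

auto* runtime = legate::Runtime::get_runtime();
legate::AutoTask task = runtime->create_task(library, legate::LocalTaskID{0});
task.add_output(store);  // a previously created LogicalStore or LogicalArray
runtime->submit(std::move(task));  // 'task' must not be used afterwards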

void submit(ManualTask &&task)

Submits a ManualTask for execution.

Each submitted operation goes through multiple pipeline steps to eventually get scheduled for execution. It’s not guaranteed that the submitted operation starts executing immediately.

The runtime takes ownership of the submitted task. Once submitted, the task becomes invalid and is not reusable.

Parameters:

task – A ManualTask to execute

LogicalArray create_array(
const Type &type,
std::uint32_t dim = 1,
bool nullable = false
)

Creates an unbound array.

Parameters:
  • type – Element type

  • dim – Number of dimensions

  • nullable – Nullability of the array

Returns:

Logical array

LogicalArray create_array(
const Shape &shape,
const Type &type,
bool nullable = false,
bool optimize_scalar = false
)

Creates a normal array.

Parameters:
  • shape – Shape of the array. The call does not block on this shape

  • type – Element type

  • nullable – Nullability of the array

  • optimize_scalar – When true, the runtime internally uses futures optimized for storing scalars

Returns:

Logical array
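
For example:

// A 100x50 array of nullable float64 values.
auto arr = runtime->create_array(legate::Shape{100, 50}, legate::float64(),
                                 /*nullable=*/true);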

LogicalArray create_array_like(
const LogicalArray &to_mirror,
std::optional<Type> type = std::nullopt
)

Creates an array isomorphic to the given array.

Parameters:
  • to_mirror – The array whose shape would be used to create the output array. The call does not block on the array’s shape.

  • type – Optional type for the resulting array. Must be compatible with the input array’s type

Returns:

Logical array isomorphic to the input

StringLogicalArray create_string_array(
const LogicalArray &descriptor,
const LogicalArray &vardata
)

Creates a string array from the existing sub-arrays.

The caller is responsible for making sure that the vardata sub-array is valid for all the descriptors in the descriptor sub-array

Parameters:
  • descriptor – Sub-array for descriptors

  • vardata – Sub-array for characters

Throws:

std::invalid_argument – When any of the following is true: 1) descriptor or vardata is unbound or N-D where N > 1; 2) descriptor does not have a 1D rect type; 3) vardata is nullable; 4) vardata does not have an int8 type

Returns:

String logical array

ListLogicalArray create_list_array(
const LogicalArray &descriptor,
const LogicalArray &vardata,
std::optional<Type> type = std::nullopt
)

Creates a list array from the existing sub-arrays.

The caller is responsible for making sure that the vardata sub-array is valid for all the descriptors in the descriptor sub-array

Parameters:
  • descriptor – Sub-array for descriptors

  • vardata – Sub-array for vardata

  • type – Optional list type the returned array would have

Throws:

std::invalid_argument – When any of the following is true: 1) type is not a list type; 2) descriptor or vardata is unbound or N-D where N > 1; 3) descriptor does not have a 1D rect type; 4) vardata is nullable; 5) vardata and type have different element types

Returns:

List logical array

LogicalStore create_store(const Type &type, std::uint32_t dim = 1)

Creates an unbound store.

Parameters:
  • type – Element type

  • dim – Number of dimensions of the store

Returns:

Logical store

LogicalStore create_store(
const Shape &shape,
const Type &type,
bool optimize_scalar = false
)

Creates a normal store.

Parameters:
  • shape – Shape of the store. The call does not block on this shape.

  • type – Element type

  • optimize_scalar – When true, the runtime internally uses futures optimized for storing scalars

Returns:

Logical store

LogicalStore create_store(
const Scalar &scalar,
const Shape &shape = Shape{1}
)

Creates a normal store out of a Scalar object.

Parameters:
  • scalar – Value of the scalar to create a store with

  • shape – Shape of the store. The volume must be 1. The call does not block on this shape.

Returns:

Logical store
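
For example:

// A unit-volume store holding a single float64 value.
auto scalar_store = runtime->create_store(legate::Scalar{3.14});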

LogicalStore create_store(
const Shape &shape,
const Type &type,
void *buffer,
bool read_only = true,
const mapping::DimOrdering &ordering = mapping::DimOrdering::c_order()
)

Creates a store by attaching to an existing allocation.

See also

legate::ExternalAllocation For important instructions regarding the mutability and lifetime management of the attached allocation.

Parameters:
  • shape – Shape of the store. The call does not block on this shape.

  • type – Element type.

  • buffer – Pointer to the beginning of the allocation to attach to; allocation must be contiguous, and cover the entire contents of the store (at least extents.volume() * type.size() bytes).

  • read_only – Whether the allocation is read-only.

  • ordering – In what order the elements are laid out in the passed buffer.

Returns:

Logical store.
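
A sketch of attaching a read-only host buffer (the static array stands in for any allocation that outlives the store; see legate::ExternalAllocation for the lifetime contract):

#include <array>

// Contiguous, covering at least shape.volume() * type.size() bytes.
static std::array<double, 100> data{};

auto* runtime = legate::Runtime::get_runtime();
auto store = runtime->create_store(legate::Shape{100}, legate::float64(),
                                   data.data(), /*read_only=*/true);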

LogicalStore create_store(
const Shape &shape,
const Type &type,
const ExternalAllocation &allocation,
const mapping::DimOrdering &ordering = mapping::DimOrdering::c_order()
)

Creates a store by attaching to an existing allocation.

See also

legate::ExternalAllocation For important instructions regarding the mutability and lifetime management of the attached allocation.

Parameters:
  • shape – Shape of the store. The call does not block on this shape.

  • type – Element type.

  • allocation – External allocation descriptor.

  • ordering – In what order the elements are laid out in the passed allocation.

Returns:

Logical store.

std::pair<LogicalStore, LogicalStorePartition> create_store(
const Shape &shape,
const tuple<std::uint64_t> &tile_shape,
const Type &type,
const std::vector<std::pair<ExternalAllocation, tuple<std::uint64_t>>> &allocations,
const mapping::DimOrdering &ordering = mapping::DimOrdering::c_order()
)

Creates a store by attaching to multiple existing allocations.

External allocations must be read-only.

See also

legate::ExternalAllocation For important instructions regarding the mutability and lifetime management of the attached allocation.

Parameters:
  • shape – Shape of the store. The call can BLOCK on this shape for constructing a store partition.

  • tile_shape – Shape of tiles.

  • type – Element type.

  • allocations – Pairs of external allocation descriptors and sub-store colors.

  • ordering – In what order the elements are laid out in the passed allocations.

Throws:

std::invalid_argument – If any of the external allocations are not read-only.

Returns:

A pair of a logical store and its partition.

void prefetch_bloated_instances(
const LogicalStore &store,
tuple<std::uint64_t> low_offsets,
tuple<std::uint64_t> high_offsets,
bool initialize = false
)

Gives the runtime a hint that the store can benefit from bloated instances.

The runtime currently does not look ahead in the task stream to recognize that a given set of tasks can benefit from the ahead-of-time creation of “bloated” instances encompassing multiple slices of a store. This means that the runtime constructs bloated instances incrementally, completing them only once it has seen all the slices, and the intermediate instances (temporarily) increase the memory footprint. This function can be used to give the runtime a hint ahead of time about the bloated instances, which would then be reused by the downstream tasks without going through the same incremental process.

For example, let’s say we have a 1-D store A of size 10 and we want to partition A across two GPUs. By default, A would be partitioned equally and each GPU gets an instance of size 5. Suppose we now have a task that aligns two slices A[1:10] and A[:9]. The runtime would partition the slices such that the task running on the first GPU gets A[1:6] and A[:5], and the task running on the second GPU gets A[6:] and A[5:9]. Since the original instance on the first GPU does not cover the element A[5] included in the first slice A[1:6], the mapper needs to create a new instance for A[:6] that encompasses both of the slices, leading to an extra copy. In this case, if the code calls prefetch_bloated_instances(A, {0}, {1}) to pre-allocate instances that contain one extra element on the right before it uses A, the extra copy can be avoided.

A couple of notes about the API:

  • Unless initialize is true, the runtime assumes that the store has been initialized. Passing an uninitialized store would lead to a runtime error.

  • If the store has pre-existing instances, the runtime may combine those with the bloated instances if such combination is deemed desirable.

Note

This API is experimental

Parameters:
  • store – Store to create bloated instances for

  • low_offsets – Offsets to bloat towards the negative direction

  • high_offsets – Offsets to bloat towards the positive direction

  • initialize – If true, the runtime will issue a fill on the store to initialize it. The default value is false
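
A sketch of the scenario above (a store A of size 10, bloated by one element on the right):

auto* runtime = legate::Runtime::get_runtime();
auto a = runtime->create_store(legate::Shape{10}, legate::float64());
runtime->issue_fill(a, legate::Scalar{0.0});  // the store must be initialized
runtime->prefetch_bloated_instances(a,
                                    legate::tuple<std::uint64_t>{{0}},   // no bloating on the left
                                    legate::tuple<std::uint64_t>{{1}});  // one extra element on the right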

void issue_mapping_fence()

Issues a mapping fence.

A mapping fence, when issued, blocks mapping of all downstream operations before those preceding the fence get mapped. An issue_mapping_fence call returns immediately after the request is submitted to the runtime, and the fence asynchronously goes through the runtime analysis pipeline just like any other Legate operation. The call also flushes the scheduling window for batched execution.

Mapping fences only affect how the operations are mapped and do not change their execution order, so they are semantically no-ops. Nevertheless, they are sometimes useful when the user wants to control how resources are consumed by independent tasks. Consider a program with two independent tasks A and B, both of which discard their stores right after their execution. If the stores are too big to be allocated all at once, mapping A and B in parallel (which can happen because A and B are independent and thus nothing stops them from getting mapped concurrently) can lead to a failure. If a mapping fence exists between the two, the runtime serializes their mapping and can reclaim the memory space from stores that would be discarded after A’s execution to create allocations for B.
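
For example (task_a and task_b are hypothetical, independent, memory-hungry tasks):

runtime->submit(std::move(task_a));
runtime->issue_mapping_fence();  // task_b's mapping waits until task_a is mapped
runtime->submit(std::move(task_b));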

void issue_execution_fence(bool block = false)

Issues an execution fence.

An execution fence is a join point in the task graph. All operations prior to a fence must finish before any of the subsequent operations start.

All execution fences are mapping fences by definition; i.e., an execution fence not only prevents the downstream operations from being mapped ahead of itself but also precedes their execution.

Parameters:

block – When true, the control code blocks on the fence and all operations that have been submitted prior to this fence.
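
For example:

// Block the control code until all previously submitted operations complete.
legate::Runtime::get_runtime()->issue_execution_fence(/*block=*/true);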

void raise_pending_exception()

Raises a pending exception.

When the exception mode of a scope is “deferred” (i.e., Scope::exception_mode() == ExceptionMode::DEFERRED), the exceptions from tasks in the scope are not immediately handled, but are pushed to the pending exception queue. Accumulated pending exceptions are not flushed until raise_pending_exception is invoked. The function throws the first exception in the pending exception queue and clears the queue. If there is no pending exception to be raised, the function does nothing.

Throws:

legate::TaskException – When there is a pending exception to raise

std::uint32_t node_count() const

Returns the total number of nodes.

Returns:

Total number of nodes

std::uint32_t node_id() const

Returns the current rank.

Returns:

Rank ID

mapping::Machine get_machine() const

Returns the machine of the current scope.

Returns:

Machine object

Processor get_executing_processor() const

Returns the current Processor on which the caller is executing.

Returns:

The current Processor.

void start_profiling_range()

Start a Legion profiling range.

void stop_profiling_range(std::string_view provenance)

Stop a Legion profiling range.

Parameters:

provenance – User-supplied provenance string

Public Static Functions

static Runtime *get_runtime()

Returns a singleton runtime object.

Returns:

The runtime object

namespace detail

Typedefs

using StoreAnalyzable = std::variant<RegionFieldArg, OutputRegionArg, ScalarStoreArg, ReplicatedScalarStoreArg, WriteOnlyScalarStoreArg>
using ArrayAnalyzable = std::variant<BaseArrayArg, ListArrayArg, StructArrayArg>
using Analyzable = variant_detail::variant_concat_t<StoreAnalyzable, ArrayAnalyzable>
using Restrictions = tuple<Restriction>
typedef BasicZStringView<char, std::char_traits<char>> ZStringView
template<typename Default, template<typename...> typename Op, typename ...Args>
using detected_or = detected_detail::detector<Default, void, Op, Args...>
template<template<typename...> typename Op, typename ...Args>
using is_detected = detected_or<detected_detail::nonesuch, Op, Args...>
template<template<typename...> class Op, typename ...Args>
using is_detected_t = typename is_detected<Op, Args...>::type
template<typename T>
using type_identity_t = typename type_identity<T>::type
template<typename T>
using has_shared_from_this = decltype(std::declval<T*>()->shared_from_this())

Enums

enum class ArrayKind : std::uint8_t

Values:

enumerator BASE
enumerator LIST
enumerator STRUCT
enum class AccessMode : std::uint8_t

Values:

enumerator READ
enumerator REDUCE
enumerator WRITE
enum class Restriction : std::uint8_t

Enum to describe partitioning preference on dimensions of a store.

Values:

enumerator ALLOW

The dimension can be partitioned

enumerator AVOID

The dimension can be partitioned, but other dimensions are preferred

enumerator FORBID

The dimension must not be partitioned

enum class ExceptionKind : std::uint8_t

Values:

enumerator CPP
enumerator PYTHON
enum class CoreProjectionOp : std::int32_t

Values:

enumerator DELINEARIZE
enumerator FIRST_DYNAMIC_FUNCTOR
enumerator MAX_FUNCTOR
enum class CoreShardID : std::underlying_type_t<CoreProjectionOp>

Values:

enumerator TOPLEVEL_TASK
enumerator LINEARIZE
enum class CoreTransform : std::int8_t

Values:

enumerator INVALID
enumerator SHIFT
enumerator PROMOTE
enumerator PROJECT
enumerator TRANSPOSE
enumerator DELINEARIZE
enum class TaskPriority : std::int8_t

Values:

enumerator DEFAULT

Functions

void show_progress(
const Legion::Task *task,
Legion::Context ctx,
Legion::Runtime *runtime
)
void check_alignment(std::size_t alignment)
void register_array_tasks(Library &core_lib)
inline InternalSharedPtr<StoragePartition> create_storage_partition(
const InternalSharedPtr<Storage> &self,
InternalSharedPtr<Partition> partition,
std::optional<bool> complete
)
inline InternalSharedPtr<Storage> slice_storage(
const InternalSharedPtr<Storage> &self,
tuple<std::uint64_t> tile_shape,
tuple<std::int64_t> offsets
)
inline InternalSharedPtr<LogicalStore> slice_store(
const InternalSharedPtr<LogicalStore> &self,
std::int32_t dim,
Slice sl
)
inline InternalSharedPtr<LogicalStorePartition> partition_store_by_tiling(
const InternalSharedPtr<LogicalStore> &self,
tuple<std::uint64_t> tile_shape
)
inline InternalSharedPtr<LogicalStorePartition> create_store_partition(
const InternalSharedPtr<LogicalStore> &self,
InternalSharedPtr<Partition> partition,
std::optional<bool> complete = std::nullopt
)
inline StoreAnalyzable store_to_launcher_arg(
const InternalSharedPtr<LogicalStore> &self,
const Variable *variable,
const Strategy &strategy,
const Domain &launch_domain,
const std::optional<SymbolicPoint> &projection,
Legion::PrivilegeMode privilege,
GlobalRedopID redop = GlobalRedopID{-1}
)
inline RegionFieldArg store_to_launcher_arg_for_fixup(
const InternalSharedPtr<LogicalStore> &self,
const Domain &launch_domain,
Legion::PrivilegeMode privilege
)
std::ostream &operator<<(
std::ostream &out,
const Transform &transform
)
template<typename T>
inline decltype(auto) canonical_value_of(
T &&v
) noexcept
inline std::uint64_t canonical_value_of(std::size_t v) noexcept
template<typename ...T>
variant_detail::VariantProxy<T...> variant_cast(
std::variant<T...> v
)
InternalSharedPtr<Alignment> align(
const Variable *lhs,
const Variable *rhs
)
InternalSharedPtr<Broadcast> broadcast(const Variable *variable)
InternalSharedPtr<Broadcast> broadcast(
const Variable *variable,
tuple<std::uint32_t> axes
)
InternalSharedPtr<ImageConstraint> image(
const Variable *var_function,
const Variable *var_range,
ImageComputationHint hint
)
InternalSharedPtr<ScaleConstraint> scale(
tuple<std::uint64_t> factors,
const Variable *var_smaller,
const Variable *var_bigger
)
InternalSharedPtr<BloatConstraint> bloat(
const Variable *var_source,
const Variable *var_bloat,
tuple<std::uint64_t> low_offsets,
tuple<std::uint64_t> high_offsets
)
inline bool operator==(const Variable &lhs, const Variable &rhs)
InternalSharedPtr<NoPartition> create_no_partition()
InternalSharedPtr<Tiling> create_tiling(
tuple<std::uint64_t> tile_shape,
tuple<std::uint64_t> color_shape,
tuple<std::int64_t> offsets
)
InternalSharedPtr<Tiling> create_tiling(
tuple<std::uint64_t> tile_shape,
tuple<std::uint64_t> color_shape,
tuple<std::int64_t> offsets,
tuple<std::uint64_t> strides
)
InternalSharedPtr<Weighted> create_weighted(
const Legion::FutureMap &weights,
const Domain &color_domain
)
InternalSharedPtr<Image> create_image(
InternalSharedPtr<detail::LogicalStore> func,
InternalSharedPtr<Partition> func_partition,
mapping::detail::Machine machine,
ImageComputationHint hint
)
std::ostream &operator<<(
std::ostream &out,
const Partition &partition
)
void register_partitioning_tasks(Library &core_lib)
template<typename OP, typename T>
void wrap_with_cas(
OP op,
T &lhs,
T rhs
)
Restriction join(Restriction lhs, Restriction rhs)
Restrictions join(const Restrictions &lhs, const Restrictions &rhs)
void join_inplace(Restrictions &lhs, const Restrictions &rhs)
template<typename T>
std::ostream &operator<<(
std::ostream &os,
const Scaled<T> &arg
)
template<typename T>
std::ostream &operator<<(
std::ostream &os,
const Argument<T> &arg
)
std::string compose_legion_default_args(const ParsedArgs &parsed)

Compose the contents of LEGION_DEFAULT_ARGS.

This routine does not actually set LEGION_DEFAULT_ARGS, it only computes what the new value should be.

This is technically a private function, but we expose it to test it.

Parameters:

parsed – The parsed command-line arguments.

Returns:

The new value of LEGION_DEFAULT_ARGS.

void configure_legion(const ParsedArgs &parsed)

Configure Legion based on parsed command-line flags.

This function sets LEGION_DEFAULT_ARGS.

Parameters:

parsed – The parsed command-line arguments.

void configure_realm(const ParsedArgs &parsed)

Configure Realm based on the command-line flags.

Parameters:

parsed – The command-line flags.

void configure_cpus(
bool auto_config,
const Realm::ModuleConfig &core,
const Argument<std::int32_t> &omps,
const Argument<std::int32_t> &util,
const Argument<std::int32_t> &gpus,
Argument<std::int32_t> *cpus
)
void configure_cuda_driver_path(
const Argument<std::string> &cuda_driver_path
)
void configure_fbmem(
bool auto_config,
const Realm::ModuleConfig *cuda,
const Argument<std::int32_t> &gpus,
Argument<Scaled<std::int64_t>> *fbmem
)
void configure_gpus(
bool auto_config,
const Realm::ModuleConfig *cuda,
Argument<std::int32_t> *gpus,
Config *cfg
)
std::string convert_log_levels(std::string_view log_levels)

Convert text-based logging levels to the numeric logging levels that Legion expects.

Parameters:

log_levels – The logging string specification.

Returns:

The converted log levels.

std::string logging_help_str()
void configure_numamem(
bool auto_config,
Span<const std::size_t> numa_mems,
const Argument<std::int32_t> &omps,
Argument<Scaled<std::int64_t>> *numamem
)
void configure_ompthreads(
bool auto_config,
const Realm::ModuleConfig &core,
const Argument<std::int32_t> &util,
const Argument<std::int32_t> &cpus,
const Argument<std::int32_t> &gpus,
const Argument<std::int32_t> &omps,
Argument<std::int32_t> *ompthreads,
Config *cfg
)
void configure_omps(
bool auto_config,
const Realm::ModuleConfig *openmp,
Span<const std::size_t> numa_mems,
const Argument<std::int32_t> &gpus,
Argument<std::int32_t> *omps
)
void configure_sysmem(
bool auto_config,
const Realm::ModuleConfig &core,
const Argument<Scaled<std::int64_t>> &numamem,
Argument<Scaled<std::int64_t>> *sysmem
)
std::string_view get_parsed_LEGATE_CONFIG()
Returns:

The value of LEGATE_CONFIG that was parsed.

Config handle_legate_args()

Parse LEGATE_CONFIG and generate a Config database from it.

Returns:

The configuration of Legate.

ParsedArgs parse_args(std::vector<std::string> args)

Parse the given command-line flags and return their values.

args must not be empty.

Parameters:

args – A list of command-line flags.

Returns:

The parsed command-line values.

template<typename StringType>
std::vector<StringType> string_split(
std::string_view command,
const char sep
)
bool multi_node_job()
Returns:

true when Legate is being invoked as a multi-node job, false otherwise.

std::vector<std::string> deduplicate_command_line_flags(
Span<const std::string> args
)

De-duplicate a series of command-line flags, preserving the relative ordering of the flags.

Given:

["--foo", "--bar", "--baz", "bop", "--foo=1"]

this routine returns:

["--bar", "--baz", "bop", "--foo=1"]

Note that the relative ordering of arguments is preserved.

Parameters:

args – The arguments to de-duplicate.

Returns:

The de-duplicated flags.

void set_mpi_wrapper_libraries()
ProjectionFunction *find_projection_function(
Legion::ProjectionID proj_id
)
void register_affine_projection_functor(
std::uint32_t src_ndim,
const proj::SymbolicPoint &point,
Legion::ProjectionID proj_id
)
void register_delinearizing_projection_functor(
const tuple<std::uint64_t> &color_shape,
Legion::ProjectionID proj_id
)
void register_compound_projection_functor(
const tuple<std::uint64_t> &color_shape,
const proj::SymbolicPoint &point,
Legion::ProjectionID proj_id
)
Logger &log_legate()
Logger &log_legate_partitioner()
void register_legate_core_tasks(Library &core_lib)
void register_exception_reduction_op(const Library &context)
bool has_started()
bool has_finished()
void register_legate_core_sharding_functors(
const detail::Library &core_library
)
Legion::ShardingID find_sharding_functor_by_projection_functor(
Legion::ProjectionID proj_id
)
void create_sharding_functor_using_projection(
Legion::ShardID shard_id,
Legion::ProjectionID proj_id,
const mapping::ProcessorRange &range
)
void create_sharding_functor_using_projection(
Legion::ShardingID shard_id,
Legion::ProjectionID proj_id,
const mapping::ProcessorRange &range
)
template<typename REDOP>
void register_reduction_callback(
const Legion::RegistrationCallbackArgs &args
)
void inline_task_body(
const Task &task,
VariantCode variant_code,
VariantImpl variant_impl
)
void legion_task_body(
VariantImpl variant_impl,
VariantCode variant_kind,
std::optional<std::string_view> task_name,
const void *args,
std::size_t arglen,
Processor p
)
void show_progress(
const DomainPoint &index_point,
std::string_view task_name,
std::string_view provenance,
Legion::Context ctx,
Legion::Runtime *runtime
)
bool operator==(const TaskConfig &lhs, const TaskConfig &rhs)
bool operator!=(const TaskConfig &lhs, const TaskConfig &rhs)
bool operator==(
const TaskSignature::Nargs &lhs,
const TaskSignature::Nargs &rhs
)
bool operator!=(
const TaskSignature::Nargs &lhs,
const TaskSignature::Nargs &rhs
)
bool operator==(const TaskSignature &lhs, const TaskSignature &rhs)
bool operator!=(const TaskSignature &lhs, const TaskSignature &rhs)
void task_wrapper(
VariantImpl variant_impl,
VariantCode variant_kind,
std::optional<std::string_view> task_name,
const void *args,
std::size_t arglen,
const void*,
std::size_t,
Processor p
)
void task_wrapper(
VariantImpl,
VariantCode,
std::optional<std::string_view>,
const void*,
std::size_t,
const void*,
std::size_t,
Legion::Processor
)
template<VariantImpl variant_fn, VariantCode variant_kind>
inline void task_wrapper_dyn_name(
const void *args,
std::size_t arglen,
const void *userdata,
std::size_t userlen,
Legion::Processor p
)
LEGATE_SELECTOR_SPECIALIZATION(CPU, cpu)
LEGATE_SELECTOR_SPECIALIZATION(OMP, omp)
LEGATE_SELECTOR_SPECIALIZATION(GPU, gpu)
InternalSharedPtr<Type> primitive_type(Type::Code code)
InternalSharedPtr<Type> string_type()
InternalSharedPtr<Type> binary_type(std::uint32_t size)
InternalSharedPtr<FixedArrayType> fixed_array_type(
InternalSharedPtr<Type> element_type,
std::uint32_t N
)
InternalSharedPtr<StructType> struct_type(
std::vector<InternalSharedPtr<Type>> field_types,
bool align
)
InternalSharedPtr<ListType> list_type(
InternalSharedPtr<Type> element_type
)
InternalSharedPtr<Type> bool_()
InternalSharedPtr<Type> int8()
InternalSharedPtr<Type> int16()
InternalSharedPtr<Type> int32()
InternalSharedPtr<Type> int64()
InternalSharedPtr<Type> uint8()
InternalSharedPtr<Type> uint16()
InternalSharedPtr<Type> uint32()
InternalSharedPtr<Type> uint64()
InternalSharedPtr<Type> float16()
InternalSharedPtr<Type> float32()
InternalSharedPtr<Type> float64()
InternalSharedPtr<Type> complex64()
InternalSharedPtr<Type> complex128()
InternalSharedPtr<FixedArrayType> point_type(std::uint32_t ndim)
InternalSharedPtr<StructType> rect_type(std::uint32_t ndim)
InternalSharedPtr<Type> null_type()
InternalSharedPtr<Type> domain_type()
bool is_point_type(const InternalSharedPtr<Type> &type)
bool is_point_type(
const InternalSharedPtr<Type> &type,
std::uint32_t ndim
)
std::int32_t ndim_point_type(const InternalSharedPtr<Type> &type)
bool is_rect_type(const InternalSharedPtr<Type> &type)
bool is_rect_type(
const InternalSharedPtr<Type> &type,
std::uint32_t ndim
)
std::int32_t ndim_rect_type(const InternalSharedPtr<Type> &type)
void abort_handler(
std::string_view file,
std::string_view func,
int line,
std::stringstream *ss
)
template<typename ...T>
void abort_handler_tpl(
std::string_view file,
std::string_view func,
int line,
T&&... args
)
std::string demangle_type(const std::type_info &ti)
std::pair<void*, std::size_t> align_for_unpack_impl(
void *ptr,
std::size_t capacity,
std::size_t bytes,
std::size_t align
)
std::size_t round_up_to_multiple(
std::size_t value,
std::size_t round_to
)
template<typename T>
std::pair<void*, std::size_t> align_for_unpack(
void *ptr,
std::size_t capacity,
std::size_t bytes = sizeof(T),
std::size_t align = alignof(T)
)
template<typename T>
std::size_t max_aligned_size_for_type()
template<typename T>
zip_detail::Zipper<zip_detail::ZiperatorShortest, Enumerator, T> enumerate(
T &&iterable,
typename Enumerator::value_type start = {}
)

Enumerate an iterable.

The enumerator is classed as a bidirectional iterator, so it can be both incremented and decremented. Decrementing the enumerator will decrease the count. However, this only applies if iterable is itself at least bidirectional. If iterable does not satisfy bidirectional iteration, then the returned enumerator will assume the iterator category of iterable.

  std::vector<int> my_vector{1, 2, 3, 4, 5};

  // Enumerate a vector starting from index 0
  for (auto&& [idx, val] : legate::detail::enumerate(my_vector)) {
    std::cout << "accessing element " << idx << " of vector: " << val << '\n';
    // a sanity check
    EXPECT_EQ(my_vector[idx], val);
  }

  // Enumerate the vector, but enumerator starts at index 3. Note that the enumerator start has
  // no bearing on the thing being enumerated. The vector is still iterated over from start to
  // finish!
  auto enum_start = 3;
  for (auto&& [idx, val] : legate::detail::enumerate(my_vector, enum_start)) {
    std::cout << "enumerator has value: " << idx << '\n';
    std::cout << "accessing element " << idx - enum_start << " of vector: " << val << '\n';
    EXPECT_EQ(my_vector[idx - enum_start], val);
  }
Parameters:
  • iterable – The iterable to enumerate

  • start – [optional] Set the starting value for the enumerator

Returns:

The enumerator iterator adaptor

LEGATE_DEFINE_ENV_VAR(bool, LEGATE_TEST)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_SHOW_USAGE)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_AUTO_CONFIG)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_SHOW_CONFIG)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_SHOW_PROGRESS)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_EMPTY_TASK)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_SYNC_STREAM_VIEW)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_LOG_MAPPING)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_LOG_PARTITIONING)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_WARMUP_NCCL)
LEGATE_DEFINE_ENV_VAR(std::string, LEGION_DEFAULT_ARGS)
LEGATE_DEFINE_ENV_VAR(std::uint32_t, LEGATE_MAX_EXCEPTION_SIZE)
LEGATE_DEFINE_ENV_VAR(std::int64_t, LEGATE_MIN_CPU_CHUNK)
LEGATE_DEFINE_ENV_VAR(std::int64_t, LEGATE_MIN_GPU_CHUNK)
LEGATE_DEFINE_ENV_VAR(std::int64_t, LEGATE_MIN_OMP_CHUNK)
LEGATE_DEFINE_ENV_VAR(std::uint32_t, LEGATE_WINDOW_SIZE)
LEGATE_DEFINE_ENV_VAR(std::uint32_t, LEGATE_FIELD_REUSE_FRAC)
LEGATE_DEFINE_ENV_VAR(std::uint32_t, LEGATE_FIELD_REUSE_FREQ)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_CONSENSUS)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_DISABLE_MPI)
LEGATE_DEFINE_ENV_VAR(std::string, LEGATE_CONFIG)
LEGATE_DEFINE_ENV_VAR(std::string, LEGATE_MPI_WRAPPER)
LEGATE_DEFINE_ENV_VAR(std::string, LEGATE_CUDA_DRIVER)
LEGATE_DEFINE_ENV_VAR(bool, LEGATE_IO_USE_VFD_GDS)
LEGATE_DEFINE_ENV_VAR(std::string, REALM_UCP_BOOTSTRAP_MODE)
std::string make_error_message(Span<const ErrorDescription> errs)
template<typename T, typename U>
void typed_malloc(
T **ret,
U num_elems
) noexcept
template<typename El, typename Ex, typename L, typename A>
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>>::difference_type operator-(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &self,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &other
) noexcept
template<typename El, typename Ex, typename L, typename A>
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> operator-(
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> self,
typename FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>>::difference_type n
) noexcept
template<typename El, typename Ex, typename L, typename A>
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> operator+(
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> self,
typename FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>>::difference_type n
) noexcept
template<typename El, typename Ex, typename L, typename A>
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> operator+(
typename FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>>::difference_type n,
FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> self
) noexcept
template<typename El, typename Ex, typename L, typename A>
bool operator==(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &lhs,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &rhs
) noexcept
template<typename El, typename Ex, typename L, typename A>
bool operator!=(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &lhs,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &rhs
) noexcept
template<typename El, typename Ex, typename L, typename A>
bool operator<(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &lhs,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &rhs
) noexcept
template<typename El, typename Ex, typename L, typename A>
bool operator>(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &lhs,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &rhs
) noexcept
template<typename El, typename Ex, typename L, typename A>
bool operator<=(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &lhs,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &rhs
) noexcept
template<typename El, typename Ex, typename L, typename A>
bool operator>=(
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &lhs,
const FlatMDSpanIterator<::cuda::std::mdspan<El, Ex, L, A>> &rhs
) noexcept
template<typename T>
FlatMDSpanView(T span) -> FlatMDSpanView<T>
template<typename T>
std::pair<void*, std::size_t> pack_buffer(
void *buf,
std::size_t remaining_cap,
T &&value
)
template<typename T>
std::pair<void*, std::size_t> pack_buffer(
void *buf,
std::size_t remaining_cap,
std::size_t nelem,
const T *value
)
template<typename T>
std::pair<const void*, std::size_t> unpack_buffer(
const void *buf,
std::size_t remaining_cap,
T *value
)
template<typename T>
std::pair<const void*, std::size_t> unpack_buffer(
const void *buf,
std::size_t remaining_cap,
std::size_t nelem,
T *const *value
)
std::size_t processor_id()
void throw_invalid_proc_local_storage_access(
const std::type_info &value_type
)
template<typename U, typename Alloc, typename P, typename ...Args>
U *construct_from_allocator_(
Alloc &allocator,
P *hint,
Args&&... args
)
LEGATE_PRAGMA_PUSH()
LEGATE_PRAGMA_POP()
template<typename T = long long>
T safe_strtoll(
const char *env_value,
char **end_ptr = nullptr
)
bool install_terminate_handler() noexcept

Install the Legate std::terminate() handler.

This routine is thread-safe and may be called multiple times. However, only the first invocation has any effect; subsequent calls do nothing. The caller may inspect the return value to determine whether the handler was installed.

The installed handler will pretty-print any thrown exceptions, adding a traceback showing where the exception was thrown.

Returns:

true if the handlers were installed, false otherwise.

Domain to_domain(Span<const std::uint64_t> shape)
Domain to_domain(const tuple<std::uint64_t> &shape)
DomainPoint to_domain_point(const tuple<std::uint64_t> &shape)
tuple<std::uint64_t> from_domain(const Domain &domain)
void assert_valid_mapping(
std::size_t tuple_size,
const std::vector<std::int32_t> &mapping
)
void throw_invalid_tuple_sizes(
std::size_t lhs_size,
std::size_t rhs_size
)
void assert_in_range(std::size_t tuple_size, std::int32_t pos)
template<typename T>
std::underlying_type_t<T> to_underlying(
T e
) noexcept
template<typename ...T>
Overload(T...) -> Overload<T...>
template<typename ...T>
zip_detail::Zipper<zip_detail::ZiperatorShortest, T...> zip_shortest(
T&&... args
)

Zip a set of containers together.

The adaptor returned by this routine implements a “zip shortest” zip operation. That is, the returned zipper stops when at least one object or container has reached the end. Iterating past that point results in undefined behavior.

The iterators returned by the adaptor support the lowest common denominator of all containers when it comes to iterator functionality. For example, if all containers’ iterators support std::random_access_iterator_tag, then the returned iterator will as well.

Parameters:

args – The set of containers to zip.

Returns:

A zipper constructed from the set of containers. Calling begin() or end() on the zipper returns the corresponding iterators.
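
For example, a sketch zipping containers of different lengths:

  std::vector<int> a{1, 2, 3};
  std::array<char, 2> b{'x', 'y'};

  // Iteration stops after two elements, the length of the shorter container.
  for (auto&& [ai, bi] : legate::detail::zip_shortest(a, b)) {
    std::cout << ai << bi << ' ';  // prints "1x 2y "
  }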

template<typename ...T>
zip_detail::Zipper<zip_detail::ZiperatorEqual, T...> zip_equal(
T&&... args
)

Zip a set of containers of equal length together.

The adaptor returned by this routine implements a “zip equal” zip operation. That is, the returned zipper assumes all inputs are of equal size. Debug builds will attempt to verify this invariant upfront, by calling (if applicable) std::size() on the inputs. Iterating past the end results in undefined behavior.

The iterators returned by the adaptor support the lowest common denominator of all containers when it comes to iterator functionality. For example, if all containers’ iterators support std::random_access_iterator_tag, then the returned iterator will as well.

  std::vector<float> vec{1, 2, 3, 4, 5};
  std::list<int> list{5, 4, 3, 2, 1};

  // Add all elements of a list to each element of a vector
  for (auto&& [vi, li] : legate::detail::zip_equal(vec, list)) {
    vi = static_cast<float>(li + 10);
    std::cout << vi << ", ";
  }
Parameters:

args – The set of containers to zip.

Returns:

A zipper constructed from the set of containers of equal size. Calling begin() or end() on the zipper returns the corresponding iterators.

template<typename C, typename T>
std::basic_ostream<C, T> &operator<<(
std::basic_ostream<C, T> &os,
BasicZStringView<C, T> sv
)
template<typename C, typename T>
bool operator==(
BasicZStringView<C, T> lhs,
BasicZStringView<C, T> rhs
)
template<typename C, typename T>
bool operator!=(
BasicZStringView<C, T> lhs,
BasicZStringView<C, T> rhs
)
template<typename C, typename T>
bool operator==(
typename BasicZStringView<C, T>::base_view_type lhs,
BasicZStringView<C, T> rhs
)
template<typename C, typename T>
bool operator!=(
typename BasicZStringView<C, T>::base_view_type lhs,
BasicZStringView<C, T> rhs
)
template<typename C, typename T>
bool operator==(
BasicZStringView<C, T> lhs,
typename BasicZStringView<C, T>::base_view_type rhs
)
template<typename C, typename T>
bool operator!=(
BasicZStringView<C, T> lhs,
typename BasicZStringView<C, T>::base_view_type rhs
)
void throw_unsupported_dim(std::int32_t dim)
void throw_unsupported_type_code(legate::Type::Code code)
void throw_bad_internal_weak_ptr()
template<typename T>
T *to_address(T *p) noexcept
template<typename T, typename = std::void_t<decltype(std::declval<T>().operator->())>>
auto *to_address(
const T &p
) noexcept

Variables

template<typename T>
bool is_pure_move_constructible_v = is_pure_move_constructible<T>::value
template<typename T>
bool is_pure_move_assignable_v = is_pure_move_assignable<T>::value
template<typename From, typename To>
bool is_ptr_compat_v = is_ptr_compat<From, To>::value
template<typename T, typename ...Ts>
bool is_same_as_one_of_v = is_same_as_one_of<T, Ts...>::value
template<template<typename...> typename Op, typename ...Args>
bool is_detected_v = is_detected<Op, Args...>::value
template<typename T>
bool shared_from_this_enabled_v = is_detected_v<has_shared_from_this, T>
template<typename T>
bool is_container_v = is_container<T>::value