Parallel Class

Parallel class and functions for distributed memory

modred.parallel.barrier()

Wrapper for Barrier(); forces all processors/MPI workers to synchronize.
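
For example, a minimal sketch in which rank zero creates an output directory (the name 'results' is hypothetical) and the other workers wait at the barrier until it exists:

import os
from modred import parallel

# Only rank zero creates the directory; barrier() holds the remaining
# processors/MPI workers until rank zero is done.
if parallel.is_rank_zero():
    os.makedirs('results', exist_ok=True)
parallel.barrier()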

modred.parallel.bcast(vals)

Broadcasts values from rank zero processor/MPI worker to all others.

Args:
vals: Values to broadcast from rank zero processor/MPI worker.
Returns:
outputs: Broadcast values.
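
For example, a minimal sketch in which rank zero builds a settings dictionary (contents hypothetical) and shares it with every worker; as in a standard MPI broadcast, the value passed on non-zero ranks is assumed to be ignored:

from modred import parallel

# Build settings on rank zero only; every worker receives rank zero's copy
# from the return value (the None passed elsewhere is assumed ignored).
settings = {'num_modes': 10} if parallel.is_rank_zero() else None
settings = parallel.bcast(settings)
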
modred.parallel.call_and_bcast(func, *args, **kwargs)

Calls function on rank zero processor/MPI worker and broadcasts outputs to all others.

Args:

func: A callable that takes *args and **kwargs.

*args: Required arguments for func.

**kwargs: Keyword args for func.

Usage:

# Adds one to the rank, but only evaluated on rank 0, so
# ``outputs==1`` on all processors/MPI workers.
outputs = parallel.call_and_bcast(lambda x: x+1, parallel.get_rank())
modred.parallel.call_from_rank_zero(func, *args, **kwargs)

Calls function on rank zero processor/MPI worker; does not call barrier().

Args:

func: Function to call.

*args: Required arguments for func.

**kwargs: Keyword args for func.

Usage:

parallel.call_from_rank_zero(lambda x: x+1, 1)
modred.parallel.check_for_empty_tasks(task_assignments)

Convenience function that checks for empty processor/MPI worker assignments.

Args:
task_assignments: List of task assignments.
Returns:
empty_tasks: True if any processor/MPI worker has no tasks, otherwise False.
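
For example, a minimal sketch that warns when there are more workers than tasks (the two-element task list is illustrative):

from modred import parallel

task_assignments = parallel.find_assignments(list(range(2)))
if parallel.check_for_empty_tasks(task_assignments):
    # With more workers than tasks, some workers are assigned nothing.
    parallel.print_from_rank_zero('Warning: some workers have no tasks.')
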
modred.parallel.find_assignments(tasks, task_weights=None)

Evenly distributes tasks among all processors/MPI workers using task weights.

Args:
tasks: List of tasks. A “task” can be any object that corresponds to a set of operations that needs to be completed. For example, tasks could be a list of indices telling each processor/MPI worker which indices of an array to operate on.
Kwargs:
task_weights: List of weights for each task. These are used to equally distribute the workload among processors/MPI workers, in case some tasks are more expensive than others.
Returns:
task_assignments: 2D list of tasks, with indices corresponding to [rank][task_index]. Each processor/MPI worker is responsible for task_assignments[rank].
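
For example, a minimal sketch in which each worker loops over its assigned indices of an array (the per-index work is left as a placeholder):

from modred import parallel

num_elements = 100
task_assignments = parallel.find_assignments(list(range(num_elements)))
for index in task_assignments[parallel.get_rank()]:
    pass  # operate on this worker's indices of the array
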
modred.parallel.get_hostname()

Returns hostname for this node.

modred.parallel.get_node_ID()

Returns unique ID number for this node.

modred.parallel.get_num_MPI_workers()

Returns number of MPI workers (currently same as number of processors).

modred.parallel.get_num_nodes()

Returns number of nodes.

modred.parallel.get_num_procs()

Returns number of processors (currently same as number of MPI workers).

modred.parallel.get_rank()

Returns rank of this processor/MPI worker.

modred.parallel.is_distributed()

Returns True if there is more than one processor/MPI worker and mpi4py was imported properly.

modred.parallel.is_rank_zero()

Returns True if rank is zero, False if not.
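
For example, a minimal sketch of the usual guard for work that should happen only once, such as writing a summary file (the file name is hypothetical):

from modred import parallel

if parallel.is_rank_zero():
    with open('summary.txt', 'w') as fid:
        fid.write('Computation finished.\n')
parallel.barrier()  # keep the other workers in step afterward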

modred.parallel.print_from_rank_zero(msg, output_channel='stdout')

Prints msg from rank zero processor/MPI worker only.
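
For example, a minimal sketch (the message is illustrative; output_channel keeps its default of 'stdout'):

from modred import parallel

# Printed once, no matter how many processors/MPI workers are running.
parallel.print_from_rank_zero(
    'Modes computed on %d workers.' % parallel.get_num_procs())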