GaeBlogX

Cost Semantics for Parallelism

Cost semantics addresses the question: how long do programs run, abstractly?

The idea of cost semantics for parallelism is that we model hardware that has the concrete ability to compute several subexpressions simultaneously.

Simple Example of Product Types

For sequential computation we have (evaluate the first component, then the second):

$$\frac{e_1 \mapsto e_1'}{\langle e_1, e_2\rangle \mapsto \langle e_1', e_2\rangle} \qquad \frac{e_2 \mapsto e_2'}{\langle v_1, e_2\rangle \mapsto \langle v_1, e_2'\rangle}$$

For parallel computation we have (both components may step at once):

$$\frac{e_1 \mapsto e_1' \qquad e_2 \mapsto e_2'}{\langle e_1, e_2\rangle \mapsto \langle e_1', e_2'\rangle}$$
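As a rough illustration (not from the original notes), the two cost measures induced by these rules can be computed over a toy expression tree; the `Const` and `Pair` names here are hypothetical:

```python
from dataclasses import dataclass

# A tiny expression language: integer constants and pairs (hypothetical names).
@dataclass
class Const:
    value: int

@dataclass
class Pair:
    left: object
    right: object

def work(e):
    """Total number of evaluation steps, summed over all subexpressions
    (the sequential cost)."""
    if isinstance(e, Const):
        return 1
    return work(e.left) + work(e.right) + 1

def span(e):
    """Length of the longest chain of dependent steps
    (the parallel cost, when both pair components step at once)."""
    if isinstance(e, Const):
        return 1
    return max(span(e.left), span(e.right)) + 1
```

For `Pair(Pair(Const(1), Const(2)), Const(3))` the work is 5 but the span is only 3, since the two components of each pair can be evaluated simultaneously.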

Deterministic Parallelism

It means that we get the same answer, just (potentially) faster.

Given a closed program $e$, we can count the number of $\mapsto_{seq}$ steps (or $\mapsto^{w}$, the “work”) and the number of $\mapsto_{par}$ steps (or $\mapsto^{s}$, the “span”).

Cost Semantics

We annotate $e\Downarrow^{w,s}v$ to keep track of work and span.
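For example, a typical rule for pairs adds the work of the components but takes the maximum of their spans (the exact “$+1$” bookkeeping varies by presentation):

```latex
\frac{e_1 \Downarrow^{w_1, s_1} v_1 \qquad e_2 \Downarrow^{w_2, s_2} v_2}
     {\langle e_1, e_2 \rangle \Downarrow^{w_1 + w_2 + 1,\; \max(s_1, s_2) + 1} \langle v_1, v_2 \rangle}
```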

Brent’s Principle

In general, it is a principle about how work and span predict evaluation time on some machine.

For example, a machine with $p$ processors can evaluate a program with work $w$ and span $s$ in $O(\max(w/p,\, s))$ steps, i.e., roughly $w/p + s$ steps.
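To make the bound concrete, here is a sketch (with hypothetical names) of a greedy scheduler running a unit-cost task DAG on $p$ processors; a greedy schedule finishes within roughly $w/p + s$ global steps:

```python
def greedy_schedule_steps(deps, p):
    """Run a unit-cost task DAG greedily on p processors.

    deps maps each task to the tasks it depends on (assumed acyclic).
    Each global step runs up to p tasks whose dependencies are all done,
    and returns the total number of global steps taken.
    """
    remaining = set(deps)
    done = set()
    steps = 0
    while remaining:
        # Tasks whose dependencies have all completed are ready to run.
        ready = [t for t in remaining if all(d in done for d in deps[t])]
        for t in sorted(ready)[:p]:  # run at most p ready tasks this step
            remaining.remove(t)
            done.add(t)
        steps += 1
    return steps
```

For a diamond-shaped DAG with work $w = 4$ and span $s = 3$, two processors finish in 3 steps, within the $w/p + s$ bound.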

Machine with States

Local Transitions

A machine state has the form $\nu\, a_1 \ldots a_n.(a_1 \hookrightarrow e_1 \otimes \cdots \otimes a_n \hookrightarrow e_n)$, where the $a$’s are names for the tasks, and each $a_i \hookrightarrow e_i$ is a task evaluating $e_i$.

Where $\mathsf{join}[a](x.e)$ means to “wait for task $a$ to complete, and then plug its value in for $x$”, and $\mathsf{join}[a_1; a_2](x_1, x_2.e)$ means to wait for two tasks.

Suppose one of $e_1, e_2$ is not a value; then we have the following rule, which is also called fork:

$$a \hookrightarrow \langle e_1, e_2\rangle \longmapsto \nu\, a_1.\nu\, a_2.\,(a_1 \hookrightarrow e_1 \otimes a_2 \hookrightarrow e_2 \otimes a \hookrightarrow \mathsf{join}[a_1; a_2](x_1, x_2.\langle x_1, x_2\rangle))$$

And we have join:

$$\nu\, a_1.\nu\, a_2.\,(a_1 \hookrightarrow v_1 \otimes a_2 \hookrightarrow v_2 \otimes a \hookrightarrow \mathsf{join}[a_1; a_2](x_1, x_2.e)) \longmapsto a \hookrightarrow [v_1, v_2/x_1, x_2]\,e$$

Similarly for let:

$$a \hookrightarrow \mathsf{let}\; x = e_1 \;\mathsf{in}\; e_2 \longmapsto \nu\, a_1.\,(a_1 \hookrightarrow e_1 \otimes a \hookrightarrow \mathsf{join}[a_1](x.e_2))$$

Global Transitions

  • Select $1\leq k\leq p$ tasks to make local transitions
  • Step locally
  • Each step may create or garbage-collect tasks (global synchronization by $\alpha$-renaming)


How do we “select $1\leq k\leq p$ tasks to make local transitions”? For example, in DFS or BFS order.
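A minimal sketch of the two selection policies (the function name is hypothetical), treating the ready tasks as a list ordered from oldest to newest:

```python
def select_tasks(ready, p, policy="bfs"):
    """Pick up to p ready tasks to step in this global transition.

    "bfs" favors the oldest ready tasks (breadth-first, FIFO order),
    which tends to expose more parallelism; "dfs" favors the newest
    (depth-first, LIFO order), which tends to use less memory.
    """
    if policy == "bfs":
        return ready[:p]
    return ready[-p:][::-1]  # "dfs": newest first
```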
