
Six Ways to Say “Hello” in Chapel | Part 1


When learning a new programming language, users often start by studying “Hello world” programs — those that output simple messages to the console. Though such programs are trivial by nature, they can be an illuminating way to get familiar with a new language in a short amount of time.

In this series of articles, I’ll show several “Hello world” programs in Chapel, Cray’s open-source programming language for productive parallel programming. I’ll start with a pair of traditional (serial) “Hello world” programs and then move on to parallel versions that take advantage of Chapel’s features for shared- and distributed-memory execution.

Simple Hello World

Writing a traditional “Hello world” program in Chapel is the one-liner you’d hope for:
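Assuming the customary greeting text, the whole program is a single call to writeln():

    writeln("Hello, world!");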

The writeln() routine prints its arguments to the console, followed by a linefeed. When this program is saved to a file named hello.chpl, compiled, and executed, it prints its message as expected:

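With the chpl compiler, the session looks roughly like this (the prompt shown is generic, and the exact commands may vary with your setup):

    $ chpl hello.chpl
    $ ./hello
    Hello, world!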

And there you have it: your first Chapel program!

“Production-Grade” Hello World

Perhaps you’re thinking, “That’s nice, but my team writes applications that are millions of lines long. There’s no way that such flat, script-like code is going to scale to programs of that size or complexity.” We agree with that sentiment. For this reason, Chapel also supports structured programming features, including modules, procedures, iterators, and classes. Here’s a “production-grade” way to say “hello” in Chapel that demonstrates a few of these features in action:

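Assuming the same default greeting as before, the program might look like this:

    module Hello {
      // Configuration constant: its default value can be overridden at execution time.
      config const message = "Hello, world!";

      // The program's entry point.
      proc main() {
        writeln(message);
      }
    }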

This program starts by declaring a module, Hello, which defines the program. Modules in Chapel serve as code containers or namespaces. Large Chapel programs are typically composed of many modules. Next, we declare a configuration constant, message, to represent the string that we want to display. A configuration constant’s default value can be overridden on the program’s command-line, as a simple way of providing arguments (we’ll see an example just below). Finally, a procedure, main(), is used to define the program’s entry point. In this program, main() simply prints the message using writeln().

We compile the program as before, but when we run it, we can now use the configuration constant to optionally replace the default message with one of our own.

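For example, running the program once with the default and once with an override (the replacement text below is just an illustration):

    $ ./hello
    Hello, world!
    $ ./hello --message="Hello, Chapel users!"
    Hello, Chapel users!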

Parallel Hello World

Since the programming community already has a number of productive serial languages, let’s move on to Chapel’s parallel features. The following program shows a simple way to say “hello” in parallel:

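The sketch below assumes a configurable problem size n; the loop variable name and message wording are illustrative choices:

    config const n = 100;

    forall i in 1..n do
      writeln("Hello from iteration ", i, " of ", n);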

Here, the forall keyword specifies a loop whose iterations should execute in parallel. This loop iterates over the indices defined by the range 1..n, which represents the integers 1, 2, 3, …, n. Running the full problem size would exhaust my space limit, so let’s just print 10 messages:

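Run with n overridden on the command line, one possible ordering on a four-core machine looks like this (the actual interleaving varies from run to run):

    $ ./hello --n=10
    Hello from iteration 1 of 10
    Hello from iteration 4 of 10
    Hello from iteration 6 of 10
    Hello from iteration 9 of 10
    Hello from iteration 2 of 10
    Hello from iteration 7 of 10
    Hello from iteration 5 of 10
    Hello from iteration 10 of 10
    Hello from iteration 3 of 10
    Hello from iteration 8 of 10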

As you can see, the messages were printed in a somewhat arbitrary order, a consequence of the forall-loop’s parallel execution. Every forall-loop in Chapel is implemented using one or more tasks — the unit of parallel computation in Chapel. Note that, happily, output from distinct tasks never conflicts to produce garbled text like “Hello frHelHom itloello from…” That safety is due to the writeln() routine’s parallel-safe implementation.

The implementation details for a Chapel forall-loop are determined by the iterand expression that drives the loop — in this case, the range 1..n. For example, the iterand is responsible for defining the tasks used to implement the loop. By default, range iterands create a task for each of the available processor cores on the local compute node. Since I ran the example above on my four-core laptop, the loop was executed using four tasks.

A forall-loop’s iterand also defines the iterations that each task owns. By default, ranges give each task a chunk of consecutive iterations of approximately equal size. In this case, the four tasks were assigned the sub-ranges 1..3, 4..5, 6..8, and 9..10, respectively. Looking back at the output, you can see that while the message order appears arbitrary, each task’s iterations were printed in order (i.e., message #6 came before #7 which came before #8 because the third task printed all three of them).

Distributed Parallel Hello World

As a teaser for the next article, here is a way to say “hello” in parallel from all of the processor cores and distributed compute nodes of a parallel system like a Cray® XC™ system or commodity cluster:

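Sketched roughly below, it spreads the loop’s iterations across the system’s compute nodes, or locales, here using Chapel’s Cyclic distribution (the details may differ slightly from the version the next article walks through):

    use CyclicDist;

    config const n = 100;

    // Distribute the indices 1..n cyclically across the locales (compute nodes).
    const D = {1..n} dmapped Cyclic(startIdx=1);

    forall i in D do
      writeln("Hello from iteration ", i, " of ", n,
              " running on locale ", here.id, " of ", numLocales);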

In the next article, I’ll introduce this version in more detail. Until then, if you want to try your new-found skills firsthand, download a copy of Chapel and give it a spin!


