OpenMP directives exploit shared memory parallelism by defining
various types of parallel regions. Parallel regions can include both
iterative and non-iterative segments of program code.
Pragmas fall into these general categories:
- Pragmas that let you define parallel regions in which work is done by
threads in parallel (#pragma omp parallel). Most of the OpenMP directives
either statically or dynamically bind to an enclosing parallel region.
- Pragmas that let you define how work is distributed or shared across
the threads in a parallel region (#pragma omp sections, #pragma omp for,
#pragma omp single, #pragma omp task).
- Pragmas that let you control synchronization among threads (#pragma omp atomic,
#pragma omp master, #pragma omp barrier, #pragma omp critical,
#pragma omp flush, #pragma omp ordered).
- Pragmas that let you define the scope of data visibility across threads
(#pragma omp threadprivate).
- Pragmas for task synchronization (#pragma omp taskwait, #pragma omp barrier);
a short sketch combining tasks, atomic updates, and taskwait follows this list.
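As a combined illustration of the task and synchronization pragmas listed
above, the following minimal sketch (the counter name and the printed message
are illustrative, not taken from this reference) has a single thread create
two explicit tasks that each update a shared counter atomically, then uses
taskwait to wait for both tasks before printing the result:

#include <stdio.h>

int main(void)
{
  int done = 0;

  #pragma omp parallel
  {
    #pragma omp single        /* one thread creates the tasks */
    {
      #pragma omp task
      {
        #pragma omp atomic
        done++;               /* atomic update of the shared counter */
      }
      #pragma omp task
      {
        #pragma omp atomic
        done++;
      }
      #pragma omp taskwait    /* wait for both child tasks to complete */
      printf("tasks completed: %d\n", done);
    }
  }
  return 0;
}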

OpenMP directive syntax
#pragma omp pragma_name [clause [,clause] ...]
   statement_block
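For example, a parallel pragma can carry several clauses before its statement
block. The particular clauses shown here (num_threads(4), private(tid)) are
just illustrative choices, not required by the syntax:

#include <stdio.h>
#include <omp.h>

int main(void)
{
  int tid;

  /* pragma_name is "parallel"; num_threads(4) and private(tid) are clauses;
     the compound statement below is the statement_block the pragma applies to. */
  #pragma omp parallel num_threads(4) private(tid)
  {
    tid = omp_get_thread_num();
    printf("hello from thread %d\n", tid);
  }
  return 0;
}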
Pragma directives generally appear immediately before the section of code
to which they apply. For example, the following code defines a parallel
region in which the iterations of a for loop can run in parallel:
#pragma omp parallel
{
  #pragma omp for
  for (i = 0; i < n; i++)
    ...
}
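A self-contained version of this pattern might look like the following
sketch; the array name, its size, and the loop body are illustrative
details, not part of the original example:

#include <stdio.h>

#define N 16

int main(void)
{
  int i;
  double a[N];

  #pragma omp parallel
  {
    #pragma omp for           /* iterations are divided among the threads */
    for (i = 0; i < N; i++)
      a[i] = 2.0 * i;
  }

  printf("a[%d] = %.1f\n", N - 1, a[N - 1]);
  return 0;
}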
This example uses the sections construct to define two or more non-iterative
blocks of program code that can run in parallel:
#pragma omp sections
{
  #pragma omp section
    structured_block_1
    ...
  #pragma omp section
    structured_block_2
    ...
  ...
}
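A self-contained sketch of the sections pattern follows; the printf calls
stand in for the structured blocks, and the sections construct is nested in
a parallel region so that the blocks can actually run on different threads:

#include <stdio.h>

int main(void)
{
  #pragma omp parallel
  {
    #pragma omp sections
    {
      #pragma omp section
      {
        printf("section 1\n");   /* may run on one thread */
      }
      #pragma omp section
      {
        printf("section 2\n");   /* may run on another thread */
      }
    }
  }
  return 0;
}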
For a pragma-by-pragma description of the OpenMP directives, refer
to Pragma directives for parallel processing in the XL C/C++ Compiler
Reference.