C++ Parallel Programming
Source text:
Parallel Programming with Microsoft Visual C++®[1]
Design Patterns for Decomposition and Coordination on Multicore Architectures
By
Colin Campbell
Ade Miller
2011
- There is also a companion volume to this guide, Parallel Programming with Microsoft .NET, which presents the same patterns in the context of managed code.
- Code Solutions[2]
Notes
- Map-Reduce - parallel data processing
- The Parallel Stacks window shows call stack information for all the threads in your application.
- A well-written parallel program runs at approximately the same speed as a sequential program when there is only one core available.
- Tasks are not threads.
- Microsoft Visual Studio 2010 SP1 (the Ultimate or Premium edition is required for the Concurrency Visualizer, which allows you to analyze the performance of your application). Visual Studio includes the PPL and the Asynchronous Agents Library, which are required to run the samples.
- Control flow refers to the steps of an algorithm.
- Data flow refers to the availability of inputs and outputs.
- Unexpected sharing of data. When parallel tasks read and write data that should not be shared, the results become unpredictable.
- Adding synchronization between tasks reduces parallelism because it increases the dependencies between tasks.
- Overusing locks can serialize execution, slowing the program down to sequential rather than parallel performance, while incorrect locking can still leave race conditions or cause deadlocks. Henry uses locks and mutexes all over.
- Locks are the goto statements in parallel programming: they are error prone but necessary in certain situations, and they are best left, when possible, to compilers and libraries.
- While some programs are all sequential, no program is all parallel. Most are somewhere in between.
- Parallel Loops
Convert sequential loops to parallel loops when the iterations are independent. A parallel loop may execute its steps in sequence, in parallel, or out of order, but all steps are executed before the loop finishes.
Anti-Patterns
Anti-patterns are cautionary tales. They highlight issues that need to be carefully considered as well as problem areas. Here are some issues to think about when you implement a parallel loop.
Design Approaches
- Don't parallelize piecemeal. Either design the program from scratch for parallel execution or make deliberate structural changes to an existing program.
- Six Design Patterns
Dictionary
- Concurrency is a concept related to multitasking and asynchronous input-output (I/O). It usually refers to the existence of multiple threads of execution that may each get a slice of time to execute before being preempted by another thread, which also gets a slice of time. Concurrency is necessary in order for a program to react to external stimuli such as user input, devices, and sensors. Operating systems and games, by their very nature, are concurrent, even on one core.
- Parallelism means concurrent threads execute at the same time on multiple cores. Parallel programming focuses on improving the performance of compute-bound applications, those that use a lot of processor power and are not constantly interrupted, when multiple cores are available.
- Amdahl's Law
- Asynchronous Agents Library
Acronyms
- GPGPU - General-Purpose Computation on Graphics Processing Units
- NUMA - Non-Uniform Memory Access
- PPL - Parallel Patterns Library, a Visual C++ 2010 library
- STL - Standard Template Library
lambda functions
Internal Links
Parent Article: Parallel Programming