
Embarrassingly parallel

In parallel computing, an embarrassingly parallel workload or problem (also called perfectly parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks.[1] This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them.[2]

Thus, these are different from distributed computing problems that need communication between tasks, especially communication of
intermediate results. They are easy to perform on server farms which lack the special infrastructure used in a true supercomputer
cluster. They are thus well suited to large, Internet-based distributed platforms such as BOINC, and do not suffer from parallel
slowdown. The opposite of embarrassingly parallel problems are inherently serial problems, which cannot be parallelized at all.

A common example of an embarrassingly parallel problem is 3D video rendering handled by a graphics processing unit, where each
frame (forward method) or pixel (ray tracing method) can be handled with no interdependency. Password cracking is another
embarrassingly parallel task that is easily distributed on central processing units, CPU cores, or clusters.


Etymology
"Embarrassingly" is used here in the same sense as in the phrase "an embarrassment of riches", meaning an overabundance—here
referring to parallelization problems which are "embarrassingly easy".[3] The term may also imply embarrassment on the part of
developers or compilers: "Because so many important problems remain unsolved mainly due to their intrinsic computational
complexity, it would be embarrassing not to develop parallel implementations of polynomial homotopy continuation methods."[4]
The term is first found in the literature in a 1986 book on multiprocessors by MATLAB's co-founder Cleve Moler,[5] who claims to
have invented the term.[6]

An alternative term, pleasingly parallel, has gained some use, perhaps to avoid the negative connotations of embarrassment in favor
of a positive reflection on the parallelizability of the problems: "Of course, there is nothing embarrassing about these programs at
all."[7]

Examples
Some examples of embarrassingly parallel problems include:

Distributed relational database queries using distributed set processing.
Serving static files on a webserver to multiple users at once.
The Mandelbrot set, Perlin noise and similar images, where each point is calculated independently.
Rendering of computer graphics. In computer animation, each frame or pixel may be rendered independently (see
parallel rendering).
Brute-force searches in cryptography.[8] Notable real-world examples include distributed.net and proof-of-work
systems used in cryptocurrency.
BLAST searches in bioinformatics for multiple queries (but not for individual large queries).[9]

Large-scale facial recognition systems that compare thousands of arbitrarily acquired faces (e.g., from security or surveillance video via closed-circuit television) with a similarly large number of previously stored faces (e.g., a rogues gallery or similar watch list).[10]
Computer simulations comparing many independent scenarios, such as climate models.
Evolutionary computation metaheuristics such as genetic algorithms.
Ensemble calculations of numerical weather prediction.
Event simulation and reconstruction in particle physics.
The marching squares algorithm.
The sieving step of the quadratic sieve and the number field sieve.
Tree growth step of the random forest machine learning technique.
Discrete Fourier transform, where each harmonic is independently calculated.
Convolutional neural networks running on GPUs.
Hyperparameter grid search in machine learning.

Implementations
In R (programming language) – The Simple Network of Workstations (SNOW) package implements a simple mechanism for using a set of workstations or a Beowulf cluster for embarrassingly parallel computations.[11]

See also
Amdahl's law defines the value P, which would be almost or exactly equal to 1 for embarrassingly parallel problems.
Map (parallel pattern)
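The connection to Amdahl's law can be checked numerically. A hypothetical sketch: the law gives the speedup S = 1 / ((1 − P) + P/N) on N processors, where P is the parallelizable fraction of the work.

```python
def amdahl_speedup(p, n):
    # Amdahl's law: the serial fraction (1 - p) bounds the speedup
    # achievable on n processors.
    return 1.0 / ((1.0 - p) + p / n)

# Embarrassingly parallel: P = 1 gives speedup linear in processor count.
print(amdahl_speedup(1.0, 64))               # 64.0
# Even a 1% serial fraction erodes that noticeably.
print(round(amdahl_speedup(0.99, 64), 1))
# A half-serial problem barely benefits from 64 processors.
print(round(amdahl_speedup(0.5, 64), 2))     # 1.97
```

This is why embarrassingly parallel workloads scale so well on large distributed platforms: with P at or near 1, adding processors keeps paying off almost linearly.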

References
1. Herlihy, Maurice; Shavit, Nir (2012). The Art of Multiprocessor Programming, Revised Reprint (https://books.google.com/books?id=vfvPrSz7R7QC&q=embarrasingly#v=onepage&q=embarrasingly&f=false) (revised ed.). Elsevier. p. 14. ISBN 9780123977953. Retrieved 28 February 2016. "Some computational problems are 'embarrassingly parallel': they can easily be divided into components that can be executed concurrently."
2. Section 1.4.4 of: Foster, Ian (1995). Designing and Building Parallel Programs (https://www.webcitation.org/5wfSkP1Ia?url=http://www.mcs.anl.gov/~itf/dbpp/text/node10.html). Addison–Wesley. ISBN 9780201575941. Archived from the original (http://www.mcs.anl.gov/~itf/dbpp/text/node10.html) on 2011-02-21.
3. Matloff, Norman (2011). The Art of R Programming: A Tour of Statistical Software Design, p. 347. No Starch. ISBN 9781593274108.
4. Leykin, Anton; Verschelde, Jan; Zhuang, Yan (2006). "Parallel Homotopy Algorithms to Solve Polynomial Systems" (https://link.springer.com/chapter/10.1007%2F11832225_22#page-1). Proceedings of ICMS 2006.
5. Moler, Cleve (1986). Heath, Michael T. (ed.). "Matrix Computation on Distributed Memory Multiprocessors". Hypercube Multiprocessors. Society for Industrial and Applied Mathematics, Philadelphia. ISBN 0898712092.
6. "The Intel Hypercube, part 2, reposted" (http://blogs.mathworks.com/cleve/2013/11/12/the-intel-hypercube-part-2-reposted/#096367ea-045e-4f28-8fa2-9f7db8fb7b01). Cleve's Corner blog, The MathWorks website.
7. Kepner, Jeremy (2009). Parallel MATLAB for Multicore and Multinode Computers, p. 12. SIAM. ISBN 9780898716733.
8. Josefsson, Simon; Percival, Colin (August 2016). "The scrypt Password-Based Key Derivation Function" (https://tools.ietf.org/html/rfc7914#page-2). tools.ietf.org. Retrieved 2016-12-12.
9. SeqAnswers forum post (http://seqanswers.com/forums/showpost.php?p=21050&postcount=3)
10. "How we made our face recognizer 25 times faster" (http://lbrandy.com/blog/2008/10/how-we-made-our-face-recognizer-25-times-faster/) (developer blog post)
11. Simple Network of Workstations (SNOW) package (http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html)
External links
Embarrassingly Parallel Computations, Engineering a Beowulf-style Compute Cluster
"Star-P: High Productivity Parallel Computing"

This page was last edited on 26 February 2018, at 23:52.
