
Geometry.Net - the online learning center
Home  - Basic P - Parallel Computing Programming (Books)

21-40 of 100


$19.69
21. Introduction to Parallel Computing:
$67.26
22. Principles of Parallel Programming
$77.19
23. Parallel Computing: Numerics,
$84.03
24. Parallel Programming: Techniques
$49.49
25. The Sourcebook of Parallel Computing
$27.18
26. CUDA by Example: An Introduction
$46.50
27. Patterns for Parallel Programming
$73.64
28. An Introduction to Parallel Programming
$20.99
29. Parallel Computing Works!
$121.13
30. Handbook of Parallel Computing
$51.90
31. Parallel Programming in OpenMP
$60.00
32. Foundations of Multithreaded,
 
$219.16
33. Concurrent Programming: Fundamental
 
34. Dlp: A Language for Distributed
$191.87
35. Parallel Programming, Models and
$49.52
36. Parallel Computing in Science
$55.00
37. Parallel Computing on Heterogeneous
$255.87
38. Parallel Computing: Architectures,
 
39. Implementation of Non-Strict Functional
$55.34
40. Scientific Computing: An Introduction

21. Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms
by Vipin Kumar, Ananth Grama, Anshul Gupta, George Karypis
Textbook Binding: 597 Pages (1994-01)
list price: US$73.00 -- used & new: US$19.69
Asin: 0805331700
Average Customer Review: 4.5 out of 5 stars
Editorial Review

Product Description
Take an in-depth look at techniques for the design and analysis of parallel algorithms with this text. The broad, balanced coverage of important core topics includes sorting and graph algorithms, discrete optimization techniques, and scientific computing applications. The authors focus on parallel algorithms for realistic machine models while avoiding architectures that are unrealizable in practice. They provide numerous examples and diagrams illustrating potentially difficult subjects and conclude each chapter with an extensive list of bibliographic references. In addition, problems of varying degrees of difficulty challenge readers at different levels. Introduction to Parallel Computing is an ideal tool for students and professionals who want insight into problem-solving with parallel computers. Features:

  • Presents parallel algorithms in terms of a small set of basic data communication operations, greatly simplifying the design and understanding of these algorithms.
  • Emphasizes practical issues of performance, efficiency, and scalability.
  • Provides a self-contained discussion of the basic concepts of parallel computer architectures.
  • Covers algorithms for scientific computation, such as dense and sparse matrix computations, linear system solving, finite elements, and FFT.
  • Discusses algorithms for combinatorial optimization, including branch-and-bound, unstructured tree search, and dynamic programming.
  • Incorporates various parallel programming models and languages as well as illustrative examples for commercially available computers.

Audience: junior, senior, and graduate Computer Science and Computer Engineering majors; professional reference. Courses: Distributed Computing, Parallel Programming, Parallel Algorithms. Prerequisites: Operating Systems and Analysis of Algorithms.
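The description's claim that the book casts parallel algorithms in terms of a small set of basic data communication operations can be illustrated with the most common such operation, a reduction. The sketch below is my own plain-Python simulation of the recursive-halving pattern, not code from the book; the function name and structure are illustrative assumptions.

```python
def tree_reduce(values):
    """Simulate a tree-based (recursive-halving) sum reduction.

    In each round, every "process" whose id has the current bit set
    sends its partial sum to the partner with that bit cleared, so a
    p-process reduction completes in log2(p) communication rounds
    instead of p - 1 sequential additions.
    """
    vals = list(values)
    p = len(vals)
    assert p & (p - 1) == 0, "sketch assumes a power-of-two process count"
    step, rounds = 1, 0
    while step < p:
        for sender in range(step, p, 2 * step):
            vals[sender - step] += vals[sender]  # partner absorbs partial sum
        step *= 2
        rounds += 1
    return vals[0], rounds

print(tree_reduce(range(8)))  # (28, 3): sum of 0..7 in 3 rounds
```

The same pairwise exchange pattern underlies broadcasts, scatters, and all-reduces, which is why a handful of such primitives suffices to express a large family of parallel algorithms.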

Customer Reviews (4)

5-0 out of 5 stars Just great
Excellent introduction to the field, especially for the beginner. There is no other book as clear and concise as this one. If you need an introduction to parallel computing / programming, buy the second edition of this book now!

5-0 out of 5 stars Essential 4 any prospective parallel computing professional
I bought this book when I was a second-year CS student planning to start a research project in the supercomputing field, so I decided to study parallel computing starting with its concepts and programming. As a programmer I found that I would need the general view before coding. Kumar's book is great in that it gives you a generalized overview of hardware and software architectures. He and his contributors don't care what system or language you're using; instead, they want you to learn parallel programming. Scientific and non-numerical algorithms are overviewed and explained mathematically. They prove everything they state by using mathematics. I don't know any better way. Do you? It's worth every penny.

4-0 out of 5 stars Good book on parallel computing
This book is a very good one for the parallel computing field. The most interesting parts of the book to me are the parallel algorithm design & analysis. The ideas are explained clearly and the exercises are nice too. I would like to recommend this book to all my friends who are interested in parallel computing.

4-0 out of 5 stars Great details and insightful
This one is a must for someone who needs an introductory course on parallel computing. It deals with the fundamentals of parallel computing in terms of algorithm design.


22. Principles of Parallel Programming
by Calvin Lin, Larry Snyder
Hardcover: 352 Pages (2008-03-07)
list price: US$113.00 -- used & new: US$67.26
Asin: 0321487907
Average Customer Review: 3.0 out of 5 stars
Editorial Review

Product Description
Written by top researchers Larry Snyder and Calvin Lin, this highly anticipated first edition emphasizes the principles underlying parallel computation, explains the various phenomena, and clarifies why these phenomena represent opportunities or barriers to successful parallel programming. Contents: Introduction: Parallelism = Opportunities + Challenges; Reasoning about Performance; First Steps Towards Parallel Programming; Scalable Algorithmic Techniques; Programming with Threads; Local View Programming Languages; Global View Programming Languages; Assessing Our Knowledge; Future Directions in Parallel Programming; Capstone Project: Designing a Parallel Program. For all readers (particularly software engineers and computer system designers) interested in multi-core architecture and parallel programming.

Customer Reviews (4)

1-0 out of 5 stars Overpriced, Dated Content
This book is overpriced and the content is very, very dated. Who wants to learn ZPL in today's world? If you want to learn about parallelism, pick up a copy of Patterns for Parallel Programming by Timothy G. Mattson, Beverly A. Sanders, and Berna L. Massingill. Don't waste your money on this book.

1-0 out of 5 stars Riddled with errors
This book is absolutely riddled with errors. The sheer density of errors makes the book unusable for any in-depth study.

4-0 out of 5 stars abstracts essential ideas
Years ago I briefly worked on a hypercube, and when I got this book, I wondered how it had fared. Alas, the hypercube, at least under this name, rated no mention, though there is a passing reference to a binary 3-cube, which is a three-dimensional hypercube.

The authors explain the current state of multiprocessor architectures. The few remaining CPU makers (Intel, AMD, Sun, and IBM) all have efforts in this field. The book describes qualitatively the salient aspects of each. One nice thing about the discussion is that it focuses on this, without drowning you in unnecessary hardware details. This turns out to be a key theme of the book. It abstracts out essential hardware properties, so that you can appreciate these and apply the book's ideas without being tied to any given chip.

The book also describes an important type of multiprocessor: cluster machines, where each node is typically some off-the-shelf CPU, buffed up with a lot of local memory. The key differences between clusters are often related to how the nodes are hooked to each other, by some type of bus or crossbar. Affordability is an important property of clusters; thus the maximal use of commodity hardware. (The hypercube that I mentioned earlier would be a cluster.)

For a programmer, there is one overriding idea that you should get from the book. For optimal performance, minimise the internodal communication, compared to the use of a node's cache. The access time of the former can be 2-5 orders of magnitude slower. Details vary with the given architectures, of course. But typically nothing else comes close, in terms of effects on your throughput.
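The reviewer's rule of thumb can be made concrete with a back-of-the-envelope model (my own illustration, not taken from the book): in a 1D halo exchange, each node communicates a fixed number of boundary cells per step while its computation grows with its block size, so larger blocks amortize the slow internodal links.

```python
def halo_ratio(n, p):
    """Communication-to-computation ratio for a 1D block decomposition.

    Each of p nodes owns n // p cells and exchanges one halo cell with
    each of its two neighbours per step, so communication per node is
    constant while computation grows with the block size.
    """
    block = n // p
    comm = 2          # one halo cell to each neighbour
    comp = block      # one update per owned cell
    return comm / comp

# Growing the problem on the same nodes shrinks the relative
# communication cost by the same factor:
print(halo_ratio(1_000, 10))   # 0.02
print(halo_ratio(10_000, 10))  # 0.002
```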

5-0 out of 5 stars excellent introductory text, both timely and timeless
I have done parallel programming on a variety of machines for many years, and have written some widely-used parallel numerical software. Now that I have graduate students of my own and teach courses in numerical and parallel computation, I've been hoping for a book like this to help my students understand the basic techniques, concepts, and problems common to most parallel programming, as well as to use as a reference for courses, without resorting to manuals bogged down in the details of specific architectures. A colleague of mine (who has a large company developing parallel tools and who for many years has taught a course on parallel scientific computing) pointed me to this text, and I'm much more pleased with it than with any recent book on the subject that I can recall.

The authors of this book clearly introduce key concepts of extracting parallelism, load balancing, performance analysis, and memory management with a number of well-selected examples and advice clearly stemming from long experience in the field. They describe numerous general principles in an accessible way, without getting bogged down in the theoretical models of dubious utility that are too common in this field. The book is timely, in that it exhibits a clear awareness of current architectural trends, but remains rightly focused on timeless ideas.

I suppose the authors cannot be blamed for devoting a chapter to the parallel programming language they have developed in their own work (ZPL), and it is balanced by chapters on the current popular low-level techniques like MPI and threads as well as brief discussions of other proposed high-level languages (although the mention of Cilk is a bit too brief for my tastes). But the real strength of the book is that it is not tied too closely to any particular language or implementation, and instead helps you to recognize fundamental ideas as they appear in various forms.

I do wish the book were a bit cheaper, but high textbook prices seem to be a fact of life. A more basic introduction to caches, and the connection between memory locality on serial computers and locality on parallel machines, would probably be helpful. The mention of the powerful idea of work stealing is too brief. And I'm sure I'll find many other things I dislike as I continue to use this book, but overall I'm quite happy with it as a way to get students into this subject.


23. Parallel Computing: Numerics, Applications, and Trends
Hardcover: 520 Pages (2009-06-04)
list price: US$99.00 -- used & new: US$77.19
Asin: 1848824084
Editorial Review

Product Description

This book targets the development of efficient parallel computational methods for different scientific and technical applications. Readers who wish to design and implement efficient solutions on parallel and distributed computer systems are given insight into the theory of the computational methods, with practical application emphasized throughout. Features:

  • Discusses the development of algorithms for different applications, plus other aspects related to the parallel numerical solution of PDEs (e.g. grid refinement).
  • Considers other numerical applications such as data retrieval by a linear algebra approach and quasi-Monte Carlo methods.
  • Covers molecular dynamics, computational quantum physics, analysis of bio-signals, and image and video coding.
  • Concludes each chapter with an overview and a discussion of future work.

This concise volume presents the state of the art in parallel and distributed computing, and is a must-read for practitioners, researchers, and graduate students.


24. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers (2nd Edition)
by Barry Wilkinson, Michael Allen
Paperback: 496 Pages (2004-03-14)
list price: US$113.40 -- used & new: US$84.03
Asin: 0131405632
Average Customer Review: 3.5 out of 5 stars
Editorial Review

Product Description
This book provides various parallel programming approaches and analyses of their performance in detail. It covers the timely topic of cluster programming, of interest to many programmers due to the recent availability of low-cost computers. Useful as a professional reference for programmers and system administrators.

Customer Reviews (6)

1-0 out of 5 stars Typical CS department book
I find this book to be poorly written. The examples are insufficient and the content is only a cursory introduction to parallel programming. My recommendation is to research alternative publications relating to parallel programming.

4-0 out of 5 stars Good book at senior or early grad school level
The book serves as a good introduction to several advanced computing techniques. It isn't for beginners in computer science or networking, and it isn't worth the list price. Unfortunately, the topic isn't something you are likely to encounter in most careers, so it isn't useful to general computer science students.
It is great as a learning book, in-depth enough that you could use it for on-the-job learning. It covers the things you need to know for real-world use.

I would have given it 5 stars, except it isn't all that great as a reference; you will probably end up using online help for whatever communications package you use. It's the kind of book you read once or twice, then give away to younger colleagues.

4-0 out of 5 stars Wonderful book but don't pay full price
This is a really good introduction to parallel programming techniques, but it's overpriced. Buy it used.

This book is used as a textbook in many computer science departments (hence the inflated price) but I found that even with the minimal computer science education I've had (mostly from self-study) I learned a lot and did not find this book overly difficult to follow.

2-0 out of 5 stars Not for beginners
This would be a great reference manual, but I am using this text in my parallel processing course and the pseudocode is confusing and the MPI functions are introduced with poor descriptions.

4-0 out of 5 stars Clear and informative book, but...
The book does quite well in explaining the concepts of parallel computing and programming, and I have very few complaints about anything actually written in the book. (A companion CD with some sample MPI/PVM programs would have been nice.) However, as well as this book is written and organized, it is almost comical to have a book of this size (paperback, at that) cost nearly $... If the book had cost about $.. less and had the companion CD, it would have been five stars.


25. The Sourcebook of Parallel Computing (The Morgan Kaufmann Series in Computer Architecture and Design)
Hardcover: 842 Pages (2002-11-25)
list price: US$84.95 -- used & new: US$49.49
Asin: 1558608710
Average Customer Review: 5.0 out of 5 stars
Editorial Review

Product Description


Parallel Computing is a compelling vision of how computation can seamlessly scale from a single processor to virtually limitless computing power. Unfortunately, the scaling of application performance has not matched peak speed, and the programming burden for these machines remains heavy. The applications must be programmed to exploit parallelism in the most efficient way possible. Today, the responsibility for achieving the vision of scalable parallelism remains in the hands of the application developer.


This book represents the collected knowledge and experience of over 60 leading parallel computing researchers. They offer students, scientists, and engineers a complete sourcebook with solid coverage of parallel computing hardware, programming considerations, algorithms, software, and enabling technologies, as well as several parallel application case studies. The Sourcebook of Parallel Computing offers extensive tutorials and detailed documentation of the advanced strategies produced by research over the last two decades.

* Provides a solid background in parallel computing technologies
* Examines the technologies available and teaches students and practitioners how to select and apply them
* Presents case studies in a range of application areas including Chemistry, Image Processing, Data Mining, Ocean Modeling and Earthquake Simulation
* Considers the future development of parallel computing technologies and the kinds of applications they will support

Customer Reviews (1)

5-0 out of 5 stars Parallel Computing - An inside out by Jack Dongarra!
...This book builds on the important work done at the Center for Research on Parallel Computation and within the academic community for over a decade. It is a definitive text on parallel computing and should be a key reference for students, researchers, and practitioners in the field. The Sourcebook of Parallel Computing gives a thorough introduction to parallel applications, software technologies, enabling technologies, and algorithms. I highly recommend this great book to anyone interested in a comprehensive and thoughtful treatment of the most important issues in parallel computing. The features and benefits of this book include, but are not limited to: 1) providing a solid background in parallel computing technologies; 2) examining the technologies available and teaching students and practitioners how to select and apply them; 3) presenting case studies in a range of application areas including chemistry, image processing, data mining, ocean modeling, and earthquake simulation; and 4) considering the future development of parallel computing technologies and the kinds of applications they will support. Worth buying this book; your money is well invested!


26. CUDA by Example: An Introduction to General-Purpose GPU Programming
by Jason Sanders, Edward Kandrot
Paperback: 312 Pages (2010-07-29)
list price: US$39.99 -- used & new: US$27.18
Asin: 0131387685
Average Customer Review: 4.0 out of 5 stars
Editorial Review

Product Description

“This book is required reading for anyone working with accelerator-based computing systems.”

–From the Foreword by Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory

CUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a comprehensive software platform, the CUDA Architecture enables programmers to draw on the immense power of graphics processing units (GPUs) when building high-performance applications. GPUs, of course, have long been available for demanding graphics and game applications. CUDA now brings this valuable resource to programmers working on applications in other domains, including science, engineering, and finance. No knowledge of graphics programming is required–just the ability to program in a modestly extended version of C.

 

CUDA by Example, written by two senior members of the CUDA software platform team, shows programmers how to employ this new technology. The authors introduce each area of CUDA development through working examples. After a concise introduction to the CUDA platform and architecture, as well as a quick-start guide to CUDA C, the book details the techniques and trade-offs associated with each key CUDA feature. You’ll discover when to use each CUDA C extension and how to write CUDA software that delivers truly outstanding performance.

 

Major topics covered include

  • Parallel programming
  • Thread cooperation
  • Constant memory and events
  • Texture memory
  • Graphics interoperability
  • Atomics
  • Streams
  • CUDA C on multiple GPUs
  • Advanced atomics
  • Additional CUDA resources

All the CUDA software tools you’ll need are freely available for download from NVIDIA.

http://developer.nvidia.com/object/cuda-by-example.html
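As a rough illustration of the execution model the description alludes to (this is my own plain-Python sketch, not an example from the book): a CUDA C kernel typically computes a global index as blockIdx.x * blockDim.x + threadIdx.x, with each thread handling one element. The loop below mimics that index arithmetic sequentially.

```python
def simulated_vector_add(a, b, threads_per_block):
    """Mimic CUDA's grid/block indexing for a vector add in plain Python.

    Each (block, thread) pair computes one global index, exactly as a
    CUDA C kernel would via blockIdx.x * blockDim.x + threadIdx.x.
    """
    n = len(a)
    blocks = (n + threads_per_block - 1) // threads_per_block  # ceil division
    out = [0] * n
    for block_idx in range(blocks):
        for thread_idx in range(threads_per_block):
            tid = block_idx * threads_per_block + thread_idx
            if tid < n:  # guard against the ragged final block
                out[tid] = a[tid] + b[tid]
    return out

print(simulated_vector_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50], 2))
# [11, 22, 33, 44, 55]
```

On a GPU all of these (block, thread) pairs run concurrently; the guard on `tid` is the standard idiom for array lengths that don't divide evenly into blocks.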

Customer Reviews (14)

5-0 out of 5 stars Perfect for professional programming collections
A recommended pick is Jason Sanders and Edward Kandrot's CUDA BY EXAMPLE: AN INTRODUCTION TO GENERAL-PURPOSE GPU PROGRAMMING. It's a fine pick for advanced programming collections where parallel programming is of interest, covering a computing architecture designed to support parallel programs. Two senior members of the CUDA software platform team up to offer this in-depth coverage, perfect for professional programming collections.

1-0 out of 5 stars Poorly executed
This book initially seemed like it would be a good set of tutorials on development with CUDA. It does roughly that, but the examples are poorly explained and more often than not require the reader to go online to figure out what they just did. A book like this should have as its strength precisely that it is self-contained, with a full set of directions and explanations.

Definitely skip this one

2-0 out of 5 stars Fair starting point, but definitely not the only book you should read.
I've done some work with CUDA and read a number of books and tutorials. This book does a very good job of relating the syntax and structure, but it really doesn't go beyond showing you how to get your code to compile when using different features. It does not show you how to write efficient CUDA code (getting a 7x speedup on a card running 960 threads simultaneously should *not* be considered very impressive; we've recently gotten >60x speedups, but using concepts that aren't covered in this text). I know the book industry doesn't turn on a dime, so I can certainly understand that no specific discussion is given to Fermi (though the book does list those cards), and there are (I think they claim) 200 million non-Fermi cards out there, so there is still more than enough reason to write apps that need to know how these "older" cards work. You really need a reference that will also discuss optimizing for register use, coalesced memory accesses, divergence, etc. in much greater depth.

So, given the low price, it's a useful buy if you prefer a book instead of going through some online tutorials. But, if you want to write fast, efficient code, don't stop at this book.

5-0 out of 5 stars excellent introduction
Very well written; the authors (who are on the CUDA design team) seem to understand what people find confusing about GPU programming and take great pains to go slowly and explain things. Some other reviewers claim that the book is out of date at publication -- I view this as a lame complaint. This is an introductory book. If you work through it and understand it, you will not have too much trouble catching up to the latest features.

5-0 out of 5 stars Hits the mark almost perfectly (read the title)
At least one reviewer seems to have read the book, but missed the title. This is unfortunate. The book is enjoyable and informative from the start. It gives practical examples that get you started with NVIDIA hardware right away. This is an introduction, folks. The very first page describes the objectives and prerequisites, which, in brief are: provide an overview of techniques for interfacing with GPU hardware, using CUDA, using a basic knowledge of C - a pretty low threshold. It is explicitly stated that the examples are generally not intended for production use, but instead have been created with the goal of comprehension. I absolutely applaud this approach.

Although there are very nice examples, you will not find coverage of advanced strategies in parallel or high performance computing. I have the experience to appreciate the topics that are "missing", but coverage of many of those things would really be out of scope for this work, and almost certainly be a distraction.

It's very rare that an author, or team of authors take on the task of teaching a specific technology without having their effort degrade into the production of a huge, incomprehensible tome of endless, redundant screenshots and half-baked everything-to-everybody code blocks. I suspect that the participation of the team at AW had something to do with the success of this book, as I have noticed a certain pattern of solid, on-topic references from them.

In any case, if your goal is to get started, this book is for you. If your goal is to produce very high performance code - which it probably is in the future - then this book will be a great companion to others that have covered or will cover distributed processing theory.


27. Patterns for Parallel Programming
by Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill
Hardcover: 384 Pages (2004-09-25)
list price: US$64.99 -- used & new: US$46.50
Asin: 0321228111
Average Customer Review: 3.5 out of 5 stars
Editorial Review

Product Description

The Parallel Programming Guide for Every Software Developer

From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

That's where Patterns for Parallel Programming comes in. It's the first parallel programming guide written specifically to serve working software developers, not just computer scientists. The authors introduce a complete, highly accessible pattern language that will help any experienced developer "think parallel" and start writing effective parallel code almost immediately. Instead of formal theory, they deliver proven solutions to the challenges faced by parallel programmers, and pragmatic guidance for using today's parallel APIs in the real world. Coverage includes:

  • Understanding the parallel computing landscape and the challenges faced by parallel developers
  • Finding the concurrency in a software design problem and decomposing it into concurrent tasks
  • Managing the use of data across tasks
  • Creating an algorithm structure that effectively exploits the concurrency you've identified
  • Connecting your algorithmic structures to the APIs needed to implement them
  • Specific software constructs for implementing parallel programs
  • Working with today's leading parallel programming environments: OpenMP, MPI, and Java

Patterns have helped thousands of programmers master object-oriented development and other complex programming technologies. With this book, you will learn that they're the best way to master parallel programming too.
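To give a flavor of the "finding the concurrency" step the coverage list mentions, here is a minimal sketch (my own illustration, not from the book) of a map-style task decomposition using Python's concurrent.futures. A thread pool is used to keep the sketch portable; CPU-bound Python code would normally use a process pool instead.

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    """Trial-division primality test; the per-number work is independent."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def count_primes(bounds):
    """One task: count primes in the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(1 for n in range(lo, hi) if is_prime(n))

def parallel_prime_count(n, tasks=4):
    # Finding the concurrency: the chunks share no data, so each can
    # run as an independent task; a reduction (sum) combines results.
    chunks = [(i * n // tasks, (i + 1) * n // tasks) for i in range(tasks)]
    with ThreadPoolExecutor(max_workers=tasks) as pool:
        return sum(pool.map(count_primes, chunks))

print(parallel_prime_count(100))  # 25 primes below 100
```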




Customer Reviews (7)

4-0 out of 5 stars A pretty decent guide to parallel programming
"Patterns for Parallel Programming" (PPP) is the outcome of a collaboration between Timothy Mattson of Intel and Beverly Sanders & Berna Massingill (who are academic researchers). It introduces a pattern language for parallel programming, and uses OpenMP, MPI, and Java to flesh out the related patterns.

The Good: this volume discusses both shared-memory and distributed-memory programming, all between one set of covers. It also makes use of a general-purpose programming language and is therefore of interest both to computational scientists who are interested in clusters, and to programmers interested in multiprocessors (these days that covers pretty much everyone). More generally, PPP offers valuable advice to those interested in robust parallel software design. The authors cover a number of topics that are an essential part of parallel-programming lore (e.g. the 1D and 2D block-cyclic array distributions in Chapter 5). In other words, they codify existing knowledge, which is precisely what patterns are supposed to do. To accomplish this, they make effective use of a small number of examples (like molecular dynamics and the Mandelbrot set). That allows them to show a specific problem as approached both from different design spaces, and also from different patterns within one design space. This book follows in the footsteps of the illustrious volume "Design Patterns" by the Gang of Four (GoF). In chapters 3, 4, and 5, Mattson, Sanders, and Massingill introduce a number of patterns using a simplified version of the GoF template. Despite the structural similarities between the two books, PPP is more readable than the GoF volume. This is probably because it introduces a pattern language ("an organized way of navigating through a collection of design patterns to produce a design"), not just a collection of patterns. Essentially, the writing style is a linear combination of narrative and reference: it can be read cover-to-cover, or not. Finally, the three appendices contain introductory discussions of OpenMP, MPI, and concurrency in Java, respectively. They can be read either as the need arises, or before even starting the book: though limited in scope, they are pedagogically sound.

The Bad: despite being easier to read from start to finish than the GoF classic, this book is still constrained by its choice to catalog patterns. As a result, the recurring examples lead to repetition, since they have to be re-introduced in each example section. Also, given that the book was published in 2004, a few implementation-related topics are somewhat out-of-date (e.g., OpenMP 3.0 was not around at the time). Importantly, the book predates the recent explosion of interest in general-purpose GPU programming, so it doesn't mention, say, texture memory. However, more fundamental things like data decomposition, which the book does explain, are related to any parallel programming environment. On a different note, even though the book is generally readable, from time to time the authors resort to the "just look at the code and figure it out" technique: the best-known example is in chapter 4 when they discuss ghost cells and nonblocking communication. Furthermore, even though the authors have been for the most part clearheaded when naming the different patterns, I found their decision to call two distinct patterns "Data Sharing" and "Shared Data" (in the "Finding Concurrency" and "Supporting Structures" design spaces, respectively) quite confusing and therefore unfortunate. Also, the Glossary is very useful, in that it explains many terms either discussed in the text (e.g. "False sharing") or not (e.g. "Copy on write", "Eager evaluation"), but it is far from complete (e.g. "First touch", "Poison pill", and "Work stealing", though mentioned in the main text, are not included in the Glossary). Finally, I think the authors overstate the case when they claim that "the parallel programming community has converged around" Java: Pthreads would have been an equally (if not more) acceptable choice.

All in all, this book provides a good description of many aspects of parallel programming. Most other texts on parallel programming either are class textbooks or focus on a specific technology. In contradistinction to such books, "Patterns for parallel programming" strikes a happy medium between focusing on principles and discussing practical applications.

Alex Gezerlis

1-0 out of 5 stars A total waste of money
When I bought this book, I was hoping that the word 'patterns' in its title was only there to make it buzzword compliant. But sadly not. It is one of those completely useless pattern books that long-windedly explain what you should do, without telling you how or why. Moreover, all those explanations are about things that you find out during the first day you actually sit down and try to do some parallel programming.

4-0 out of 5 stars Probably one of the best books on this subject
A little dry and a little repetitive but only to a small degree. The subject is (necessarily) approached from several different 'points of view' so some repetition is to be expected, but this should not discourage you from buying and reading this book, it is one of the most readable and affordable books on this topic. I highly recommend this book.

4-0 out of 5 stars Easy to read and useful content
Normally design pattern books are things that you dip into rather than read end to end, simply because they can be very dry reading. Not this one - as long as you have an interest in parallel programming, reading this end to end should be easy. But that's not to say that you couldn't just dip in to the bits that are most applicable to your work - I'm sure you could.

Many of the examples given of where each pattern is used are in industry sectors other than where I work, but with such good descriptions of each pattern it is easy to picture where they are used other than the examples given and to identify where you have used them yourself without previously knowing that you were using a "named" pattern even if you have been doing it that way for years.

Much of the material in this book is stuff that is hard to find elsewhere. I've heard bits of it at Intel seminars or touched on in Intel books (e.g. the Threading Building Blocks book), but otherwise have not seen this stuff in print, even though many people (possibly unknowingly) are implementing the same ideas in code.

Excellent book. I've knocked one star off though, simply because the authors work on the premise that almost everyone is using one of OpenMP, MPI or Java. In practice, there are still an awful lot of people implementing such systems using C++ with either native threading APIs or third party libraries wrapping those threading APIs.

4-0 out of 5 stars Read this book
This is a very good book: It will start teaching you how to think about parallel programming and will help you get started in this area.

Why only four stars, you may ask? The trouble is that after more than 40 years, knowledge about parallel programming is still weak. The scientific computation folks have their (often heavy-duty) tricks of the trade, but, as another reviewer pointed out, parallel computing is much more than that and is starting to address much broader areas.

This book will help you wade through the maze of confusion and will help you get oriented - that is of a huge help. Then you need to practice... ... Read more


28. An Introduction to Parallel Programming
by Peter Pacheco
Hardcover: 392 Pages (2011-01-18)
list price: US$79.95 -- used & new: US$73.64
(price subject to change: see help)
Asin: 0123742609
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

Author Peter Pacheco uses a tutorial approach to show students how to develop effective parallel programs with MPI, PThreads, and OpenMP. The first undergraduate text to directly address compiling and running parallel programs on the new multi-core and cluster architecture, An Introduction to Parallel Programming explains how to design, debug, and evaluate the performance of distributed and shared-memory programs. User-friendly exercises teach students how to compile, run and modify example programs.



Key features:

  • Takes a tutorial approach, starting with small programming examples and building progressively to more challenging examples

  • Focuses on designing, debugging and evaluating the performance of distributed and shared-memory programs

  • Explains how to develop parallel programs using MPI, Pthreads, and OpenMP programming models
  • ... Read more

    29. Parallel Computing Works!
    by Geoffrey C. Fox, Roy D. Williams, Giuseppe C. Messina
    Hardcover: 977 Pages (1994-05-15)
    list price: US$122.00 -- used & new: US$20.99
    (price subject to change: see help)
    Asin: 1558602534
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description

    A clear illustration of how parallel computers can be successfully applied
    to large-scale scientific computations. This book demonstrates how a
    variety of applications in physics, biology, mathematics and other sciences
    were implemented on real parallel computers to produce new scientific
    results. It investigates issues of fine-grained parallelism relevant for
    future supercomputers with particular emphasis on hypercube architecture.



    The authors describe how they used an experimental approach to configure
    different massively parallel machines, design and implement basic system
    software, and develop algorithms for frequently used mathematical
    computations. They also devise performance models, measure the performance
    characteristics of several computers, and create a high-performance
    computing facility based exclusively on parallel computers. By addressing
    all issues involved in scientific problem solving, Parallel Computing
    Works!
    provides valuable insight into computational science for large-scale
    parallel architectures. For those in the sciences, the findings reveal the
    usefulness of an important experimental tool. Anyone in supercomputing and
    related computational fields will gain a new perspective on the potential
    contributions of parallelism. Includes over 30 full-color illustrations.

    ... Read more

    30. Handbook of Parallel Computing and Statistics (Statistics:A Series of Textbooks and Monographs)
    Hardcover: 552 Pages (2005-12-21)
    list price: US$129.95 -- used & new: US$121.13
    (price subject to change: see help)
    Asin: 082474067X
    Average Customer Review: 4.5 out of 5 stars
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    Technological improvements continue to push back the frontier of processor speed in modern computers. Unfortunately, the computational intensity demanded by modern research problems grows even faster. Parallel computing has emerged as the most successful bridge to this computational gap, and many popular solutions have emerged based on its concepts, such as grid computing and massively parallel supercomputers. The Handbook of Parallel Computing and Statistics systematically applies the principles of parallel computing for solving increasingly complex problems in statistics research.

    This unique reference weaves together the principles and theoretical models of parallel computing with the design, analysis, and application of algorithms for solving statistical problems. After a brief introduction to parallel computing, the book explores the architecture, programming, and computational aspects of parallel processing. Focus then turns to optimization methods followed by statistical applications. These applications include algorithms for predictive modeling, adaptive design, real-time estimation of higher-order moments and cumulants, data mining, econometrics, and Bayesian computation. Expert contributors summarize recent results and explore new directions in these areas.

    Its intricate combination of theory and practical applications makes the Handbook of Parallel Computing and Statistics an ideal companion for helping solve the abundance of computation-intensive statistical problems arising in a variety of fields. ... Read more

    Customer Reviews (2)

    5-0 out of 5 stars Excellent Overview of the State of the Art
    It came as somewhat of a surprise to the industry that coupling together several PC's enabled the construction of what was in effect a supercomputer at a small fraction of the cost.

    What began twenty or so years ago has now influenced the design of CPUs and the interconnection 'LANs' that facilitate the transfer of data between the processors. And this clearly hasn't stopped. The AMD Opteron CPUs and Intel's PCI-Express are simply the latest innovations in silicon, and more is coming.

    From a system architecture standpoint, we have (and the book discusses) clusters, Grids, and distributed processor systems -- all of which are fairly loosely defined, with plenty of room for very good discussions over several beers.

    What this book brings is an excellent introduction to the state of the art in parallel computers as it exists today. As is often the case with books that are pushing the state of the art, it is written by a large number of experts and edited together. Each chapter covers a particular area in depth, from the design of the hardware to the languages (primarily Fortran and Java), to the solution of a series of common problems that are frequent in several different application areas.

    This book is an excellent summary of parallel computing as it exists today. It would be of particular help to the person responsible for writing the proposal for an organization to buy/build one. The book is probably a bit too advanced for a course at an undergraduate level, but would be excellent for first year graduate students in a wide variety of fields from computer science to bio-informatics, data mining, cryptography or any number of other fields requiring heavy duty computation.

    4-0 out of 5 stars brings together a comprehensive review
    A sturdy review of the different geometries of parallel machines, and of how to program them. Examples are given of algorithms that can be efficiently ported to these machines. The handbook is useful in summarising a lot of results scattered over conference proceedings and journal papers. The range of applications described is impressive.

    The text is probably suited for a graduate level course. A bit too specialised for most undergrad CS majors. ... Read more


    31. Parallel Programming in OpenMP
    by Rohit Chandra, Ramesh Menon, Leo Dagum, David Kohr, Dror Maydan, Jeff McDonald
    Hardcover: 231 Pages (2000-10-16)
    list price: US$60.95 -- used & new: US$51.90
    (price subject to change: see help)
    Asin: 1558606718
    Average Customer Review: 4.0 out of 5 stars
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description


    The rapid and widespread acceptance of shared-memory multiprocessor architectures has created a pressing demand for an efficient way to program these systems. At the same time, developers of technical and scientific applications in industry and in government laboratories find they need to parallelize huge volumes of code in a portable fashion. OpenMP, developed jointly by several parallel computing vendors to address these issues, is an industry-wide standard for programming shared-memory and distributed shared-memory multiprocessors. It consists of a set of compiler directives and library routines that extend FORTRAN, C, and C++ codes to express shared-memory parallelism.


    Parallel Programming in OpenMP is the first book to teach both the novice and expert parallel programmers how to program using this new standard. The authors, who helped design and implement OpenMP while at SGI, bring a depth and breadth to the book as compiler writers, application developers, and performance engineers.

    * Designed so that expert parallel programmers can skip the opening chapters, which introduce parallel programming to novices, and jump right into the essentials of OpenMP.
    * Presents all the basic OpenMP constructs in FORTRAN, C, and C++.
    * Emphasizes practical concepts to address the concerns of real application developers.
    * Includes high quality example programs that illustrate concepts of parallel programming as well as all the constructs of OpenMP.
    * Serves as both an effective teaching text and a compact reference.
    * Includes end-of-chapter programming exercises.

    Amazon.com Review
    The OpenMP standard allows programmers to take advantage of new shared-memory multiprocessor systems from vendors like Compaq, Sun, HP, and SGI. Aimed at the working researcher or scientific C/C++ or Fortran programmer, Parallel Programming in OpenMP both explains what this standard is and how to use it to create software that takes full advantage of parallel computing.

    At its heart, OpenMP is remarkably simple. By adding a handful of compiler directives (or pragmas) in Fortran or C/C++, plus a few optional library calls, programmers can "parallelize" existing software without completely rewriting it. This book starts with simple examples of how to parallelize "loops"--iterative code that in scientific software might work with very large arrays. Sample code relies primarily on Fortran (undoubtedly the language of choice for high-end numerical software) with descriptions of the equivalent calls and strategies in C/C++. Each sample is thoroughly explained, and though the style in this book is occasionally dense, it does manage to give plenty of practical advice on how to make code run in parallel efficiently. The techniques explored include how to tweak the default parallelized directives for specific situations, how to use parallel regions (beyond simple loops), and the dos and don'ts of effective synchronization (with critical sections and barriers). The book finishes up with some excellent advice for how to cooperate with the cache mechanisms of today's OpenMP-compliant systems.

    Overall, Parallel Programming in OpenMP introduces the competent research programmer to a new vocabulary of idioms and techniques for parallelizing software using OpenMP. Of course, this standard will continue to be used primarily for academic or research computing, but now that OpenMP machines by major commercial vendors are available, even business users can benefit from this technology--for high-end forecasting and modeling, for instance. This book fills a useful niche by describing this powerful new development in parallel computing. --Richard Dragan

    Topics covered:

    • Overview of the OpenMP programming standard for shared-memory multiprocessors
    • Description of OpenMP parallel hardware
    • OpenMP directives for Fortran and pragmas for C/C++
    • Parallelizing simple loops
    • parallel do / parallel for directives
    • Shared and private scoping for thread variables
    • reduction operations
    • Data dependencies and how to remove them
    • OpenMP performance issues (sufficient work, balancing the load in loops, scheduling options)
    • Parallel regions
    • How to parallelize arbitrary blocks of code (master and slave threads, threadprivate directives and the copyin clause)
    • Parallel task queues
    • Dividing work based on thread numbers
    • Noniterative work sharing
    • Restrictions on work-sharing
    • Orphaning
    • Nested parallel regions
    • Controlling parallelism in OpenMP, including controlling the number of threads, dynamic threads, and OpenMP library calls for threads
    • OpenMP synchronization
    • Avoiding data races
    • Critical section directives (named and nested critical sections, and the atomic directive)
    • Runtime OpenMP library lock routines
    • Event synchronization (barrier directives and ordered sections)
    • Custom synchronization, including the flush directive
    • Programming tips for synchronization
    • Performance issues with OpenMP
    • Amdahl's Law
    • Load balancing for parallelized code
    • Hints for writing parallelized code that fits into processor caches
    • Avoiding false sharing
    • Synchronization hints
    • Performance issues for bus-based and Non-Uniform Memory Access (NUMA) machines
    • OpenMP quick reference
    ... Read more

    Customer Reviews (5)

    4-0 out of 5 stars Many Tips And Pitfalls
    Hoping for just information on OpenMP, I was pleased to find much information about issues with parallelizing algorithms. In fact, OpenMP itself is actually very tiny, easily fitting on a few quick reference cards. Applying OpenMP, or any multithreading for that matter, is what actually determines success. I was particularly pleased with the section on cache lines and their impact on design.

    4-0 out of 5 stars classic how to but too heavy a focus on fortran
    I found this book to be a well-written, ground-up how-to on OpenMP. It is approachable by someone not well versed in parallel programming. I believe it was written before the wide-scale advent of multi-core architectures, and in those pre-multi-core days most users of OpenMP would have been in the scientific community, interested largely in speeding up Fortran codes. So the focus on the Fortran constructs is understandable. However, in today's world, with every desktop equipped with a multi-core CPU, the book would be better with a stronger focus on C++. Despite the heavy focus on Fortran examples, the book does include information on using OpenMP from C++. I rate the book highly because of its clarity, approachability, and style, and hope future editions have a stronger showing of C++ examples.

    4-0 out of 5 stars Great introduction
    Chandra et al. have put together a readable, helpful introduction to parallel programming with OpenMP. Unlike its major competitor, MPI, OpenMP assumes a shared memory model, in which every processor has the same view of the same address space as every other. At least as a start, this cuts the intellectual load way down. The programmer adds just one concept to the problem, parallelism, without adding buffering, communication networks, and lots of other stuff as well.

    After two introductory chapters, the authors introduce OpenMP in three stages: loop parallelism, general parallelism, and synchronization, roughly in order of increasing complexity. The authors present the necessary OpenMP pragmas and APIs at each step, showing how they address the immediate problems. An appendix summarizes the pragmas and APIs, in both their C/C++ and Fortran forms. OO C++ programmers may be dismayed by the amount of attention paid to an un-cool language like Fortran, but need to realize that it's still the lingua franca of performance programming. And, in fairness, the authors spend equal time on C++ idiosyncrasies, such as constructor invocations for variables that are silently replicated in each of the parallel threads.

    If you've ever done performance programming, you're groaningly aware that getting the parallelism right is actually the easy part. The tricky parts come in breaking dependencies, in scheduling, in ensuring spatial and temporal locality, and in dealing with cache coherency issues of multiprocessors. The authors give great introductions to all of the basics. This includes a patient description of how caches actually work, since there's a new crop of beginners every day. The authors describe performance analysis tools, but only briefly. The tools differ so much between vendors and between one rev and the next, that any detailed description would be useless to most readers immediately and obsolete for all readers very soon.

    This won't turn a beginner into the guru of performance computing. It will, however, establish a working competence in one popular parallelization tool, OpenMP, and in the computing technologies that affect parallel performance.

    //wiredweird

    4-0 out of 5 stars has many Fortran 77 but not Fortran 95 examples
    This book has many examples of how to parallelize Fortran 77 programs with loops using OpenMP directives, but coverage of how to parallelize Fortran 95 code using array operations is sparse. For this, one should read the tutorial "Parallel Programming in Fortran 95 using OpenMP", by Miguel Hermanns, available at the OpenMP web site.

    4-0 out of 5 stars Clear and concise
    This is probably the first book about OpenMP. The authors have described the uses of many functions and directives of OpenMP. The examples given (in Fortran and C) are also useful. Generally, this is a good book to get you started off with OpenMP. ... Read more


    32. Foundations of Multithreaded, Parallel, and Distributed Programming
    by Gregory R. Andrews
    Paperback: 664 Pages (1999-12-10)
    list price: US$80.00 -- used & new: US$60.00
    (price subject to change: see help)
    Asin: 0201357526
    Average Customer Review: 3.5 out of 5 stars
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    Foundations of Multithreaded, Parallel, and Distributed Programming covers-and then applies-the core concepts and techniques needed for an introductory course in this topic. The book emphasizes the practice and application of parallel systems, using real-world examples throughout.

    Greg Andrews teaches the fundamental concepts of multithreaded, parallel and distributed computing and relates them to the implementation and performance processes. He presents the appropriate breadth of topics and supports these discussions with an emphasis on performance. ... Read more

    Customer Reviews (6)

    1-0 out of 5 stars Difficult to follow with bad examples.
    I took this class with Greg Andrews, and he is as bad a teacher as he is a textbook author. He is jumpy and scattered while teaching, and his book is organized similarly: very hard to follow, and it doesn't explain things really well. I'm a book learner, and this text is not the right one for the self-taught type. So if you want to learn about parallel computing, try another book, and hopefully this isn't a required text!

    5-0 out of 5 stars Wide and deep on fundamentals, and timely
    First, a little about me and of what use I found the book. Maybe you are looking to determine whether this book is suitable for your purposes too. I studied this book cover to cover. This took a little over one year. However, I have to add that to this day I did not yet solve the example problems, which are plentiful and probably carefully designed and necessary for understanding all the ins and outs and cementing the lessons into your head. But I work all day writing software for a living and felt I needed a change of pace, meaning I wanted to spend time studying and absorbing new information after work hours this year rather than writing still more code at night for now. I've done the night coding thing many a time already. I undoubtedly will shortly be writing code using the book's concepts. That's really the point of me reading it: to write a lot more concurrent code, and to do it better. To my detriment, I did not have the benefit of enrolling in a real upper-level undergraduate course or a graduate course guided by an instructor, as is probably more typically the case with this textbook's readers. To my benefit, during my day work I had already designed and developed some relatively successful (meaning people are actually using them today, and they paid some money for it) multithreaded, parallel, and distributed applications, such as a web statistics and analytics clustered parallel application, and communication middleware for an access control system. At mid-career it occurred to me that, having already made concurrent systems by relying on my cultivated developer's intuition and OJT, now was maybe a good time to try to fill the gaps in my knowledge of concurrent programming and climb to higher levels of mastery in this specialty.

    Parallelism is making a strong comeback again in all sorts of systems small and large, such as GPUs and multicore CPUs as well as the giant Beowulf clusters and the classical many-processor supercomputers. The heat generation that comes with ever faster clock cycles has put up a pretty tough barrier in the way for the famous CPU chip makers. You can see for yourself that the rise in the gigahertz numbers has flattened out lately, and now it's the number of cores which is beginning to rise in commodity computing equipment, instead of the frequency of (serial) operations. The new parallel generation is happening now after a decade or two of diversion into emphasizing faster sequential processors, Intel and AMD being the notables in that effort. Well, now the Intels and AMDs have pushed sequential processing and hardware instruction pipelining really fast, but maybe they also have found the thermal limits, and the memory speed limits, and exploited all the pipelining and predictive branching that they could, and now they have to find something else to make similar progress going forward to make Mr. Moore and his Law work right again. Increasing the total aggregate throughput of operations executed across multiple processors or cores appears to be a way forward now in computing performance.

    I needed some good, thorough material to serve as the center pillar of a concurrent programming learning initiative.

    In selecting this book over the other textbooks available (some seemed specialized or narrow in scope, and some were too formal or dense in their presentation for do-it-yourselfers), I noticed on the web that a pretty large number of universities are using the textbook by Andrews for their introductory course in parallel programming or concurrent programming.

    Now notice that the book was copyrighted in 2000, which is almost a decade ago now and is a bit of a limitation, yet the reality is this is not quite a problem. You can fill in the rare gap using other sources of information, like other books, as well as online course lecture videos graciously provided for free public consumption at web sites like the Cloudera company, and the MIT and UC Berkeley university web sites. Keep in mind there is a reason why a book becomes a classic and keeps being used at universities. Consequently, however, the famous MapReduce is not really represented in the book. Globus, an earlier distributed framework, is mentioned though. Google and Yahoo and Facebook, and other such sites who are now programming innovatively in the very large for mostly nonscientific applications, would not hit the big time and share their concurrent computing innovations with the public until a couple of years after this book was written.

    In my opinion, today's massively parallel applications underpinning a few of the famous web sites might well now have some of the world's biggest concurrent application clusters, even rivaling supercomputers, since the "supers" don't seem to concurrently use "clusters of clusters" linked across the whole planet, like MapReduce already does every day for millions of users. The traditional supercomputers, on the other hand, even the biggest, baddest ones hitting the top-500 fastest lists, seem to be located at just one site at a time, if I'm not mistaken. And I'm not talking about content caching or simple load balancing at Google, because GFS and MapReduce as a parallel coordination language is much more than simple web site front-ending. MapReduce is an application development framework. I suspect Google and its globe-spanning cluster application might be even faster than any of the world's fastest single-site highly parallel supercomputers doing the atom bomb development simulations and cryptanalysis and communications traffic analytics for the DOE or military. The actual numbers are unknown, it seems, but I suspect the world's largest Beowulf cluster is already in use at Google, and they might already be achieving application and system concurrency across perhaps a half-million compute nodes.

    Also, Single Instruction Multiple Data (SIMD) programming is not covered enough to my taste in Andrews' book. Yet I want to program some massively parallel SIMD GPUs as seen a lot lately in daughterboards or "video cards." There is a 30-processor GPU with thousands of parallel hardware threads, organized in a multi-level thread/warp/block hierarchy, with its own separate NUMA memory subsystem, running in my workstation right now as I write this review. Also, I understand Cray just announced their intent to include SIMD GPUs in an upcoming supercomputer. So SIMD is making a comeback. But the book provides nearly no instruction for learning SIMD design and coding. I am left to prepare for SIMD using other sources like the vendor-specific NVIDIA CUDA documentation or perhaps the nascent OpenCL language. SIMD computing existed long before the year 2000, but at the time of the book, SIMD had already fallen out of favor, apparently because Multiple Instruction Multiple Data (MIMD) architecture had pretty much taken over the computer scientists' attention, and this book reflects that.

    Despite the minor quibbles, it is accurate to say the book has a broad coverage of topics within the field. And, at the same time, there is the depth and detail such that the reader will develop a feeling of being equipped for a good range of languages and communication models, whichever you eventually select for the job at hand. Like the author says, you can only put so many pages into a single textbook. There are probably whole books out there about any one of the chapters in this book. The bread-and-butter skill of how to effectively think about concurrent and parallel and distributed systems of nearly all types is presented with clarity and simplicity. This benefit is really the strength of this book. There are many examples built from different models and approaches, so you get a sense of what works in what situation. There are also plenty of pseudocode and near-code examples in many languages. Make no mistake, there is also a significant amount of detail and depth of instruction on the essentials, such as building correct, high-performance, and fair critical sections of shared memory. The reader develops a sense for fine-grained and coarse-grained concurrency; effective control of nondeterministic instruction histories; shared-memory versus distributed-memory programming; parallel and sequential and distributed and concurrent programming (each is different); concurrent systems-level programming versus concurrent applications-level programming; surveys of important features in different languages, including their strengths and weaknesses with regard to suitability for your system including hardware, network, and software; and parallelizing compilers and language abstractions. You will develop readiness to tackle situations with Ps and Vs (semaphores), monitors, message passing, pthreads, and critical sections.

    Now, please put aside the tone of the minor criticisms I told you earlier.Andrews book is far more timely than you might have been thinking.Let me demonstrate.Yesterday a brand new language named "Go" was made available to the public by Google.Google appears to be presenting Go as an open-source concurrent-friendly systems programming language.Reading the tutorial for Go, one can see Go provides intrinsics which largely mimic the CSP style of synchronous message passing for interprocess communication and synchronization.Anyone who reads Andrews book will spot the CSP similarity quickly in the new Go language.Moreover the reader will also bring a good preparation on how to use Go's communication model.That's because Andrews' book has provided its readers with good instruction on the synchronous message passing model.Having read Andrews' book, I expect you are better prepared to start programming with what is perhaps one of the more sophisticated and essential parts of the Go language, its intrinsic message passing model.I would suggest that someone who approaches Go without any preparatory knowledge of CSP's guarded synchronous communications model might be at risk of getting mired down in confusion for a while.Go supports concurrency, yet you will not find multithreading and forks and joins in it if I'm understanding it correctly.Those won't be seen explicitly because with message passing those are absent.Perhaps multithreading is the only concurrent technology that some programmers have used, especially if C and pthreads or java or C#.net are among your other area of expertise.My suggestion is to pick up a copy of Andrews book if you want to program some concurrent systems using message passing like Go's.I don't expect a vendor's language tutorial is going to have the time and space to provide all the educational tools, such as a comparitive analysis in different application situations. 
The book explains the communication and synchronization model in the context of several different examples; further, it illustrates where this model is efficient and simple to code, and where it is not so simple. Asynchronous message-passing code using MPI can be simpler than synchronous code (as seen in Go or CSP), depending on the application at hand. You might also use CSP to model your Go application: CSP is not just a programming language but also a modeling tool, which may be useful for Go algorithm design work.
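The CSP-style rendezvous described above can be sketched in a few lines of Go: an unbuffered channel forces sender and receiver to synchronize, much like CSP's paired `ch!x` / `ch?y` operations. The `square` worker and the channel names here are invented for illustration, not taken from any tutorial.

```go
package main

import "fmt"

// square receives requests on in and replies on out; both channels
// are unbuffered, so every send is a synchronous rendezvous.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n // blocks until the caller is ready to receive
	}
	close(out)
}

func main() {
	in := make(chan int)  // unbuffered: a send waits for a receiver
	out := make(chan int)
	go square(in, out)

	for i := 1; i <= 3; i++ {
		in <- i // rendezvous with the square goroutine
		fmt.Println(<-out)
	}
	close(in)
}
```

This prints 1, 4, and 9. Note there is no explicit thread creation, locking, or join in the caller's code; synchronization is carried entirely by the communication, which is exactly the CSP flavor the review is pointing at.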

    The book was (relatively) easy to read independently compared to some other books I've purchased on similar subjects. The worst ones lie dormant on my shelf, while my personal copy of the Andrews book has begun showing some wear and real usage... and that's a good thing. The author obviously knows a lot about this subject, yet he still manages the sympathy and patience for the beginner. He tends to provide the details that cannot be taken for granted in the background of a nonexpert (future expert). Some other books, by contrast, make too-big logical jumps, or use a terseness that leaves you shaking your head and rereading the same sentences over and over as if something were left out. It is apparent that Andrews is more than a computer scientist; he is also a teacher. Independent students can read his book effectively because getting stuck and frustrated is rare, an important boon in the absence of an expert instructor.

    As Andrews says up front, while his book is broad and has some depth where it's needed, it is still not the only book you will need. This is not much of a problem in the big picture, especially with the web. I found good satisfaction in having studied Andrews' broad introductory book closely. Moving ahead from there, it was useful to study complementary materials: the UC Berkeley ParLab short-course lecture videos (free), Prof. Demmel's CS267 Applications of Parallel Computers course lecture videos (also free), the linear algebra and calculus material available on free video at MIT's OpenCourseWare site, and the more narrowly scoped but interesting textbook Patterns for Parallel Programming by Mattson et al.

    Overall, Prof. Andrews' book is a strong basis for learning the principles of design and programming of parallel, concurrent, and distributed systems.

    5-0 out of 5 stars Dr. Andrews knows this topic
    The author of this text, Dr. Andrews, has dealt with the theory and implementation of parallel, multithreaded, and distributed computer systems since the 70s. I was fortunate to take his class at the University of Arizona, in which this book was used as the primary text.

    Unlike many textbooks of its ilk, this one does use coded examples, but they are not complex excerpts that span several pages. Dr. Andrews does an excellent job of covering the topic in C with POSIX threads, in Java, and in MPD, the language he worked on. Since this topic has been his primary focus, he really knows the subject matter, yet he can explain it in a way that anyone with moderate programming skills can grasp.

    Just like his lectures, the fundamentals and theory presented in each chapter are always well structured and explained, and numerous examples are given to reinforce the topics being taught. I would recommend this book to anyone who requires introductory-to-intermediate exposure to the critical topic of multithreaded, parallel, and distributed programming.

    5-0 out of 5 stars An Excellent Book
    I used this book for Dr. Andrews' parallel computing class many years ago and LOVED it. The book covers a wide range of parallel computing topics and is a good enough reference for 99% of the people out there.

    The book is extremely useful in that it provides actual working example code for nearly all the topics covered, in C or Java or both. The examples are also very small, in most cases less than a page. This is very important because many cases, such as multiple readers with a single writer, are not easy to code from scratch and could easily have synchronization problems unless one has a strong overall grasp of the concept. The book does a very good analysis of their potential pitfalls. Even if you already know the concepts, this book provides a valuable reference and code templates.
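The multiple-readers/single-writer problem mentioned above can be sketched briefly. This version leans on Go's standard-library `sync.RWMutex` rather than reproducing any of the book's own solutions; the `Store` type and its method names are invented for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// Store guards a map with a readers-writer lock: any number of
// readers may hold the lock together, but a writer is exclusive.
type Store struct {
	mu   sync.RWMutex
	data map[string]int
}

func NewStore() *Store { return &Store{data: make(map[string]int)} }

// Read takes a shared (read) lock: many readers proceed at once.
func (s *Store) Read(k string) int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.data[k]
}

// Write takes an exclusive lock: it waits until all readers drain.
func (s *Store) Write(k string, v int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[k] = v
}

func main() {
	s := NewStore()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			s.Write("hits", i) // writers serialize on the exclusive lock
			_ = s.Read("hits") // readers may overlap with each other
		}(i)
	}
	wg.Wait()
	s.Write("final", 42)
	fmt.Println(s.Read("final")) // prints 42
}
```

Getting this behavior right from raw semaphores, with fairness between readers and writers, is exactly the kind of subtlety the reviewer says the book analyzes; the library type hides those pitfalls.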

    Some parts of the book may look like pseudocode to some people, since Dr. Andrews uses the MPD language (which sits on top of C) for a far easier time dealing with many of the computing issues. Although I have never used the MPD language, I find its syntax useful for understanding many of the concepts Dr. Andrews is trying to explain, such as how to partition work into small tasks.
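Partitioning work into small tasks, as mentioned above, has a common idiom in Go: a pool of workers pulling tasks from a channel. The task here (summing squares) and all the names are invented purely for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares partitions the input into one task per element and
// fans the tasks out to a fixed number of worker goroutines.
func sumSquares(nums []int, workers int) int {
	tasks := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range tasks { // each worker pulls small tasks
				results <- n * n
			}
		}()
	}

	go func() { // feed the tasks, then signal completion
		for _, n := range nums {
			tasks <- n
		}
		close(tasks)
		wg.Wait()
		close(results)
	}()

	total := 0
	for r := range results {
		total += r
	}
	return total
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3, 4}, 2)) // prints 30
}
```

The grain size is the design knob: one element per task is the finest-grained split, and batching several elements per task would trade scheduling overhead for load balance.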

    1-0 out of 5 stars I've never read anything worse than this in any subject ever
    I've read a few books on computer science and mathematics, being a university student in both subjects. However, I've never read anything nearly as congested as this book. Basically it's 665 pages of randomly mixed crap. I wonder if Amazon is going to censor this review; still, this is what I think about this book.

    Many parts of this book show that the author has a very poor understanding of mathematics and logical deduction, and pays too little attention to detail. Further, he seems to have quite a bit of flawed intuition about programs, processes, and threads. Okay, that might be harsh, but at least he has no ability to communicate his understanding anyway, so...

    More specifically, I disagree with the following things:
    The author repeats definitions (sometimes three or more times!).
    The author does not explain his weird pseudocode notation, which I additionally think is counter-intuitive. He presents many copies of the same snippet, with the first few versions being incorrect and the versions differing only by a few lines. A great way to take up two whole pages with minimal actual information content.

    The one thing that made me go online and write this was page 73 (I've not read further and I'm not sure I'm going to), where the author delivers a lengthy insult to one's intellect by telling you that a program is BAD if any possible trace of execution leaves the program in a BAD state. Further, he continues to write that GOOD is equivalent to BAD, and then uses these two concepts as if they were each other's logical negations. ... Read more


    33. Concurrent Programming: Fundamental Techniques for Real-Time and Parallel Software Design (Wiley Series in Parallel Computing)
    by Tom Axford
     Paperback: 266 Pages (1989-10-27)
    list price: US$40.95 -- used & new: US$219.16
    (price subject to change: see help)
    Asin: 0471923036
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    A practical introduction to the techniques and algorithms of concurrent programming. Low-level methods commonly used in existing real-time software are covered first, followed by more sophisticated high-level techniques that are increasingly being applied to real-time and parallel systems. Covers a large number of algorithms and a wide variety of concurrency mechanisms and languages. ... Read more


    34. Dlp: A Language for Distributed Logic Programming : Design, Semantics and Implementation (Wiley Series in Parallel Computing)
    by Anton Eliens
     Hardcover: 300 Pages (1992-07)
    list price: US$69.95
    Isbn: 0471931179
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    This text introduces the distributed logic programming language DLP. Distributed logic programming combines logic programming, object-oriented programming and parallelism. A distinguishing feature of DLP with respect to other proposals is the support for distributed backtracking over the results of a rendezvous between objects. A leading interest behind this work has been the question of parallelism in expert system reasoning. Distributed logic programming is a suitable vehicle for the implementation of distributed knowledge-based systems, including expert systems, and systems for distributed problem solving. The complete trajectory of the development of an experimental programming language is covered, paying attention to the design, the semantics and implementation. The text also introduces a multi-paradigm programming language for the implementation of distributed knowledge-based systems by means of small to medium-size examples. ... Read more


    35. Parallel Programming, Models and Applications in Grid and P2P Systems (Advances in Parallel Computing)
    by F. Xhafa
    Hardcover: 350 Pages (2009-05-15)
    list price: US$196.00 -- used & new: US$191.87
    (price subject to change: see help)
    Asin: 1607500043
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    The demand for more computing power has been a constant trend in many fields of science, engineering and business. Now more than ever, the need for more and more processing power is emerging in the resolution of complex problems from life sciences, financial services, drug discovery, weather forecasting, massive data processing for e-science, e-commerce and e-government etc. Grid and P2P paradigms are based on the premise to deliver greater computing power at less cost, thus enabling the solution of such complex problems. Parallel Programming, Models and Applications in Grid and P2P Systems presents recent advances for grid and P2P paradigms, middleware, programming models, communication libraries, as well as their application to the resolution of real-life problems. By approaching grid and P2P paradigms in an integrated and comprehensive way, we believe that this book will serve as a reference for researchers and developers of the grid and P2P computing communities. Important features of the book include an up-to-date survey of grid and P2P programming models, middleware and communication libraries, new approaches for modeling and performance analysis in grid and P2P systems, novel grid and P2P middleware as well as grid and P2P-enabled applications for real-life problems. Academics, scientists, software developers and engineers interested in the grid and P2P paradigms will find the comprehensive coverage of this book useful for their academic, research and development activity.

    IOS Press is an international science, technical and medical publisher of high-quality books for academics, scientists, and professionals in all fields.

    Some of the areas we publish in:

    -Biomedicine
    -Oncology
    -Artificial intelligence
    -Databases and information systems
    -Maritime engineering
    -Nanotechnology
    -Geoengineering
    -All aspects of physics
    -E-governance
    -E-commerce
    -The knowledge economy
    -Urban studies
    -Arms control
    -Understanding and responding to terrorism
    -Medical informatics
    -Computer Sciences ... Read more


    36. Parallel Computing in Science and Engineering: 4th International DFVLR Seminar on Foundations of Engineering Sciences, Bonn, FRG, June 25/26, 1987 (Lecture Notes in Computer Science)
    Paperback: 185 Pages (1988-06-13)
    list price: US$49.95 -- used & new: US$49.52
    (price subject to change: see help)
    Asin: 3540189238
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    It was the aim of the conference to present issues in parallel computing to a community of potential engineering/scientific users. An overview of the state-of-the-art in several important research areas is given by leading scientists in their field. The classification question is taken up at various points, ranging from parametric characterizations, communication structure, and memory distribution to control and execution schemes. Central issues in multiprocessing hardware and operation, such as scalability, techniques of overcoming memory latency and synchronization overhead, as well as fault tolerance of communication networks are discussed. The problem of designing and debugging parallel programs in a user-friendly environment is addressed and a number of program transformations for enhancing vectorization and parallelization in a variety of program situations are described. Two different algorithmic techniques for the solution of certain classes of partial differential equations are discussed. The properties of domain-decomposition algorithms and their mapping onto a CRAY-XMP-type architecture are investigated and an overview is given of the merit of various approaches to exploiting the acceleration potential of multigrid methods. Finally, an abstract performance modeling technique for the behavior of applications on parallel and vector architectures is described. ... Read more


    37. Parallel Computing on Heterogeneous Clusters
    by Alexey L. Lastovetsky
    Hardcover: 350 Pages (2003-08-11)
    list price: US$132.95 -- used & new: US$55.00
    (price subject to change: see help)
    Asin: 0471229822
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    * New approaches to parallel computing are being developed that make better use of the heterogeneous cluster architecture
    * Provides a detailed introduction to parallel computing on heterogeneous clusters
    * All concepts and algorithms are illustrated with working programs that can be compiled and executed on any cluster
    * The algorithms discussed have practical applications in a range of real-life parallel computing problems, such as the N-body problem, portfolio management, and the modeling of oil extraction ... Read more


    38. Parallel Computing: Architectures, Algorithms and Applications - Volume 15 Advances in Parallel Computing
    by C. Bischof
    Hardcover: 824 Pages (2008-03-15)
    list price: US$260.00 -- used & new: US$255.87
    (price subject to change: see help)
    Asin: 158603796X
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    ParCo2007 marks a quarter of a century of the international conferences on parallel computing that started in Berlin in 1983. The aim of the conference is to give an overview of the state-of-the-art of the developments, applications and future trends in high performance computing for all platforms. The conference addresses all aspects of parallel computing, including applications, hardware and software technologies as well as languages and development environments. Special emphasis was placed on the role of high performance processing to solve real-life problems in all areas, including scientific, engineering and multidisciplinary applications and strategies, experiences and conclusions made with respect to parallel computing. The book contains papers covering: 1) Applications; The application of parallel computers to solve computationally challenging problems in the physical and life sciences, engineering, industry and commerce. The treatment of complex multidisciplinary problems occurring in all application areas was discussed. 2) Algorithms; Design, analysis and implementation of generic parallel algorithms, including their scalability, in particular to a large number of processors (MPP), portability and adaptability and 3) Software and Architectures; Software engineering for developing and maintaining parallel software, including parallel programming models and paradigms, development environments, compile-time and run-time tools. A number of symposia on specialized topics formed part of the scientific program. The following topics were covered: Parallel Computing with FPGAs, The Future of OpenMP in the Multi-Core Era, Scalability and Usability of HPC Programming Tools, DEISA: Extreme Computing in an Advanced Supercomputing Environment and Scaling Science Applications on Blue Gene. The conference was organized by the renowned research and teaching institutions Forschungszentrum Julich and the RWTH Aachen University in Germany.



    39. Implementation of Non-Strict Functional Programming Languages (Research Monographs in Parallel and Distributed Computing)
    by Kenneth R. Traub
     Paperback: 185 Pages (1991-03-07)
    list price: US$27.95
    Isbn: 0262700425
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    Modern "non-strict" functional programming languages are a powerful means of programming highly parallel computers but are intrinsically difficult to compile well because decisions about the ordering of subcomputations must be made at the time of compiling. This book presents a new technique for compiling such languages by partitioning a program into sequential threads. While the interleaving of threads can vary at run time, within each thread the order is fixed.

    A program is compiled by analyzing its data dependences and developing from that a set of partitioning constraints. These practical algorithms are founded on a new theory of data dependence and ordering within functional programs, which defines dependence graphs in terms of a rewrite-rule operational semantics for the language.

    By attacking the ordering problem directly, the book departs from previous approaches that obtain partitioning as a by-product of optimizing lazy evaluation, and cleanly separates partitioning from other code-generation issues. Furthermore, the method is flexible enough to produce both lazy code and a less-restrictive "lenient" variant that allows larger threads with only a slight decrease in expressive power. Code generation and optimization are explored in depth for both uniprocessor and multiprocessor targets. ... Read more


    40. Scientific Computing: An Introduction with Parallel Computing
    by Gene H. Golub, James M. Ortega
    Hardcover: 442 Pages (1993-02-03)
    list price: US$83.95 -- used & new: US$55.34
    (price subject to change: see help)
    Asin: 0122892534
    Average Customer Review: 3.0 out of 5 stars
    Canada | United Kingdom | Germany | France | Japan
    Editorial Review

    Product Description
    This book introduces the basic concepts of parallel and vector computing in the context of an introduction to numerical methods. It contains chapters on parallel and vector matrix multiplication and solution of linear systems by direct and iterative methods. It is suitable for advanced undergraduate and beginning graduate courses in computer science, applied mathematics, and engineering. Ideally, students will have access to a parallel or vector computer, but the material can be studied profitably in any case.

    * Gives a modern overview of scientific computing including parallel and vector computation
    * Introduces numerical methods for both ordinary and partial differential equations
    * Has considerable discussion of both direct and iterative methods for linear systems of equations, including parallel and vector algorithms
    * Covers most of the main topics for a first course in numerical methods and can serve as a text for this course
    ... Read more

    Customer Reviews (1)

    3-0 out of 5 stars As the title says: an introduction
    It has all the basic bits that a beginner needs to get started in numerical computation: polynomial approximation, numerical integration, and linear systems. The latter is a real strength, since it offers not just exact techniques (like Gaussian elimination) but iterative techniques, including sparse-system special cases and more general conjugate gradient techniques. There isn't much cut&paste code here, and theorems usually appear only as conclusions, not as arguments. The last sections, on iterative solutions of linear systems, give more in the way of concepts than guidance, but a determined reader will find value.

    Despite its strengths, this book has significant weaknesses. Error analysis is thin, and that's what really sets good analysts ahead of the pack. The book predates wide acceptance of symbolic algebra packages, so it favors Taylor series approximations over the superior but tedious orthogonal polynomials. It notes cache-to-memory penalties under 1:10, where they're typically over 1:100 today. And its discussion of processor architecture and performance barely approaches adequacy, even by 1993 standards. That dates back ten generations of Moore's Law, ten doublings of transistor count, or 1000x. That's a lot, and a world with a Blue Gene in it is a very different place.

    Still, the basics haven't changed. Despite some obscurity in the later chapters, it's still good for the first few things a numerical programmer needs to know, including a little parallelism awareness. If I were teaching a basic course in scientific computation, it would still be in the running when I went to choose a text.

    //wiredweird ... Read more


