Introduction to Parallel Computing, 2e provides a basic, in-depth look at techniques for the design and analysis of parallel algorithms and for programming them on commercially available parallel platforms. It provides broad, balanced coverage of core topics such as sorting, graph algorithms, discrete optimization techniques, data mining algorithms, and a number of other algorithms used in numerical and scientific computing applications.
Published (last): 22 January 2006
Introduction to Parallel Computing is a complete, end-to-end source of information on almost all aspects of parallel computing, from architectures and programming paradigms to algorithms and programming standards.
Table of Contents

1. Introduction to Parallel Computing
   - Motivating Parallelism
   - The Data Communication Argument
   - Scope of Parallel Computing
   - Applications in Engineering and Design
   - Scientific Applications
   - Commercial Applications
   - Applications in Computer Systems
   - Organization and Contents of the Text
   - Bibliographic Remarks; Problems
2. Parallel Programming Platforms
   - Implicit Parallelism: Trends in Microprocessor Architectures
   - Pipelining and Superscalar Execution
   - Very Long Instruction Word Processors
   - Impact of Memory Bandwidth
   - Tradeoffs of Multithreading and Prefetching
   - Dichotomy of Parallel Computing Platforms
   - Control Structure of Parallel Platforms
   - Physical Organization of Parallel Platforms
   - Interconnection Networks for Parallel Computers
   - Evaluating Static Interconnection Networks
   - Evaluating Dynamic Interconnection Networks
   - Communication Costs in Parallel Machines
   - Routing Mechanisms for Interconnection Networks
   - Cost-Performance Tradeoffs
   - Bibliographic Remarks; Problems
3. Principles of Parallel Algorithm Design
   - Preliminaries
   - Decomposition, Tasks, and Dependency Graphs
   - Granularity, Concurrency, and Task-Interaction
   - Processes and Mapping
   - Processes versus Processors
   - Decomposition Techniques
   - Recursive Decomposition
   - Data Decomposition
   - Exploratory Decomposition
   - Speculative Decomposition
   - Hybrid Decompositions
   - Characteristics of Tasks and Interactions
   - Characteristics of Tasks
   - Characteristics of Inter-Task Interactions
   - Mapping Techniques for Load Balancing
   - Methods for Containing Interaction Overheads
   - Maximizing Data Locality
   - Minimizing Contention and Hot Spots
   - Overlapping Computations with Interactions
   - Replicating Data or Computations
   - Using Optimized Collective Interaction Operations
   - Overlapping Interactions with Other Interactions
   - Parallel Algorithm Models
   - The Data-Parallel Model
   - The Task Graph Model
   - The Work Pool Model
   - The Master-Slave Model
   - The Pipeline or Producer-Consumer Model
   - Hybrid Models
   - Bibliographic Remarks; Problems
4. Basic Communication Operations
   - Ring or Linear Array
   - Mesh
   - Hypercube
   - Balanced Binary Tree
   - Detailed Algorithms
   - Cost Analysis
   - All-to-All Broadcast and Reduction
   - Linear Array and Ring
   - All-Reduce and Prefix-Sum Operations
   - Scatter and Gather
   - All-to-All Personalized Communication
   - Ring
   - Hypercube: An Optimal Algorithm
   - Circular Shift
   - Improving the Speed of Some Communication Operations
   - All-Port Communication
   - Summary
   - Bibliographic Remarks; Problems
5. Analytical Modeling of Parallel Programs
   - Sources of Overhead in Parallel Programs
   - Performance Metrics for Parallel Systems
   - Execution Time
   - Total Parallel Overhead
   - Speedup
   - Efficiency
   - Cost
   - The Effect of Granularity on Performance
   - Scalability of Parallel Systems
   - Scaling Characteristics of Parallel Programs
   - Cost-Optimality and the Isoefficiency Function
   - A Lower Bound on the Isoefficiency Function
   - The Degree of Concurrency and the Isoefficiency Function
   - Asymptotic Analysis of Parallel Programs
   - Other Scalability Metrics
   - Bibliographic Remarks; Problems
6. Programming Using the Message-Passing Paradigm
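Several of the Chapter 5 performance metrics listed above (speedup, efficiency, cost) reduce to one-line formulas. A minimal sketch, using hypothetical timing numbers rather than anything from the book:

```python
# Standard parallel-performance metrics, as defined in texts on
# analytical modeling of parallel programs. The timing values in the
# example are made up for illustration.

def speedup(t_serial, t_parallel):
    """S = Ts / Tp: how much faster the parallel version runs."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """E = S / p: fraction of ideal per-processor speedup achieved."""
    return speedup(t_serial, t_parallel) / p

def cost(t_parallel, p):
    """Cost = p * Tp: total processor-time consumed."""
    return p * t_parallel

# Example: 100 s serially, 16 s on 8 processors (hypothetical).
t_s, t_p, p = 100.0, 16.0, 8
print(speedup(t_s, t_p))        # 6.25
print(efficiency(t_s, t_p, p))  # 0.78125
print(cost(t_p, p))             # 128.0
```

A parallel system is cost-optimal when its cost grows at the same asymptotic rate as the best serial execution time, which is the idea behind the isoefficiency function entries above.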
Introduction to Parallel Computing, 2nd Edition