Programming Massively Parallel Processors with CUDA
By Stanford University
Description
Virtually all semiconductor market domains, including PCs, game consoles, mobile handsets, servers, supercomputers, and networks, are converging on concurrent platforms. There are two important reasons for this trend. First, concurrent processors can potentially offer more effective use of chip area and power than traditional monolithic microprocessors for many demanding applications. Second, an increasing number of applications that traditionally used Application-Specific Integrated Circuits (ASICs) are now implemented on concurrent processors to improve functionality and reduce engineering cost. The real challenge is developing application software that uses these concurrent processors effectively enough to meet efficiency and performance goals.

The aim of this course is to give students knowledge of, and hands-on experience in, developing application software for processors with massively parallel computing resources. In general, we call a processor massively parallel if it can complete more than 64 arithmetic operations per clock cycle; commercial processors from NVIDIA, AMD, and Intel already provide this level of concurrency. Programming these processors effectively requires in-depth knowledge of parallel programming principles, as well as of the parallelism models, communication models, and resource limitations of the processors themselves. The course targets students who want to develop exciting applications for these processors, as well as those who want to develop programming tools and future implementations for these processors. Visit the CS193G companion website for course materials.
# | Title | Description | Released | Price |
---|---|---|---|---|
1 | Video16. Parallel Sorting (April 20, 2010) | Michael Garland of NVIDIA Research discusses parallel sorting methods that make searching, categorizing, and building data structures easier; a Thrust sorting sketch follows the table. (April 20, 2010) | 2010/6/9 | Free |
2 | Video15. Optimizing Parallel GPU Performance (May 20, 2010) | John Nickolls discusses how to optimize parallel GPU performance. (May 20, 2010) | 2010/6/9 | Free |
3 | Video14. Path Planning System on the GPU (May 18, 2010) | Avi Bleiweiss delivers a lecture on a path planning system on the GPU. (May 18, 2010) | 2010/6/9 | Free |
4 | Video6. Parallel Patterns I (April 15, 2010) | David Tarjan begins a two-part discussion of common parallel patterns, continued in Video7. (April 15, 2010) | 2010/5/27 | Free |
5 | Video12. NVIDIA OptiX: Ray Tracing on the GPU (May 11, 2010) | Steven Parker, Director of High Performance Computing and Computational Graphics at NVIDIA, speaks about ray tracing. (May 11, 2010) | 2010/5/26 | Free |
6 | Video13. Future of Throughput (May 13, 2010) | William Dally guest-lectures on the end of denial architecture and the rise of throughput computing. (May 13, 2010) | 2010/5/26 | Free |
7 | Video11. The Fermi Architecture (May 6, 2010) | Michael C. Shebanow, Principal Research Scientist with NVIDIA Research, talks about the new Fermi architecture. This next-generation CUDA architecture, code-named "Fermi," is the most advanced GPU computing architecture ever built. (May 6, 2010) | 2010/5/17 | Free |
8 | Video10. Solving Partial Differential Equations with CUDA (May 4, 2010) | Jonathan Cohen, a Senior Research Scientist at NVIDIA Research, talks about solving partial differential equations with CUDA. (May 4, 2010) | 2010/5/17 | Free |
9 | Video9. Sparse Matrix Vector Operations (April 29, 2010) | Nathan Bell of NVIDIA Research talks about sparse matrix-vector multiplication on throughput-oriented processors; a minimal CSR kernel sketch follows the table. (April 29, 2010) | 2010/5/17 | Free |
10 | Video8. Introduction to Thrust (April 27, 2010) | Nathan Bell of NVIDIA Research talks about Thrust, a productivity library for CUDA; see the Thrust sketch after the table. (April 27, 2010) | 2010/5/17 | Free |
11 | Video7. Parallel Patterns II (April 22, 2010) | David Tarjan continues his discussion of parallel patterns. (April 22, 2010) | 2010/5/17 | Free |
12 | Video5. Performance Considerations (April 13, 2010) | Lukas Biewald of Dolores Labs discusses performance considerations, including memory coalescing, shared memory bank conflicts, control-flow divergence, occupancy, and kernel launch overheads. (April 13, 2010) | 2010/4/21 | Free |
13 | Video4. CUDA Memories (April 8, 2010) | Jared Hoberock of NVIDIA lectures on CUDA memory spaces for CS 193G: Programming Massively Parallel Processors. (April 8, 2010) | 2010/4/21 | Free |
14 | Video3. CUDA Threads & Atomics (April 6, 2010) | Atomic operations in CUDA and the associated hardware are discussed; a minimal atomicAdd sketch follows the table. (April 6, 2010) | 2010/4/15 | Free |
15 | Video2. Introduction to CUDA (April 1, 2010) | An introduction to CUDA programming: launching kernels, threads and thread blocks, memory management, and the first machine problems (MP0 and MP1); the vector-addition sketch after the table illustrates these basics. (April 1, 2010) | 2010/4/14 | Free |
16 | Video1. Introduction to Massively Parallel Computing (March 30, 2010) | Jared Hoberock of NVIDIA gives the introductory lecture to CS 193G: Programming Massively Parallel Processors. (March 30, 2010) | 2010/4/9 | Free |
16 items
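
The feed carries only the lecture videos; as a flavor of the material, the sketches below reconstruct the kinds of examples the lectures cover. They are illustrative, not course code. First, for the introductory lectures (Video1, Video2): a minimal CUDA program that launches a vector-addition kernel, with explicit device memory management as in the early machine problems. All names here (vecAdd and the rest) are hypothetical.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical intro-level example: each thread computes one element
// of c = a + b, identified by its block and thread indices.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard: the grid may overshoot n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // also synchronizes
    printf("hc[0] = %f\n", hc[0]);                      // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```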
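For Video3, a sketch of the atomics idea: in a histogram, many threads may hit the same bin at once, and a plain bins[data[i]]++ would lose updates; atomicAdd makes the read-modify-write indivisible. This is a hypothetical illustration, not the lecture's code.

```cuda
// Hypothetical histogram kernel illustrating CUDA atomics.
// bins must point to 256 zero-initialized counters in device memory.
__global__ void histogram256(const unsigned char *data, int n,
                             unsigned int *bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);  // indivisible increment of one bin
}
```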
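For Video8 and Video16, Thrust and parallel sorting meet in a few lines; the sketch below uses only the public Thrust API that ships with the CUDA toolkit (thrust::host_vector, thrust::device_vector, thrust::sort).

```cuda
#include <cstdio>
#include <cstdlib>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>

int main() {
    // Fill a host vector with random keys.
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i)
        h[i] = rand();

    thrust::device_vector<int> d = h;  // copy host to device
    thrust::sort(d.begin(), d.end());  // parallel sort on the GPU

    int smallest = d.front();          // copies one key back to the host
    printf("smallest key: %d\n", smallest);
    return 0;
}
```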
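Finally, for Video9: the lecture draws on Bell and Garland's work on sparse matrix-vector multiplication for throughput-oriented processors. The simplest kernel in that family assigns one thread per row of a CSR matrix; the sketch below is that scalar-CSR idea in minimal form, not the lecture's tuned code.

```cuda
// One thread per row of a CSR-format sparse matrix computes y = A * x.
// rowPtr has rows + 1 entries delimiting each row's slice of
// colIdx/vals. The gathers from x are irregular, which is exactly
// the memory-performance problem such lectures analyze.
__global__ void spmvCsrScalar(int rows, const int *rowPtr,
                              const int *colIdx, const float *vals,
                              const float *x, float *y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows) {
        float sum = 0.0f;
        for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
            sum += vals[j] * x[colIdx[j]];
        y[row] = sum;
    }
}
```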