Multicore Video Series

Professional Education Resources


Course Description

This Multicore Series consists of eleven one-hour lectures by some of the world's leading researchers in the field, plus a one-hour panel discussion featuring all of the lecturers. The series is not structured as a course, but it offers an important lens into how leading researchers and professionals are leveraging multicore across a range of industries.

Who Needs to Watch These Videos?

Professionals in a wide variety of industries whose computing requirements include real-time processing, high performance, and speed will find these videos an important tool for getting better performance out of their multicore processing systems and future-proofing their applications.

Educators and graduate students will also find this series extremely helpful because it gives them a window into the minds of some of the most accomplished experts in the field.

All twelve of the videos are stimulating, challenging, and inspiring.

Self-Paced Learning:

Approximate time = 12 hours

  • PDH: 12.0
  • CEU: 1.2

Full Series Price:

  • IEEE CS Member: $195
  • Nonmember: $1,000

Individual Videos:

  • IEEE CS Member: $30
  • Nonmember: $125

See individual videos below.

For questions, please contact

Hear it First from the World’s Leading Multicore Compiler Experts

Multicore processors have become pervasive in today’s computing infrastructure, but most organizations struggle to use them efficiently. That’s why we brought together some of the world’s most renowned experts to share the innovative techniques they’ve used to improve reliability and performance while reducing costs, time, and power consumption.

Innovative Use in a Wide Range of Applications

This video series features some of the most advanced parallelization, vectorization, and power reduction technologies used in industry applications that utilize advanced multicore processors with accelerators, including automobiles (from ECU to self-driving cars), base-band communication, big data, cloud computing, cluster computing, medical image processing, multimedia, smartphones, and supercomputing.  

Power Reduction for Current and Future Applications

From the megawatts consumed by high-performance computing systems to the battery life of everyday smartphones, better power-reduction techniques are essential for today’s demanding applications. Our experts show how they’ve used parallelization to enhance performance and create opportunities to reduce power consumption through frequency and voltage control as well as per-core clock and power gating. These techniques offer tremendous potential for a wide variety of current and future applications.

New Ways to Get the Most out of Multicore Processing:


Automatic Parallelization by David Padua
Professor Padua is an internationally recognized expert on autotuning, high-level notations for parallel programming, compiler evaluation, and the efficient implementation of scripting languages. He is also a member of the editorial boards of IEEE Transactions on Parallel and Distributed Systems, the Journal of Parallel and Distributed Computing, and the International Journal of Parallel Programming.

Autoparallelization for GPUs by Wen-mei Hwu
With research interests in architecture, implementation, compilation, and algorithms for parallel computing, Wen-mei Hwu is the chief scientist of the Parallel Computing Institute and director of the IMPACT Research Group.

Dependences and Dependence Analysis by Utpal Banerjee
Utpal Banerjee's research interests lie in the general area of parallel processing. He has published four books on loop transformations and dependence analysis, with a fifth, on instruction-level parallelism, on the way.

Dynamic Parallelization by Rudolf Eigenmann
Dr. Eigenmann’s research interests include optimizing compilers, programming methodologies and tools, and performance evaluation for high-performance computers and cyberinfrastructures. He currently serves as a program director at the National Science Foundation.

Instruction Level Parallelization by Alexandru Nicolau
The author of over 300 conference and journal articles and many books, Dr. Nicolau has chaired numerous international conferences and is editor-in-chief of the International Journal of Parallel Programming, the oldest journal in the field.

Multigrain Parallelization and Power Reduction by Hironori Kasahara
Professor Kasahara has been researching the OSCAR Automatic Parallelizing and Power Reducing Compiler and the OSCAR multicore architecture for more than 30 years, and has led four Japanese national projects on parallelizing compilers, multicores, and green computing.

The Polyhedral Model by Paul Feautrier
Professor Feautrier’s research interests include computer architecture, operating systems, parallel programming, automatic parallelization, and program optimization, and he has been one of the prime movers behind the polyhedral model, an abstract representation of regular programs. 

Vector Computation by David Kuck
David Kuck is an Intel Fellow working on hardware/software co-design in Intel’s Software and Solutions Group, and is the founder of the Center for Supercomputing Research and Development. 

Vectorization by P. Sadayappan
Professor Sadayappan’s current research centers around domain-specific compiler optimization and high-performance scientific computing, including some recent projects such as the PolyOpt polyhedral optimizer in the ROSE compiler and the Tensor Contraction Engine. 

Vectorization/Parallelization in the IBM Compiler by Yaoqing Gao
Yaoqing Gao’s major interests are compilation technology, optimization and performance tuning tools, parallel programming models and languages, and computer architecture, and as an IBM Master Inventor, he has authored over 30 issued and pending patents.

Vectorization/Parallelization in the Intel Compiler by Peng Tu
Dr. Peng Tu is a Principal Engineer and manages the Technology Pathfinding engineering team in the Developer Product Division of Intel Corporation.

Roundtable Discussion by all presenters

Computing Now