Physical Sciences › Computer Science › Hardware and Architecture

Parallel Computing and Optimization Techniques

Modern processors stopped getting faster by simply raising clock speeds, so squeezing more performance out of hardware now means running many computations at the same time across multiple cores, specialized chips like GPUs, and carefully designed memory hierarchies. Researchers study how to map software tasks onto these parallel resources efficiently, how data moves between processors and memory without becoming a bottleneck, and how to do all of this without consuming excessive power. Open questions include how to build programming models and compilers that make heterogeneous hardware — CPUs, GPUs, and custom accelerators working together — easier to use without sacrificing performance, and how to measure and compare systems fairly when workloads and architectures are growing more diverse. Simulation platforms and benchmarking methodologies remain active areas of work precisely because understanding where time and energy are actually spent is a prerequisite for making things faster.
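The task-mapping idea described above can be sketched in a few lines: split a data-parallel reduction into per-worker chunks and combine the partial results. This is an illustrative sketch only; the function names and chunking scheme are assumptions, not any particular framework's API.

```python
# Minimal sketch: map a data-parallel reduction onto multiple workers.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each worker reduces its own contiguous slice, so workers need no
    # communication until the final combine step.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Contiguous chunks keep each worker's memory accesses streaming,
    # which matters on real hardware where bandwidth can be the bottleneck.
    step = -(-len(data) // workers) or 1  # ceiling division; at least 1
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    # ThreadPoolExecutor keeps the sketch portable and self-contained; for
    # CPU-bound pure-Python work one would typically use processes or native
    # code because of the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))
```

The same decomposition pattern is what an OpenMP `parallel for` loop or a GPU kernel launch performs implicitly; the open questions noted above concern automating that mapping well across heterogeneous devices.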

Works: 200,175
Total citations: 2,312,532
Keywords: Parallel Computing, Performance Optimization, GPU Computing, Multicore Architectures, Memory Systems, Benchmarking

Top papers in Parallel Computing and Optimization Techniques

Ordered by total citation count.

Active researchers

Top authors in this area, ranked by h-index.

Related topics