| Energy/power control method | Works | Description |
|---|---|---|
| (1) Selection of devices/scheduling | [73] | Selection of devices in a cluster or collection of clusters such that a maximum power consumption limit is respected, together with data partitioning and scheduling of computations |
| | [35] | Selection of cores for a configuration minimizing energy consumption |
| | [75] | Using GPUs for optimization/generation of parity data |
| | [39] | Selection of the best GPU architectures from the performance/energy-usage point of view |
| | [58] | Specific scheduling and switching off unused cluster nodes |
| | [33, 71] | Task partitioning and scheduling |
| | [44] | Task scheduling; a two-stage energy-efficient, temperature-aware task scheduling algorithm is proposed: the first stage improves dynamic energy consumption under task deadlines, the second the temperature profiles of processors |
| | [76] | Application assignment to virtual and physical nodes of the cloud |
| | [66] | Workload placement in a data center |
| | [68] | Proposal of RMAP, a resource manager that minimizes average turnaround time for jobs and provides an adaptive policy supporting overprovisioning and power-aware backfilling |
| (2) DVFS/DFS/DCT | [49] | For MPI applications, with the goal of not impacting performance |
| | [47] | Uniform-frequency power limiting; investigates results for the fixed-frequency mode, a minimum power level assigned to a job, and an automatic mode that considers available power |
| | [45] | Core and uncore frequency scaling of CPUs |
| | [55, 56] | Minimization of energy usage through DVFS on particular nodes |
| | [52] | DFS, DCT |
| | [14] | DVFS, DCT |
| | [37] | Control of frequency on a GPU |
| | [41] | DVFS with dynamic detection of computation phases (memory- and CPU-bound) |
| | [59] | DVFS with a posteriori (log-based) detection and prioritization of computation phases (memory- and CPU-bound) |
| | [61] | Uses the sysfs interface |
| | [63, 64] | DCT, combined DVFS/DCT |
| | [65] | Uses the sysfs interface |
| | [72] | Setting the frequency according to established computing-center policies |
| (3) Power capping | [24] | Uses Intel RAPL for power management |
| | [40] | Uses Intel RAPL for analyzing energy/performance trade-offs with power capping for parallel applications on modern multi- and manycore processors |
| | [42] | Uses PAPI and Intel RAPL |
| | [62] | Uses Intel RAPL |
| | [46] | Uses Intel's power governor tool and Intel RAPL |
| (4) Application optimizations | [54] | Theoretical consideration of application optimizations that result in improvement of performance counter values |
| | [36] | Finding an optimal GPU configuration (in terms of the number of threads per block and the number of blocks) |
| | [53, 57] | Control of CPU frequency, spinning down the disk, and network speed scaling |
| | [43] | Exploration of various loop scheduling schemes, chunk sizes, optimization levels, and thread counts |
| (5) Hybrid | [30] | Software + RAPL; the proposed PUPiL approach combines the fast reaction time of hardware with the flexibility of a software approach |
| | [48] | Scheduling/software + resource management (including RAPL); the proposed algorithm takes real power and energy consumption into account |
| | [34] | Concurrent kernel execution + DVFS |
| | [50, 74] | Scheduling + DVFS |
| | [38] | Scheduling + DVS for minimization of temperature and meeting task deadlines |
| | [51] | Scheduling of jobs, management of resources, and DVFS |
| | [77] | Selection of resources for a given user request, with VM migration and putting unused machines into sleep mode |
| | [60] | Workload distribution + DVFS-based multiobjective optimization |
| | [69] | Polling, interrupt-driven execution (relinquishing the CPU and waiting on a network event), and DVFS power levers |
| | [70] | Selection of nodes in an overprovisioned HPC cluster + Intel RAPL |
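The device/scheduling approaches in category (1) typically select a subset of resources so that a cluster-wide power limit is respected. A minimal sketch of such a selection, using a greedy performance-per-watt heuristic, is shown below; the device names and power/performance numbers are entirely hypothetical, not taken from any of the cited works.

```python
# Illustrative sketch: greedy selection of devices under a cluster-wide
# power cap, preferring devices with the best performance per watt.
# All device names and numbers below are hypothetical.

def select_devices(devices, power_cap_w):
    """Pick devices by descending perf/watt until the cap would be exceeded."""
    chosen = []
    used_w = 0.0
    for dev in sorted(devices, key=lambda d: d["gflops"] / d["watts"], reverse=True):
        if used_w + dev["watts"] <= power_cap_w:
            chosen.append(dev["name"])
            used_w += dev["watts"]
    return chosen, used_w

devices = [
    {"name": "gpu0", "gflops": 7000, "watts": 250},
    {"name": "gpu1", "gflops": 7000, "watts": 250},
    {"name": "cpu0", "gflops": 1000, "watts": 120},
    {"name": "cpu1", "gflops": 500, "watts": 120},
]

chosen, used = select_devices(devices, power_cap_w=600)
```

Real schedulers (e.g. [73], [68]) additionally handle job deadlines, data partitioning, and backfilling; the greedy knapsack-style pass above only illustrates the cap-respecting selection step.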
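Several category (2) works ([61], [65]) drive DVFS through the Linux cpufreq sysfs interface. The sketch below builds the standard sysfs path and value for capping a core's maximum frequency; the actual write requires root privileges and a cpufreq driver, so it is kept in a separate, uncalled helper.

```python
# Sketch of per-core frequency control through the Linux cpufreq sysfs
# interface. Writing these files requires root and an active cpufreq
# driver; here we only construct the path/value so the logic is testable.

def cpufreq_setting(cpu, khz):
    """Return the sysfs path and value (kHz) for a max-frequency cap."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq"
    return path, str(khz)

def apply_setting(path, value):
    # Actual write; needs privileges, so it is not executed in this sketch.
    with open(path, "w") as f:
        f.write(value)

path, value = cpufreq_setting(cpu=2, khz=1_800_000)
```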
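The power-capping works in category (3) rely on Intel RAPL, which on Linux is exposed through the powercap sysfs framework. A minimal sketch of setting a package power limit follows; RAPL limits are expressed in microwatts, and, as with cpufreq, the actual write needs root, so only path and value construction is shown.

```python
# Sketch of package-level power capping via the Linux powercap
# (intel-rapl) sysfs interface. Only path/value construction is shown;
# writing the file requires root privileges.

def rapl_cap(package, watts):
    """Return the powercap constraint file and its value in microwatts."""
    path = (f"/sys/class/powercap/intel-rapl/intel-rapl:{package}"
            f"/constraint_0_power_limit_uw")
    return path, str(int(watts * 1_000_000))

path, value = rapl_cap(package=0, watts=95)
```

Tools such as PAPI ([42]) and Intel's power governor ([46]) wrap the same RAPL counters and limits behind higher-level APIs.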
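Category (4) covers application-level tuning such as the sweep over loop scheduling schemes, chunk sizes, and thread counts in [43]. The sketch below enumerates configurations and picks the one minimizing a made-up energy model; in a real study the model would be replaced by measurements (e.g. RAPL energy counters).

```python
# Sketch of an autotuning-style sweep over thread counts and chunk sizes.
# The energy model is a hypothetical stand-in for real measurements.
import itertools

def mock_energy(threads, chunk):
    # Made-up model: serial-fraction cost, per-thread overhead,
    # and per-chunk scheduling overhead.
    return 100.0 / threads + 0.01 * threads + 50.0 / chunk

def best_config(thread_counts, chunk_sizes):
    configs = itertools.product(thread_counts, chunk_sizes)
    return min(configs, key=lambda c: mock_energy(*c))

best = best_config([1, 2, 4, 8], [16, 64, 256])
```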
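Finally, the hybrid approaches in category (5) combine a fast hardware mechanism with a slower, more flexible software loop. The sketch below is loosely in the spirit of PUPiL [30] but is not its algorithm: it assumes a hardware cap (e.g. RAPL) already enforces the power limit, while a simulated software loop hill-climbs on the thread count using throughput feedback. All measurements and names are illustrative.

```python
# Sketch of a hybrid software loop over a hardware power cap: the cap
# bounds power quickly in hardware, while software adjusts parallelism
# toward better throughput. Throughput samples here are simulated.

def software_step(threads, throughput, history):
    """Hill-climbing on thread count: back off when throughput drops."""
    if history and throughput < history[-1]:
        return max(1, threads - 1)   # last change hurt; back off
    return threads + 1               # otherwise try more parallelism

threads, history = 4, []
for measured in [10.0, 12.0, 11.0]:   # simulated throughput samples
    new_threads = software_step(threads, measured, history)
    history.append(measured)
    threads = new_threads
```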
|