MASc seminar - Fadwa Abdulhalim

Thursday, December 10, 2015 10:00 am - 10:00 am EST (GMT -05:00)


Fadwa Abdulhalim


Modeling Power Consumption of Applications Software Running on Servers


Kshirasagar Naik


Today, demand for computing resources has grown manifold, driving up both the maintenance cost of data centers and the cost of the power needed to run them; together these amount to a significant portion of annual budgets. The software engineering community therefore faces the challenge of ever-rising energy demands, a challenge that can only be addressed if engineers are able to measure and predict the energy consumption of applications software. Ideally, developers would predict the energy an application will consume when executed, so that the software can be optimized while still in the design phase. In practice, however, it is difficult to estimate the power cost of applications until they are run on real servers. The main driver of an application's energy consumption is its demand for resources: when the workload is heavy and demand exceeds what the hardware can readily supply, the system is bound to consume more energy. Such cases call for trade-off decisions; for example, a choice must be made between enhancing performance and minimizing energy consumption. To support such trade-off decisions, we propose a modeling procedure that predicts how much energy a particular application will consume. Energy consumption can be defined as power demand integrated over time, and the proposed modeling procedure is capable of predicting the power demand of software over time. In addition, through this modeling procedure, a developer can estimate system power consumption without the need for an actual power meter device.
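The definition above, energy as power demand integrated over time, can be illustrated with a small numeric sketch. This is not code from the thesis; the function name and the sample readings are purely illustrative, and the integral is approximated from discrete power samples with the trapezoidal rule.

```python
# Hypothetical sketch (not from the thesis): energy as power integrated
# over time, approximated from sampled power readings with the
# trapezoidal rule. Sample values are illustrative only.

def energy_wh(samples, interval_s):
    """Integrate power samples (watts) taken every interval_s seconds,
    returning energy in watt-hours."""
    if len(samples) < 2:
        return 0.0
    # Trapezoidal rule: average adjacent samples, weight by the interval.
    joules = sum((a + b) / 2.0 * interval_s
                 for a, b in zip(samples, samples[1:]))
    return joules / 3600.0  # 1 Wh = 3600 J

# A server drawing a steady 100 W for one hour consumes 100 Wh.
readings = [100.0] * 61          # one reading per minute, inclusive
print(energy_wh(readings, 60))   # -> 100.0
```

The same accumulation over a stream of predicted power values is what lets a power model over time stand in for a physical meter.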

This work provides an energy performance evaluation and a power consumption estimation method for applications running on a server, based on performance counters. Counter data for various performance indicators are collected using the CollectD tool. Simultaneously, during each test, a power meter (TED5000) monitors the actual power drawn by the server. Furthermore, stress tests are performed to examine power fluctuations in response to the performance counts of four hardware subsystems: CPU, memory, disk, and network interface. A neural network model (NNM) and a linear polynomial model (LPM) have been developed from the counter data gathered by CollectD. These two models have been validated under four different scenarios running on three different platforms (three real servers). Our experimental results show that system power consumption can be estimated with an average MAE (mean absolute error) between 11% and 15% on newer servers, and between 1% and 4% on older servers. We also find that the NNM produces better estimates than the LPM, yielding a 1.5% reduction in the MAE of energy estimation.
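The modeling-and-evaluation loop described above can be sketched in miniature. This is a hypothetical illustration, not the thesis implementation: it fits a one-counter linear model (power versus CPU utilization, standing in for the multi-counter LPM) by ordinary least squares, then scores it with the mean absolute error expressed as a percentage of measured power. All names and data values are invented for the example.

```python
# Hypothetical sketch (names and data are illustrative, not from the
# thesis): fit a simple linear model relating one performance counter
# (CPU utilization) to measured power, then score it with the mean
# absolute error as a percentage of the measured power.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def mae_percent(ys, preds):
    """Mean absolute error as a percentage of the measured values."""
    return 100.0 * sum(abs(y - p) / y for y, p in zip(ys, preds)) / len(ys)

# Synthetic, noise-free data: 80 W idle plus 1.2 W per percent CPU.
cpu = [0.0, 25.0, 50.0, 75.0, 100.0]
power = [80.0 + 1.2 * c for c in cpu]

a, b = fit_linear(cpu, power)
preds = [a + b * c for c in cpu]
print(round(mae_percent(power, preds), 6))  # -> 0.0 on noise-free data
```

On real counter traces the fit is of course imperfect, and the resulting MAE percentage is the kind of figure reported above (11–15% on newer servers, 1–4% on older ones).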

The detailed contributions of the thesis are as follows: (i) a practical approach to extracting system performance counters and reducing them to model parameters; (ii) a modeling procedure, proposed and implemented, for predicting the power cost of application software from performance counters; and (iii) an experiment contrasting energy consumption behaviors across different models, platforms, and load scenarios. All of our contributions and the proposed procedure have been validated with numerous measurements on a real test bench. In summary, the results of this work can be used by application developers to make implementation-level decisions that affect the energy efficiency of software applications.