The need for Big Data platforms has grown steadily in recent years, given the volume of data produced and consumed every second by millions of users and machines; this huge volume of data must be processed, managed, and stored. Several constraints must be taken into account when allocating and processing this data on Big Data platforms, and time and budget remain among the major concerns of Big Data clients, who constantly seek to reduce their costs. Time is one of the key factors that determines the performance of a Big Data platform's processing model, and it directly affects the other allocation constraints. In this paper, we conduct an analytical study of the performance of MapReduce, the processing model of the Hadoop platform. Our study shows that estimating MapReduce performance remains difficult and depends not only on the scheduler used but also on other factors, including the type of workload itself.