Performance analysis of backup-task scheduling with deadline time in cloud computing

Abstract
In large-scale parallel job processing for cloud computing, a huge task is divided into subtasks, which are processed independently on a cluster of machines called workers. Since task processing lasts until all the subtasks are completed, a single slow worker machine lengthens the overall task-processing time, degrading the task-level throughput. To alleviate this performance degradation, MapReduce conducts backup execution, in which the master node schedules backup copies of the remaining in-progress subtasks when the whole task operation is close to completion. In this paper, we investigate the effect of backup tasks on the task-level throughput. We consider backup-task scheduling in which a backup subtask for a worker starts when the subtask-processing time of that worker reaches a deadline time. We analyze the task-level processing-time distribution by considering the maximum subtask-processing time among workers. The task throughput and the total amount of all the workers' processing times are derived when the worker-processing time (WPT) follows a hyper-exponential, Weibull, or Pareto distribution. We also propose an approximate method for deriving the performance measures based on extreme value theory. The approximations are validated by Monte Carlo simulation. Numerical examples show that the performance improvement obtained by backup tasks depends significantly on the workers' processing-time distribution.
    Mathematics Subject Classification: 62G32, 60H30.
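The deadline-based backup scheme described in the abstract can be sketched with a small Monte Carlo simulation, in the spirit of the paper's own validation method. This is a minimal illustration, not the paper's analytical model: it assumes exponentially distributed WPT (a special case of the hyper-exponential class), and the worker count, deadline, and rate used below are illustrative values chosen here, not taken from the paper.

```python
import random

def task_time(n_workers, deadline, rate, rng, backup=True):
    """One task = n_workers independent subtasks; the task finishes
    when the slowest subtask finishes (the maximum over workers).
    With backup, a copy of a subtask is launched once its processing
    time reaches the deadline, so that subtask effectively completes
    at min(original time, deadline + backup copy's time)."""
    times = []
    for _ in range(n_workers):
        x = rng.expovariate(rate)           # original subtask-processing time
        if backup and x > deadline:
            y = rng.expovariate(rate)       # backup copy's processing time
            x = min(x, deadline + y)        # whichever copy finishes first
        times.append(x)
    return max(times)                       # task-level processing time

def mean_task_time(trials, **kw):
    kw['rng'] = random.Random(1)            # fixed seed for reproducibility
    return sum(task_time(**kw) for _ in range(trials)) / trials

no_bk = mean_task_time(5000, n_workers=100, deadline=2.0, rate=1.0, backup=False)
bk = mean_task_time(5000, n_workers=100, deadline=2.0, rate=1.0, backup=True)
print(f"mean task time without backup: {no_bk:.2f}")
print(f"mean task time with backup:    {bk:.2f}")
```

For exponential WPT the extreme-value approximation mentioned in the abstract is explicit: the expected maximum of n i.i.d. Exp(lambda) variables is approximately (ln n + gamma)/lambda (gamma is the Euler-Mascheroni constant, about 0.5772), i.e. about 5.18 for n = 100 and lambda = 1, which the no-backup simulation should roughly reproduce. The gap between the two means shows how truncating the slow tail at the deadline improves the task-level processing time.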

  • [1]

    S. Ali, B. Eslamnour and Z. Shah, A case for on-machine load balancing, Journal of Parallel and Distributed Computing, 71 (2011), 556-564.doi: 10.1016/j.jpdc.2010.11.003.

    [2]

    L. A. Barroso and U. Hölzle, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Morgan & Claypool, 2009.doi: 10.2200/S00193ED1V01Y200905CAC006.

    [3]

    W. Cirne, D. Paranhos, F. Brasileiro and L. F. W. Góes, On the efficacy, efficiency and emergent behavior of task replication in large distributed systems, Parallel Computing, 33 (2007), 213-234.doi: 10.1016/j.parco.2007.01.002.

    [4]

    J. Dean and S. Ghemawat, MapReduce: Simplified data processing on large clusters, Communications of the ACM, 51 (2008), 107-113.doi: 10.1145/1327452.1327492.

    [5]

    M. Dobber, R. V. D. Mei and G. Koole, Dynamic load balancing and job replication in a global-scale grid environment: A comparison, IEEE Transactions on Parallel and Distributed Systems, 20 (2009), 207-218.doi: 10.1109/TPDS.2008.61.

    [6]

    P. Embrechets, C. Klüppelberg and T. Mikosch, Modelling Extremal Events for Insurance and Finance, Springer, Berlin, 1997.doi: 10.1007/978-3-642-33483-2.

    [7]

    T. Hirai, H. Masuyama, S. Kasahara and Y. Takahashi, Performance analysis of large-scale parallel-distributed processing with backup tasks for cloud computing, Journal of Industrial and Management Optimization, 10 (2014), 113-129.doi: 10.3934/jimo.2014.10.113.

    [8]

    W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes in C, 2nd edition, Cambridge University Press, 1992.

    [9]

    S. Resnick, Extreme Values, Regular Variation and Point Processes, Springer Series in Operations Research and Financial Engineering. Springer, New York, 2008.

    [10]

    T. White, Hadoop: The Definitive Guide, 2nd edition, O'reilly Media, California, 2008.

    [11]

    K. Wolter, Stochastic Models for Fault Tolerance: Restart, Rejuvenation, and Checkpointing, With a foreword by Aad van Moorsel. Springer, Heidelberg, 2010.doi: 10.1007/978-3-642-11257-7.
