Massively parallel algorithms for trace-driven cache simulations

Published by the National Aeronautics and Space Administration, Langley Research Center, Hampton, Va.; distributed by the National Technical Information Service, Springfield, Va.
Written in English

Subjects:

  • Cache memory -- Computer simulation.

Book details:

Edition Notes

Other titles: Massively parallel algorithms for trace driven cache simulations.
Statement: David M. Nicol, Albert G. Greenberg, Boris D. Lubachevsky.
Series: ICASE report no. 91-83; NASA contractor report NASA CR-189571.
Contributions: Greenberg, Albert G.; Lubachevsky, Boris D.; Langley Research Center.

The Physical Object

Format: Microform
Pagination: 1 v.

ID Numbers

Open Library: OL15369963M

Massively parallel algorithms for trace-driven cache simulations. By Albert G. Greenberg, Boris D. Lubachevsky, and David M. Nicol.

Abstract: Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of the cache.

We present massively parallel algorithms for both problems that are optimal in the following senses: the approximation factor of the algorithms is 1 + ε, their round complexity is constant, and their total running time over all machines is Õ(n²). The underlying model is Massively Parallel Computation (MPC), first introduced in [KSV10] and later refined in [ANOY14, BKS13, GSZ11]. No prior knowledge of parallel algorithms or computing is assumed; the only prerequisite is comfort with randomized algorithms, e.g., having taken a course in Algorithms, Probability, and Computing.
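As context for the abstract's core loop (hash each reference x_t into a cache set and check for a hit), here is a minimal sequential sketch of a trace-driven, set-associative LRU cache simulator. It is illustrative only: the class, function, and parameter names (SetAssociativeLRUCache, simulate, num_sets, associativity) are assumptions, not taken from the report.

```python
from collections import OrderedDict


class SetAssociativeLRUCache:
    """Minimal set-associative cache with LRU replacement (illustrative only)."""

    def __init__(self, num_sets, associativity):
        self.num_sets = num_sets
        self.associativity = associativity
        # One ordered dict per set; insertion order tracks recency of use.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, line_address):
        """Simulate one reference; return True on a hit, False on a miss."""
        s = self.sets[line_address % self.num_sets]   # hash the reference into a set
        if line_address in s:
            s.move_to_end(line_address)               # hit: mark as most recently used
            return True
        if len(s) >= self.associativity:
            s.popitem(last=False)                     # miss in a full set: evict the LRU line
        s[line_address] = None
        return False


def simulate(trace, num_sets=64, associativity=4):
    """Run the whole trace sequentially and report the miss ratio."""
    cache = SetAssociativeLRUCache(num_sets, associativity)
    misses = sum(0 if cache.access(x) else 1 for x in trace)
    return misses / len(trace)


if __name__ == "__main__":
    # Tiny synthetic trace; real traces are very long sequences of line addresses.
    trace = [0, 64, 0, 128, 192, 0, 64]
    print(f"miss ratio = {simulate(trace):.2f}")
```

Since references that map to different sets never affect each other under per-set LRU replacement, one simple and well-known route to parallel simulation is to partition the trace by set index and simulate the sets independently; the nonconventional methods described below instead aim for speed-ups that are not limited by the number of simulated components.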

  In this paper, we focus on the description and scalability of the parallel solver algorithm, leaving the model formulation and large massively parallel simulations of different atmospheric phenomena for future work. That is a challenging task in itself, requiring reliable data for the initial state, forcing, and boundary conditions. Nonconventional parallel simulation methods are presented in which speed-ups are not limited by the number of simulated components. The methods capitalize on Chandy and Sherman's space-time relaxation. Related entries: Massively Parallel Algorithms for Fluid-Structure Interaction with Moving Objects, Lehrstuhl für Informatik 10 (Systemsimulation), Universität Erlangen-Nürnberg, Workshop on Computational Engineering on Fluid-Structure Interaction: Modelling, Simulation, Optimisation, Herrsching, October; Massively parallel algorithms for trace-driven cache simulations, IEEE Transactions on Parallel and Distributed Systems; An efficient parallel algorithm for the single function coarsest partition problem.
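To make the space-time relaxation idea mentioned above concrete, here is a hedged sketch of time-parallel simulation applied to the LRU cache from the earlier example: the trace is cut into segments, each segment is simulated from a guessed starting state, and the guesses are refined until they match the states the preceding segments actually produce. This illustrates the general relaxation principle only, not the algorithm from the report; run_segment, time_parallel_simulate, and the segment count are hypothetical names and parameters.

```python
def run_segment(state, segment, num_sets, associativity):
    """Simulate one trace segment starting from a given cache state.

    state is a tuple of per-set tuples, ordered least- to most-recently-used.
    Returns the resulting state and the number of misses in the segment.
    """
    sets = [list(s) for s in state]
    misses = 0
    for x in segment:
        s = sets[x % num_sets]
        if x in s:
            s.remove(x)                 # hit: will be re-appended as most recent
        else:
            misses += 1
            if len(s) >= associativity:
                s.pop(0)                # miss in a full set: evict the LRU line
        s.append(x)
    return tuple(tuple(s) for s in sets), misses


def time_parallel_simulate(trace, num_segments=4, num_sets=8, associativity=2):
    """Space-time-relaxation-style simulation: segments start from guessed
    states, and the guesses are refined until they agree with the states the
    preceding segments actually produce.  In a real parallel run, the calls
    to run_segment inside each sweep would execute concurrently."""
    n = len(trace)
    bounds = [(i * n // num_segments, (i + 1) * n // num_segments)
              for i in range(num_segments)]
    empty = tuple(() for _ in range(num_sets))
    guesses = [empty] * num_segments         # initial guess: every segment starts cold
    while True:
        results = [run_segment(guesses[i], trace[lo:hi], num_sets, associativity)
                   for i, (lo, hi) in enumerate(bounds)]
        refined = [empty] + [state for state, _ in results[:-1]]
        if refined == guesses:               # fixed point: all guesses are consistent
            return sum(m for _, m in results) / n
        guesses = refined


if __name__ == "__main__":
    trace = [0, 1, 8, 0, 9, 1, 16, 8, 0, 17, 9, 1]
    print(f"miss ratio = {time_parallel_simulate(trace):.2f}")
```

Because segment 0 always starts from the true (empty) state, the correct state propagates forward by at least one segment per sweep, so this relaxation converges in at most as many sweeps as there are segments, and the miss counts reported at the fixed point match the sequential simulation.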

Massively Parallel Algorithms for Trace-Driven Cache Simulation, by David M. Nicol et al., appeared in the 6th Workshop on Parallel and Distributed Simulation. Associated terms: antimessages, application, artificial rollback, benchmarks, block, bound, buffer, cache, cancelback, causality, Chandy, checkpoint interval, circuits, clock, CMB algorithm, communications latency, concurrency, conservative.

The implementation of artificial neural networks (ANNs) on powerful parallel computer hardware is closely tied to the simulation of ANNs on general-purpose computers. Although there are many good reasons for a parallel implementation, there has also always been a good deal of scepticism.

Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources, so there have been many attempts to develop platforms.

The authors consider the problem of using a parallel computer to execute discrete-event simulation of timed Petri nets. They first develop synchronization and simulation algorithms for this task.
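The closing snippet concerns discrete-event simulation of timed Petri nets. As a baseline for what such parallel algorithms accelerate, here is a minimal sequential event-driven sketch under simple assumed semantics (tokens are consumed when a transition starts firing, output tokens arrive after a fixed delay); the function simulate_timed_petri_net and its parameters are illustrative, and the synchronization algorithms the authors develop for parallel execution are not shown.

```python
import heapq


def simulate_timed_petri_net(places, transitions, horizon):
    """Minimal event-driven simulation of a timed Petri net (illustrative).

    places:      dict mapping place name -> initial token count
    transitions: list of (inputs, outputs, delay); inputs/outputs are lists
                 of place names, delay is the transition's firing time
    horizon:     simulated time at which to stop
    """
    marking = dict(places)
    clock = 0.0
    events = []        # min-heap of (completion_time, output_places)
    firings = 0

    def fire_enabled():
        # Start every transition whose input places all hold a token;
        # tokens are consumed immediately, outputs arrive after the delay.
        nonlocal firings
        progress = True
        while progress:
            progress = False
            for inputs, outputs, delay in transitions:
                if all(marking[p] > 0 for p in inputs):
                    for p in inputs:
                        marking[p] -= 1
                    heapq.heappush(events, (clock + delay, tuple(outputs)))
                    firings += 1
                    progress = True

    fire_enabled()
    while events:
        time, outputs = heapq.heappop(events)
        if time > horizon:
            break
        clock = time                      # advance to the next completion
        for p in outputs:
            marking[p] += 1               # deposit the output tokens
        fire_enabled()
    return marking, firings


if __name__ == "__main__":
    # Two places whose single token shuttles back and forth with delays 2 and 3.
    places = {"A": 1, "B": 0}
    transitions = [(["A"], ["B"], 2.0), (["B"], ["A"], 3.0)]
    print(simulate_timed_petri_net(places, transitions, horizon=20.0))
```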