- Tera. R. Alverson et al. The Tera computer system. Proc. Int. Conf. on Supercomputing, 1990, 1-6.
- Bl90. G.E. Blelloch. Vector Models for Data-Parallel Computing. MIT Press, 1990.
- Bl96. G.E. Blelloch. Programming parallel algorithms. CACM, 39(3), 1996, 85-97.
- BGMN. G.E. Blelloch, P.B. Gibbons, Y. Matias and G.J. Narlikar. Space-efficient scheduling of parallelism with synchronization variables. Proc. 9th SPAA, 1997, 12-22.
- BL93. R.D. Blumofe and C.E. Leiserson. Space-efficient scheduling of multi-threaded computations. Proc. 25th STOC, 1993, 362-371.
- CZ89. R. Cole and O. Zajicek. The APRAM: incorporating asynchrony into the PRAM model. Proc. 1st SPAA, 1989, 169-178.
- Fl96. M.J. Flynn. Parallel processors were the future... and may yet be. Computer, 29(12), December 1996, 151-152.
- Fr93. M. Franklin. The Multiscalar Architecture. Doctoral dissertation, Department of Computer Science, University of Wisconsin, 1993.
- G89. P.B. Gibbons. A more practical PRAM model. Proc. 1st SPAA, 1989, 158-168.
- HP96. J.L. Hennessy and D.A. Patterson. Computer Architecture: A Quantitative Approach, Second Edition. Morgan Kaufmann, San Mateo, California, 1996.
- Ja92. J. JáJá. An Introduction to Parallel Algorithms. Addison-Wesley, Reading, MA, 1992.
- Ke96. C.W. Kessler. Quick reference guides: (i) Fork95, and (ii) SB-PRAM: Instruction set simulator system software. U. Trier, FB IV, D-54286 Trier, Germany, 1996.
- M96. T. Mudge. Strategic directions in computer architecture. ACM Computing Surveys, 28(4), 1996, 671-678.
- P+97. D. Patterson et al. Intelligent RAM (IRAM): Chips that remember and compute. 1997 IEEE Int. Solid-State Circuits Conf., San Francisco, CA, February 1997.
- PH94. D.A. Patterson and J.L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann, San Mateo, California, 1994.
- SCDEFT97. M. Schlansker et al. Compilers for instruction-level parallelism. Computer, 30(12), 1997, 63-69.
- Si97. J.F. Sibeyn. From parallel to external list ranking. TR MPI-I-97-1-021, Saarbrücken, Germany, 1997.
- TEL95. D.M. Tullsen, S.J. Eggers, and H.M. Levy. Simultaneous multithreading: maximizing on-chip parallelism. Proc. 22nd ISCA, 1995.
- Va90. L.G. Valiant. A bridging model for parallel computation. CACM, 33(8), 1990.
- Vi84. U. Vishkin. Randomized speed-ups in parallel computation. Proc. 16th STOC, 1984, 230-239.
- Vi84a. U. Vishkin. Parallel-Design Distributed-Implementation (PDDI) general purpose computer. TCS, 32, 1984, 157-172.
- Vi97. U. Vishkin. From algorithm parallelism to instruction-level parallelism: An encode-decode chain using prefix-sum. Proc. 9th SPAA, 1997, 260-271.