- 1 J. R. Allen. Dependence Analysis for Subscripted Variables and Its Application to Program Transformations. PhD thesis, Rice University, April 1983.
- 2 J. R. Allen. Unifying vectorization, parallelization, and optimization: The Ardent compiler. In L. Kartashev and S. Kartashev, editors, Proceedings of the Third International Conference on Supercomputing, 1988.
- 3 J. R. Allen and K. Kennedy. PFC: A program to convert Fortran to parallel form. In Supercomputers: Design and Applications, pages 186-205. IEEE Computer Society Press, Silver Spring, MD, 1984.
- 4 J. R. Allen and K. Kennedy. Automatic translation of Fortran programs to vector form. ACM Transactions on Programming Languages and Systems, 9(4):491-542, October 1987.
- 5 Z. Ammarguellat and W. Harrison. Automatic recognition of induction variables and recurrence relations by abstract interpretation. In Proceedings of the ACM SIGPLAN '90 Conference on Program Language Design and Implementation, White Plains, NY, June 1990.
- 6 U. Banerjee. Data dependence in ordinary programs. Master's thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, November 1976. Report No. 76-837.
- 7 U. Banerjee. Speedup of ordinary programs. PhD thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, October 1979. Report No. 79-989.
- 8 U. Banerjee. Dependence Analysis for Supercomputing. Kluwer Academic Publishers, Boston, MA, 1988.
- 9 M. Burke and R. Cytron. Interprocedural dependence analysis and parallelization. In Proceedings of the SIGPLAN '86 Symposium on Compiler Construction, Palo Alto, CA, July 1986.
- 10 D. Callahan. Dependence testing in PFC: Weak separability. Supercomputer Software Newsletter 2, Dept. of Computer Science, Rice University, August 1986.
- 11 D. Callahan, S. Carr, and K. Kennedy. Improving register allocation for subscripted variables. In Proceedings of the ACM SIGPLAN '90 Conference on Program Language Design and Implementation, White Plains, NY, June 1990.
- 12 D. Callahan, K. Cooper, R. Hood, K. Kennedy, and L. Torczon. ParaScope: A parallel programming environment. The International Journal of Supercomputer Applications, 2(4), Winter 1988.
- 13 D. Callahan, J. Dongarra, and D. Levine. Vectorizing compilers: A test suite and results. In Proceedings of Supercomputing '88, Orlando, FL, November 1988.
- 14 W. Cohagan. Vector optimization for the ASC. In Proceedings of the Seventh Annual Princeton Conference on Information Sciences and Systems, Princeton, NJ, March 1973.
- 15 S. Cook. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing, New York, NY, 1971.
- 16 G. Cybenko, L. Kipp, L. Pointer, and D. Kuck. Supercomputer performance evaluation and the Perfect benchmarks. In Proceedings of the 1990 ACM International Conference on Supercomputing, Amsterdam, The Netherlands, June 1990.
- 17 G. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963.
- 18 P. Feautrier. Parametric integer programming. RAIRO Recherche Opérationnelle/Operations Research, 22(3):243-268, September 1988.
- 19 D. Gannon, W. Jalby, and K. Gallivan. Strategies for cache and local memory management by global program transformations. In Proceedings of the First International Conference on Supercomputing. Springer-Verlag, Athens, Greece, June 1987.
- 20 M. Girkar and C. Polychronopoulos. Compiling issues for supercomputers. In Proceedings of Supercomputing '88, Orlando, FL, November 1988.
- 21 T. Gross and P. Steenkiste. Structured dataflow analysis for arrays and its use in an optimizing compiler. Software: Practice and Experience, 20(2):133-155, February 1990.
- 22 M. Haghighat and C. Polychronopoulos. Symbolic dependence analysis for high performance parallelizing compilers. In Proceedings of the Third Workshop on Languages and Compilers for Parallel Computing, Irvine, CA, August 1990.
- 23 R. Heuft and W. Little. Improved time and parallel processor bounds for Fortran-like loops. IEEE Transactions on Computers, C-31(1):78-81, January 1982.
- 24 N. Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the 16th Annual ACM Symposium on the Theory of Computing, 1984.
- 25 K. Kennedy. Automatic translation of Fortran programs to vector form. Technical Report 476-029-4, Dept. of Mathematical Sciences, Rice University, October 1980.
- 26 K. Kennedy. Triangular Banerjee inequality. Supercomputer Software Newsletter 8, Dept. of Computer Science, Rice University, October 1986.
- 27 K. Kennedy, K. S. McKinley, and C. Tseng. Analysis and transformation in the ParaScope Editor. In Proceedings of the 1991 ACM International Conference on Supercomputing, Cologne, Germany, June 1991.
- 28 K. Kennedy, K. S. McKinley, and C. Tseng. Interactive parallel programming using the ParaScope Editor. IEEE Transactions on Parallel and Distributed Systems, 2(3), July 1991.
- 29 D. Klappholz and X. Kong. Extending the Banerjee-Wolfe test to handle execution conditions. Technical Report 9101, Dept. of EE/CS, Stevens Institute of Technology, 1991.
- 30 D. Klappholz, K. Psarris, and X. Kong. On the perfect accuracy of an approximate subscript analysis test. In Proceedings of the 1990 ACM International Conference on Supercomputing, Amsterdam, The Netherlands, June 1990.
- 31 X. Kong, D. Klappholz, and K. Psarris. The I test: A new test for subscript data dependence. In Proceedings of the 1990 International Conference on Parallel Processing, St. Charles, IL, August 1990.
- 32 D. Kuck. The Structure of Computers and Computations, Volume 1. John Wiley and Sons, New York, NY, 1978.
- 33 D. Kuck, R. Kuhn, D. Padua, B. Leasure, and M. J. Wolfe. Dependence graphs and compiler optimizations. In Conference Record of the Eighth ACM Symposium on the Principles of Programming Languages, Williamsburg, VA, January 1981.
- 34 D. Kuck, Y. Muraoka, and S. Chen. On the number of operations simultaneously executable in Fortran-like programs and their resulting speedup. IEEE Transactions on Computers, C-21(12):1293-1310, December 1972.
- 35 R. Kuhn. Optimization and Interconnection Complexity for: Parallel Processors, Single-Stage Networks, and Decision Trees. PhD thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, February 1980.
- 36 L. Lamport. The parallel execution of DO loops. Communications of the ACM, 17(2):83-93, February 1974.
- 37 Z. Li and P. Yew. Some results on exact data dependence analysis. In D. Gelernter, A. Nicolau, and D. Padua, editors, Languages and Compilers for Parallel Computing. The MIT Press, 1990.
- 38 Z. Li, P. Yew, and C. Zhu. Data dependence analysis on multi-dimensional array references. In Proceedings of the 1989 ACM International Conference on Supercomputing, Crete, Greece, June 1989.
- 39 A. Lichnewsky and F. Thomasset. Introducing symbolic problem solving techniques in the dependence testing phases of a vectorizer. In Proceedings of the Second International Conference on Supercomputing, St. Malo, France, July 1988.
- 40 L. Lu and M. Chen. Subdomain dependence test for massive parallelism. In Proceedings of Supercomputing '90, New York, NY, November 1990.
- 41 F. McMahon. The Livermore Fortran Kernels: A computer test of the numerical performance range. Technical Report UCRL-53745, Lawrence Livermore National Laboratory, 1986.
- 42 Y. Muraoka. Parallelism Exposure and Exploitation in Programs. PhD thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, February 1971. Report No. 71-424.
- 43 A. Porterfield. Software Methods for Improvement of Cache Performance. PhD thesis, Rice University, May 1989.
- 44 A. Schrijver. Theory of Linear and Integer Programming. John Wiley and Sons, Chichester, Great Britain, 1986.
- 45 Z. Shen, Z. Li, and P. Yew. An empirical study of Fortran programs for parallelizing compilers. IEEE Transactions on Parallel and Distributed Systems, 1(3):356-364, July 1990.
- 46 R. Shostak. Deciding linear inequalities by computing loop residues. Journal of the ACM, 28(4):769-779, October 1981.
- 47 R. Triolet. Interprocedural analysis for program restructuring with Parafrase. CSRD Rpt. No. 538, Dept. of Computer Science, University of Illinois at Urbana-Champaign, December 1985.
- 48 R. Triolet, F. Irigoin, and P. Feautrier. Direct parallelization of CALL statements. In Proceedings of the SIGPLAN '86 Symposium on Compiler Construction, Palo Alto, CA, July 1986.
- 49 J. Uniejewski. SPEC Benchmark Suite: designed for today's advanced systems. SPEC Newsletter Volume 1, Issue 1, SPEC, Fall 1989.
- 50 D. Wallace. Dependence of multi-dimensional array references. In Proceedings of the Second International Conference on Supercomputing, St. Malo, France, July 1988.
- 51 M. E. Wolf and M. Lam. Maximizing parallelism via loop transformations. In Proceedings of the Third Workshop on Languages and Compilers for Parallel Computing, Irvine, CA, August 1990.
- 52 M. J. Wolfe. Techniques for improving the inherent parallelism in programs. Master's thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, July 1978.
- 53 M. J. Wolfe. Optimizing Supercompilers for Supercomputers. PhD thesis, Dept. of Computer Science, University of Illinois at Urbana-Champaign, October 1982.
- 54 M. J. Wolfe. Loop skewing: The wavefront method revisited. International Journal of Parallel Programming, 15(4):279-293, August 1986.
- 55 M. J. Wolfe. Optimizing Supercompilers for Supercomputers. The MIT Press, Cambridge, MA, 1989.
- 56 M. J. Wolfe and C. Tseng. The Power test for data dependence. Technical Report CS/E 90-015, Dept. of Computer Science and Engineering, Oregon Graduate Institute, August 1990. To appear in IEEE Transactions on Parallel and Distributed Systems.