Interpretation and instruction path coprocessing
January 1990
Publisher:
  • MIT Press, 55 Hayward St., Cambridge, MA, United States
ISBN: 978-0-262-04107-2
Published: 03 January 1990
Pages: 192
Abstract

No abstract available.

Cited By

  1. ACM
    Savrun-Yeniçeri G, Zhang W, Zhang H, Seckler E, Li C, Brunthaler S, Larsen P and Franz M (2014). Efficient hosted interpreters on the JVM, ACM Transactions on Architecture and Code Optimization, 11:1, (1-24), Online publication date: 1-Feb-2014.
  2. ACM
    Savrun-Yeniçeri G, Zhang W, Zhang H, Li C, Brunthaler S, Larsen P and Franz M Efficient interpreter optimizations for the JVM Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, (113-123)
  3. ACM
    Bertin C, Guillon C and De Bosschere K Compilation and virtualization in the HiPEAC vision Proceedings of the 47th Design Automation Conference, (96-101)
  4. ACM
    Ertl M and Gregg D Combining stack caching with dynamic superinstructions Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators, (7-14)
  5. Ertl M and Gregg D Retargeting JIT Compilers by using C-Compiler Generated Executable Code Proceedings of the 13th International Conference on Parallel Architectures and Compilation Techniques, (41-50)
  6. ACM
    Ogata K, Komatsu H and Nakatani T Bytecode fetch optimization for a Java interpreter Proceedings of the 10th international conference on Architectural support for programming languages and operating systems, (58-67)
  7. ACM
    Ogata K, Komatsu H and Nakatani T (2002). Bytecode fetch optimization for a Java interpreter, ACM SIGPLAN Notices, 37:10, (58-67), Online publication date: 1-Oct-2002.
  8. ACM
    Ogata K, Komatsu H and Nakatani T (2002). Bytecode fetch optimization for a Java interpreter, ACM SIGARCH Computer Architecture News, 30:5, (58-67), Online publication date: 1-Dec-2002.
  9. ACM
    Ogata K, Komatsu H and Nakatani T (2002). Bytecode fetch optimization for a Java interpreter, ACM SIGOPS Operating Systems Review, 36:5, (58-67), Online publication date: 1-Dec-2002.
  10. ACM
    Radhakrishnan R, Bhargava R and John L Improving Java performance using hardware translation Proceedings of the 15th international conference on Supercomputing, (427-439)
  11. ACM
    Chou Y and Shen J Instruction path coprocessors Proceedings of the 27th annual international symposium on Computer architecture, (270-281)
  12. ACM
    Chou Y and Shen J (2000). Instruction path coprocessors, ACM SIGARCH Computer Architecture News, 28:2, (270-281), Online publication date: 1-May-2000.
  13. ACM
    Ertl M Stack caching for interpreters Proceedings of the ACM SIGPLAN 1995 conference on Programming language design and implementation, (315-327)
  14. ACM
    Ertl M (1995). Stack caching for interpreters, ACM SIGPLAN Notices, 30:6, (315-327), Online publication date: 1-Jun-1995.
  15. ACM
    Bird P and Pleban U A semantics-directed partitioning of a processor architecture Proceedings of the 1991 ACM/IEEE conference on Supercomputing, (702-709)
Contributors
  • Eddy H. Debaere, Ghent University
  • Jan M. Van Campenhout, Ghent University

Reviews

Herbert G. Mayer

This great little monograph treats interpretation of programs, as opposed to execution of compiled code. It explains methods for speeding up interpretation through pure software techniques and through multiple processors used as coprocessors of conventional microprocessors. The authors do a superb job of explaining various methods of interpretation and of showing how to overcome the main weakness of interpretation, its slowness.

With two sketchy exceptions, the monograph fails to explain why interpretation should not be dropped altogether in favor of the obviously better (because faster) approach, execution of compiled object code. The exceptions mentioned are fast prototyping and systems with very small memories. Thus the book seems to lack proper motivation. Luckily, this is not the case. The book exists because the authors love their métier; their enthusiasm led them to write an excellent work on more than interpretation and how to make it faster. Included almost by accident are a nice coverage of computer architecture from a software point of view with a good mix of hardware, and an interesting treatment of the interaction of computer architecture and programming (or intermediate) languages.

It is a pity that the authors, who must have had tremendous fun implementing various interpreters for their documented project, tell us only through literature references about the background and history of this project. I wish such information had been included directly. While I was reading I felt almost envious, imagining the joy they must have had, and wished the authors had let me peek deeper into their world of programming and the funding and motivation behind it. The preface states clearly that “this monograph was neither intended to be a tutorial, nor a textbook on interpretation with worked-out examples. Rather, it groups and presents concepts that are seldom found together, and it elaborates on ideas that are not in the main thrust of recent hardware development.”

Chapter 1 sets the stage by discussing those aspects of computer architecture that are applicable to interpretation. It starts by surveying memory hierarchies from fast registers, to caches, to slow mass storage. Microcoded architectures are reviewed and related to interpreted execution. It then reviews the development of RISC architectures, contrasts them with CISC, surveys I/O subsystems and their cyclic pattern of development, and contrasts the von Neumann with the Harvard architecture. The last sections of chapter 1 review programming language development, explaining the growing emphasis on the human programmer and the decreasing importance of the closeness of a language to the target machine.

Chapter 2 presents two views of interpretation—methods for encoding intermediate instructions and ways of representing programs using these intermediate instructions. After discussing ideal intermediate representations through directly interpretable languages and showing the space/time tradeoffs, chapter 2 includes three concrete examples of intermediate languages—the Modula-2 M-code, the Forth threaded code, and a Lispkit LISP code, all of which have actually been used in implementations.

Chapter 3 defines the slowdown factor S as the ratio of the time to interpret a program to the time to execute that same program in compiled form.
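To make concrete what an interpreter of such an intermediate code does on every instruction, here is a minimal sketch of a conventional decode-and-dispatch loop for an invented stack code. It is my own illustration, not code from the book; the point is the central fetch, decode, and dispatch branch that each intermediate instruction pays for on top of its semantic action.

    /* A minimal decode-and-dispatch interpreter for a made-up stack code.
     * Illustration only (not from the book): every virtual instruction pays
     * for the central fetch, decode, and dispatch branch on top of the work
     * done in its semantic routine. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[64];
        int sp = 0;                      /* next free stack slot */
        int pc = 0;                      /* index into the intermediate code */

        for (;;) {
            int op = code[pc++];         /* fetch and decode: the mapping step */
            switch (op) {                /* dispatch, then branch back here */
            case OP_PUSH:  stack[sp++] = code[pc++];          break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[--sp]);       break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* compute and print 2 + 3 */
        const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }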
Generally, S is much greater than 1, a drawback that results from the interpretive overhead τ and the architectural gap; a formula for τ is given. For both causes of slowdown, chapter 3 offers ways to reduce S. One clever way to reduce the overhead is to replicate the mapping instructions of the interpreter at the end of every semantic action. Each action can then determine its successor and branch to that successor directly, instead of having to branch back unconditionally to a central decoding module, which branches to the appropriate semantic routine. While the interpreter grows in size, this trick eliminates one branch for every instruction of the intermediate code.

Slowdown caused by the architectural gap can also be reduced by actually increasing the gap. The authors observe the following phenomenon. The number of host machine operations necessary to interpret intermediate language instructions increases more slowly than the semantic level of such intermediate instructions. Moreover, if the level is raised, fewer instructions are necessary for one fixed program. Thus, one way to speed up interpretation is to raise the level of the intermediate language. Also, the intermediate engine is often a conceptual stack machine, in contrast to the host computer, a general-purpose register machine. Since the number of contiguous push or pop operations is usually small, rarely exceeding 3, such a restricted stack machine can easily be simulated with a few hardware registers without the repeated cost of memory stores or fetches. The semantic gap is thus reduced for the sake of speed-up.
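Both of these software tricks, replicated dispatch and stack caching, are easy to see in code. The sketch below is my own illustration, not the book's implementation, and it assumes the computed-goto extension of GCC and Clang: the mapping and dispatch step is copied to the end of every semantic routine, so no routine has to branch back to a central loop, and the top of the virtual stack is cached in a local variable instead of being re-read from memory.

    /* The same little stack code, interpreted with replicated dispatch and a
     * cached top of stack.  My illustration, not the book's code; relies on
     * the computed-goto extension of GCC and Clang. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        static void *action[] = { &&do_push, &&do_add, &&do_print, &&do_halt };
        int stack[64];
        int sp = 0;      /* next free slot below the cached top of stack */
        int tos = 0;     /* top of stack in a register-like local (dummy until first push) */
        int pc = 0;

    #define DISPATCH() goto *action[code[pc++]]   /* mapping step, replicated below */

        DISPATCH();

    do_push:
        stack[sp++] = tos;       /* spill the old top */
        tos = code[pc++];
        DISPATCH();
    do_add:
        tos += stack[--sp];
        DISPATCH();
    do_print:
        printf("%d\n", tos);
        tos = stack[--sp];
        DISPATCH();
    do_halt:
        return;
    #undef DISPATCH
    }

    int main(void)
    {
        const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);            /* prints 5 */
        return 0;
    }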
Chapter 3 concludes by observing that such pure programming techniques will not suffice to eliminate the interpretive slowdown. Instead, parallel hardware will be required, which leads into chapter 4.

Chapter 4 shows how interpreters can be accelerated through coprocessors. The novel idea is the use of conventional numeric data processors as coprocessors in the instruction path rather than in the data path. After an excellent survey of different types of coprocessors in the data path, classified as peripheral, spying, and independent coprocessors, chapter 4 explains interpretation enhancements by coprocessing the prefetch engine, by executing jumps, and by handling literals. Coprocessing of the prefetch engine works as long as the mapping time T_m is of the order of magnitude of the semantic time T_s. Interpreting jumps on a coprocessor effectively pastes together all semantic routines in a jump-free fashion. Section 4.3 shows two concrete examples of instruction path coprocessors, both designed and implemented from standard components: the first is for M-code, the second for threaded code. Both demonstrate a considerable speedup. The acceleration is limited, however, and section 4.4 explains why it is not even higher.

The concluding remarks in chapter 5 review related work and discuss the applicability of instruction path coprocessing for pure RISC architectures. In the classification of sequential and parallel architectures, the authors leave out the MISD version without offering an explanation. Strongly pipelined architectures can be classified as MISD, and lumping hardware pipelining together with SIMD architectures is highly debatable.

The section “Microprocessors and the von Neumann Bottleneck” states that parallel decoding of multiple subinstructions in a VLIW-type architecture is not possible with contemporary microprocessors. The authors are not on top of the latest processor development: Intel's iWarp is indeed a VLIW architecture on a single chip.

The section on “Language Implementation Techniques” compares compiled with interpreted translation. It suffers somewhat from the authors' blind spot, created by their bias in favor of interpretation. They argue that for compiled execution of m languages on n machines, m×n compilers are needed, while only m+n interpreters suffice for the equivalent job. These equations hold only if the compiler writers do a sloppy job. In reality, the first requirement of any compiler writer for a multilanguage, multitarget environment is to write a single back-end for all languages translated to that target.
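The arithmetic behind this objection can be spelled out with illustrative numbers of my own choosing (they are not taken from the book): with m = 5 source languages and n = 4 target machines, the monolithic approach needs m × n translators, while both the interpretive scheme and a compiler suite with shared back-ends get by with m + n components.

    /* Illustrative component counts; the numbers are mine, not the book's. */
    #include <stdio.h>

    int main(void)
    {
        int m = 5;   /* source languages */
        int n = 4;   /* target machines  */

        printf("monolithic compilers (m x n):            %d\n", m * n);  /* 20 */
        printf("compilers to one IL + IL interpreters:   %d\n", m + n);  /*  9 */
        printf("language front-ends + shared back-ends:  %d\n", m + n);  /*  9 */
        return 0;
    }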
These drawbacks, as well as a few typesetting errors, are minor. They do not detract from the quality of the book. The authors have the rare gift of combining thorough software understanding with an equally deep grasp of hardware and the ability to describe both. Consequently, their characterization of programming languages makes it seem almost natural that logic and object-oriented languages developed when they did, driven by changes in computer architecture. In retrospect, all developments seem natural, but it is a rare art to present such developments in a logically forcing, natural manner. Debaere and Campenhout have mastered this art. They also contrast RISC versus CISC architectures, with their respective advantages and shortcomings, in a beautifully objective manner. Such absence of bias is welcome and cannot be expected from hardware architects.

This book has a more important strength. Before writing about interpretive techniques and speedup, Debaere and Campenhout implemented their ideas, measured the results, and then wrote about it all. That is how it should be, but rarely is, done. Too often we read about paper tigers. I appreciate a book about work actually done. I prefer facts over educated guesses and measured results over hand-waving.

The book does not explain in detail how to build interpreters. Still, anybody implementing an interpreter or simulator will benefit from reading this text, since it offers detailed design discussions of top-level architectural choices that must be made. The monograph offers substance for reasonable decisions. The introductory chapters cover their topics with so much insight that I recommend them as good references for students in computer architecture. Every team designing a new processor will also benefit from studying this text carefully. Not only will the monograph provide a solid background for writing a simulator early, so that the architecture can be tested before it exists in silicon, it also provides good insights into tradeoffs of functionality versus silicon area. Thus it helps answer some difficult questions: How often will this instruction be executed? Can that instruction be generated by a compiler? What will be the impact of this instruction on the overall execution profile? How large should the on-chip cache be?

The text builds a good case for using interpreters in academia. In an educational environment it is difficult to design and then build, test, and debug actual hardware in one semester; it is difficult to touch the hardware. Software exercises, by contrast, are easy to come up with. With an interpreter, the (virtual) hardware can be built, touched, and quickly modified. The monograph serves as a basis for this.

This book is good; it promises much and delivers more. In addition to a sound and sensible approach to speeding up interpretation, Debaere and Campenhout have written an introduction to computer architecture. I will keep it as a valuable reference for my work in academia and industry.
