Object-oriented code looks different from procedural code. The main difference is the increased frequency of polymorphic calls. A polymorphic call looks like a procedural call, but whereas a procedural call has exactly one possible target subroutine, a polymorphic call can result in the execution of one of several different subroutines. The choice is made at run time and depends on the type of the receiving object (the first argument). Polymorphic calls enable clean, modular design: they allow the programmer to invoke an operation on an object without knowing its exact type in advance.

This flexibility incurs an overhead: in general, polymorphic calls must be resolved at run time. The cost of run-time call resolution can lead a programmer to sacrifice clarity of design for efficiency, replacing polymorphic calls with several single-target procedural calls and thereby removing run-time polymorphism. This practice typically leads to a more rigid program structure and to code duplication, increasing both the short-term effort required to build a functional prototype and the long-term effort of maintaining and adapting a program to changing needs.

We study techniques to minimize the run-time cost of polymorphic calls. In the software domain, we minimize the memory overhead of table-based implementations (message dispatch tables), which are the most efficient in terms of the number of instructions executed. In the hardware domain, we reduce the cycle cost of these instructions through indirect branch prediction. For reasonable transistor budgets, hit rates of more than 95% can be achieved, so that only one out of twenty polymorphic calls incurs significant cost at run time. The design of clear, maintainable, and reusable code, as enabled by object-oriented technology, can thereby become less constrained by efficiency considerations: only in very time-critical program segments should the programmer avoid polymorphism.
In other words, object-oriented code can become the norm rather than the exception. From our own experience in building software architectures, we consider this a Good Thing.