Generation of Fast Interpreters for Huffman Compressed Bytecode

Mario Latendresse, Marc Feeley

Workshop on Interpreters, Virtual Machines and Emulators (IVME03), San Diego, California, 12 Jun 2003


Abstract

Embedded systems often have severe memory constraints that require careful encoding of programs. For example, smart cards have on the order of 1K of RAM, 16K of non-volatile memory, and 24K of ROM. A virtual machine can be an effective way to obtain compact programs, but instructions are commonly encoded using one byte for the opcode and multiple bytes for the operands, which can be wasteful and thus limit the size of programs runnable on embedded systems. Our approach uses canonical Huffman codes to generate compact opcodes with custom-sized operand fields, together with a virtual machine that directly executes this compact code. We present techniques to automatically generate the new instruction formats and the decoder. In effect, this automatically creates both an instruction set for a customized virtual machine and an implementation of that machine. We show that fast decoding of these compressed virtual instructions is feasible without prior decompression, and we measure the speed of the generated decoders through experiments on Scheme and Java. The Java benchmarks show an average execution slowdown of 9%. Compression factors depend strongly on the original bytecode and the training sample, but typically range from 30% to 60%.
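
To make the idea concrete, the sketch below shows one common way a canonical Huffman opcode decoder can be written in C. The table layout (first_code, first_sym, count), the 12-bit maximum code length, and the bit-at-a-time decoding loop are illustrative assumptions; they are not the decoders generated by the paper's tool, which are specialized to the instruction formats chosen for a particular training sample.

    #include <stdint.h>

    #define MAX_CODE_LEN 12  /* hypothetical maximum opcode length in bits */

    typedef struct {
        uint32_t first_code[MAX_CODE_LEN + 1]; /* smallest code of each length */
        uint16_t first_sym [MAX_CODE_LEN + 1]; /* index of first opcode of each length */
        uint16_t count     [MAX_CODE_LEN + 1]; /* number of codes of each length */
        uint8_t  opcode[256];                  /* opcodes sorted by (length, code) */
    } huff_table;

    typedef struct {
        const uint8_t *stream; /* Huffman-compressed bytecode */
        uint32_t bitpos;       /* current bit offset into the stream */
    } bitreader;

    static inline uint32_t read_bits(bitreader *r, int n) {
        uint32_t v = 0;
        while (n-- > 0) {
            v = (v << 1) | ((r->stream[r->bitpos >> 3] >> (7 - (r->bitpos & 7))) & 1);
            r->bitpos++;
        }
        return v;
    }

    /* Decode one variable-length opcode: extend the code one bit at a time
     * until it falls in the range assigned to codes of the current length. */
    static uint8_t decode_opcode(bitreader *r, const huff_table *t) {
        uint32_t code = 0;
        for (int len = 1; len <= MAX_CODE_LEN; len++) {
            code = (code << 1) | read_bits(r, 1);
            uint32_t offset = code - t->first_code[len]; /* wraps if code < first_code[len] */
            if (offset < t->count[len])
                return t->opcode[t->first_sym[len] + offset];
        }
        return 0xFF; /* not a valid encoding */
    }

Once the opcode is known, an interpreter of this style would fetch the instruction's custom-sized operand fields with further read_bits calls of the widths chosen for that instruction; a generated decoder would typically specialize and inline this logic per instruction rather than loop bit by bit, which is how fast decoding without prior decompression becomes practical.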

