Neil D. Jones is Professor Emeritus at the University of Copenhagen. He was a professor of Computer Science in the US, Canada, and Denmark from 1962 until his retirement in 2007. He continues to be professionally active, e.g., writing papers and serving on program committees. In 2009 he received a lifetime Research Prize from the Alexander von Humboldt-Stiftung in Germany, and he is a Fellow of the ACM (Association for Computing Machinery). Research areas: programming languages, including partial evaluation and abstract interpretation (program flow analysis); computability and complexity theory.
National Institute of Informatics, 20F, Lecture Room
(joint with Lars Hartmann, Jakob Grue Simonsen, Søren Vrist)
In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: *Where are the programs*? We take the question seriously, and propose a model of computation that is at the same time biologically plausible: the model's functioning is defined by a relatively small set of chemical-like reaction rules; programmable: by programs reminiscent of low-level computer machine code; uniform: new hardware is not needed to solve new problems; universal: all computable functions can be computed, in natural ways and without arcane encodings of data and algorithm; stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable; and (last but not least) Turing complete in a strong sense: a universal algorithm exists, able to execute any program, and not asymptotically inefficient. The model has been designed and implemented (for now in silico, on a conventional computer). This work opens new perspectives on just how computation may be specified at the biological level. One goal is to provide a top-down approach to biomolecular computation.
Interpreters, Compilers, and Program Specialisers; the Futamura projections: (1) a partial evaluator can compile; (2) a partial evaluator can generate a compiler; (3) a partial evaluator can generate a compiler generator. Underbar types to describe the Futamura projections: types of interpreters; types of compilers; types of specialisers; types for program self-application.
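The three projections above can be made concrete even with a *trivial* specialiser. The sketch below (illustrative, not from the talk) uses Python's functools.partial as the specialiser mix, which merely fixes a program's static first argument, together with a toy interpreter interp for a tiny accumulator language:

```python
from functools import partial

def interp(program, data):
    """Toy interpreter: program is a list of (op, arg) pairs over one accumulator."""
    acc = data
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

mix = partial  # trivial specialiser: fix the static (first) argument

source = [("mul", 2), ("add", 1)]   # a source program: x -> 2*x + 1

# 1st projection: specialising the interpreter to a source program compiles it
target = mix(interp, source)

# 2nd projection: specialising mix to the interpreter yields a compiler
compiler = mix(mix, interp)
target2 = compiler(source)

# 3rd projection: specialising mix to itself yields a compiler generator
cogen = mix(mix, mix)
compiler2 = cogen(interp)
target3 = compiler2(source)
```

Because the specialiser here is trivial, all three "targets" just curry the interpreter and gain no speed; the talk's point is that a non-trivial specialiser removes the interpretation overhead as well.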
Trivial program specialization. Interpretation overhead, including self-interpretation. How specialization can be done; binding-time analysis. Speedups from self-application in the Futamura projections. Optimal program specialization.
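A minimal illustration of non-trivial specialization (a sketch using the classic power example; names are illustrative, not the talk's system): with the exponent static, a specializer can unfold the loop so that the residual program contains only the dynamic multiplications, removing the loop overhead entirely.

```python
def power(n, x):
    """General program: compute x**n by repeated multiplication."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Specializer: n is static, x is dynamic.

    Binding-time analysis would classify the loop as static (it depends
    only on n), so it can be unfolded; only the multiplications on x
    remain in the residual program."""
    body = "1" + " * x" * n                      # unrolled multiplications
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)                               # materialize the residual program
    return env[f"power_{n}"], src

power_5, residual_source = specialize_power(5)   # residual: return 1 * x * x * x * x * x
```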
(joint with Roberto Giacobazzi, Isabella Mastroeni)
How can one construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P′ from a program P in clear source code. Start with a (program-executing) interpreter interp for the language in which P is written. Then "distort" interp so that it is still correct, but its specialization P′ with respect to P is transformed code that is equivalent to the original program, yet harder to understand or analyze. A systematic approach to distortion is to make program P obscure by transforming it into P′, on which (abstract) interpretation is incomplete. Interpreter distortion can be achieved by making sufficiently many interpreter operations residual in the specialization process, enough to defeat an attacker's attempts to extract sensible information from the transformed code. The potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
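A minimal sketch of what code flattening looks like on the output side (the gcd example is illustrative, not from the paper): structured control flow is replaced by a program-counter variable and a dispatch loop, which is the shape a residual program takes when the distorted interpreter's instruction dispatch is made residual rather than specialized away.

```python
def gcd(a, b):
    """Original clear code with structured control flow."""
    while b:
        a, b = b, a % b
    return a

def gcd_flat(a, b):
    """Flattened equivalent: control flow is hidden in a dispatch loop.

    Each 'pc' value corresponds to a fragment of the original program;
    an analyzer must recover the control-flow graph from the dispatcher."""
    pc = 0
    while True:
        if pc == 0:                  # loop test of the original while
            pc = 1 if b else 2
        elif pc == 1:                # loop body
            a, b = b, a % b
            pc = 0
        else:                        # exit
            return a
```

Both functions compute the same input-output function; the flattened version is simply harder for a static analyzer (the approximate interpreter modeling the attacker) to reason about.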
(joint with David Lacey, Eric Van Wyk, Carl Christian Frederiksen)
Rewrite rules with side conditions can elegantly express many classical compiler optimizations for imperative programming languages. In this paper, programs are written in an intermediate language and transformation-enabling side conditions are specified in a temporal logic suitable for describing program data flow. The purpose of this talk is to show how such transformations may be proven correct. Our methodology is illustrated by three familiar optimizations: dead code elimination, constant folding, and code motion. A transformation is correct if whenever it can be applied to a program, the original and transformed programs are semantically equivalent, i.e., they compute the same input-output function. The proofs of semantic equivalence inductively show that a transformation-specific bisimulation relation holds between the original and transformed program computations.
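As a small concrete instance of such a rewrite rule (a sketch using Python's ast module rather than the paper's intermediate language and temporal logic): constant folding fires only under the side condition that both operands are already constants, and correctness means the original and transformed programs compute the same input-output function.

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Rewrite rule: c1 op c2 -> c, with the side condition that both
    operands are constants (this rule needs no data-flow information)."""
    def visit_BinOp(self, node):
        self.generic_visit(node)     # fold subexpressions first
        if (isinstance(node.left, ast.Constant) and
                isinstance(node.right, ast.Constant) and
                isinstance(node.op, (ast.Add, ast.Mult))):
            l, r = node.left.value, node.right.value
            value = l + r if isinstance(node.op, ast.Add) else l * r
            return ast.copy_location(ast.Constant(value), node)
        return node                  # side condition fails: leave unchanged

def transform(src):
    """Apply the rewrite rule everywhere it is enabled."""
    tree = ConstantFolder().visit(ast.parse(src))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)
```

For example, transform("y = 2 * 3 + x") yields "y = 6 + x": the inner multiplication satisfies the side condition and folds, while the outer addition does not, since x is not a constant. Semantic equivalence of the two programs on every x is what a bisimulation-based correctness proof establishes in general.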