Thursday/Friday, February 25/26, 2016
Abstract:
Recent years have seen several unprecedented, high-profile cyber-security
incidents, with hacks targeting Sony, AshleyMadison, JP Morgan Chase,
Anthem Bluecross, Fiat/Chrysler, and the US Office of Personnel
Management. These incidents are, however, not limited to the United
States, as illustrated by an industrial hack of a German steel mill that
prevented the shutdown of a blast furnace, causing substantial financial
damage. These examples illustrate just how much is at stake, and how
pressing the need for lasting solutions really is.
Language-based security is a research area that leverages programming
language principles to address security challenges. In this talk, I
will briefly review why language-based security is attractive, what has
been achieved in the past, and how we address current and upcoming
security challenges emanating from (i) highly dynamic programming
languages, exemplified by JavaScript, and (ii) the emerging threat landscape.
Abstract: Modern web applications are usually based on JavaScript. Due to its loosely typed, dynamic nature, testing is time-consuming and costly. Techniques for regression testing and fault localization as well as frameworks like the Google Web Toolkit (GWT) ease the development and testing process, but approaches to reduce the testing effort are still required. In this paper, we investigate the efficiency of an extended, graph-walk-based selective regression testing technique that aims to detect client-side code changes in order to determine a reduced set of web tests. To do this, we analyze web applications created with GWT at different precision levels and with varying lookaheads. We examine how these parameters affect the localization of client-side code changes, the run time, the memory consumption, and the number of web tests selected for re-execution. In addition, we propose a dynamic heuristic that targets an analysis that is as exact as possible while reducing memory consumption. The results are partially applicable to non-GWT applications. For web applications, they show that the efficiency depends to a great degree on both the structure of the application and the code modifications, which is why we propose further measures tailored to the results of our approach.
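The graph-walk idea in the abstract above can be sketched in a few lines of plain Java. This is a minimal illustration with hypothetical names (the paper's actual analysis, precision levels, and GWT integration are not reproduced): starting from each web test, walk the dependency graph up to a fixed lookahead depth and select the test only if the walk reaches a changed code artifact.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of graph-walk-based test selection: a test is
// selected for re-execution if its dependency walk, bounded by a
// lookahead depth, reaches a changed client-side artifact.
public class SelectiveRegression {
    static boolean reachesChange(Map<String, List<String>> deps,
                                 String node, Set<String> changed, int lookahead) {
        if (changed.contains(node)) return true;
        if (lookahead == 0) return false;        // lookahead bounds the walk depth
        for (String next : deps.getOrDefault(node, List.of()))
            if (reachesChange(deps, next, changed, lookahead - 1)) return true;
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "testLogin", List.of("LoginView"),
            "testCart",  List.of("CartView"),
            "LoginView", List.of("AuthService"));
        Set<String> changed = Set.of("AuthService");
        // testLogin transitively reaches the change within lookahead 2; testCart does not.
        System.out.println(reachesChange(deps, "testLogin", changed, 2)); // true
        System.out.println(reachesChange(deps, "testCart", changed, 2));  // false
    }
}
```

A larger lookahead makes the selection more precise but more expensive, which is the run-time/memory trade-off the paper evaluates.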
Abstract: Systematic reuse of software artifacts can be achieved with software product lines, which represent a family of similar software systems by their commonalities and variabilities. A variability model (e.g., a feature model) describes the commonalities and variabilities and serves as a basis for a product configuration, i.e., a selection of features according to the constraints defined in the model. These models can contain additional information, such as attributes, which enrich features with typed values for various purposes (e.g., optimization, simplified readability). Unfortunately, these attributes are not directly reusable in code artifacts, as the model is only used to assemble or change code artifacts according to a product configuration. There are many languages capable of implementing software product lines, such as DeltaJ, but they do not support the direct propagation of feature attributes to the associated code artifacts. In this paper, we propose parametric DeltaJ, an adaptation of DeltaJ, a delta-oriented programming language for Java. Parametric DeltaJ allows the propagation of typed attributes from an attributed feature model to Java code artifacts. We perform a case study to show that introducing parameters reduces the number of variables, delta modules, and lines of code.
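The attribute-propagation idea can be illustrated outside DeltaJ with a plain Java sketch (all names and the attribute `Cache.size` are hypothetical, not taken from the paper): instead of one hand-written artifact variant per attribute value, a single parameterized template is instantiated with the typed attribute from the feature model during product derivation.

```java
import java.util.Map;

// Hypothetical sketch of propagating a typed feature attribute
// (e.g., Cache.size = 256 in an attributed feature model) into a
// generated Java artifact, instead of duplicating the artifact
// once per attribute value.
public class AttributePropagation {
    static String generateCacheClass(Map<String, Integer> featureAttributes) {
        int size = featureAttributes.get("Cache.size"); // typed value from the model
        return "public class Cache { static final int SIZE = " + size + "; }";
    }

    public static void main(String[] args) {
        System.out.println(generateCacheClass(Map.of("Cache.size", 256)));
    }
}
```

This is the effect the case study measures: one parameterized module replaces several near-identical delta modules and variables.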
Abstract:
With the widespread use of multicore processors, software becomes more and more
diverse in its use of parallel computing resources. To address all application
requirements, each with the appropriate abstraction, developers mix and match
various concurrency abstractions made available to them via libraries and
frameworks. Unfortunately, today's tools such as debuggers and profilers do not
support the diversity of these abstractions. Instead of enabling developers to
reason about the high-level programming concepts they used to express their
programs, the tools work only on the library's implementation level. While this
is a common problem also for other libraries and frameworks, the complexity of
concurrency exacerbates the issue further, and reasoning on the higher levels of
the concurrency abstractions is essential to manage the associated complexity.
In this position paper, we identify open research issues and propose to build
tools based on a common meta-level interface that enables developers to reason
about their programs in terms of the high-level concepts they used to implement
them.
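One way to picture such a meta-level interface is the following minimal Java sketch (all names hypothetical; the position paper does not prescribe a concrete API): concurrency libraries emit abstraction-level events, and tools such as debuggers or profilers subscribe to these events instead of inspecting implementation details like threads, queues, or locks.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a common meta-level interface: libraries report
// events in terms of their abstractions ("actor", "future"), and a tool
// consumes those events without knowing the implementation.
public class MetaLevelDemo {
    interface ConcurrencyEvents {
        void entityCreated(long id, String abstraction); // e.g. "actor"
        void messageSent(long from, long to);
    }

    // A trivial "tool": records events at the abstraction level.
    static class Recorder implements ConcurrencyEvents {
        final List<String> log = new ArrayList<>();
        public void entityCreated(long id, String a) { log.add(a + "#" + id); }
        public void messageSent(long from, long to)  { log.add(from + "->" + to); }
    }

    public static void main(String[] args) {
        Recorder tool = new Recorder();
        // An actor library would invoke these hooks from its implementation.
        tool.entityCreated(1, "actor");
        tool.entityCreated(2, "actor");
        tool.messageSent(1, 2);
        System.out.println(tool.log); // [actor#1, actor#2, 1->2]
    }
}
```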
Abstract: In the past, we considered type inference for Java with generics and lambdas. Our type inference algorithm determines nominal types with respect to a given environment. This is a hard restriction, as the code cannot be compiled without a given environment. In this paper, we present a type inference algorithm for a Java-like language that infers structural types without a given environment. This means that in any environment, any type that fulfills the demanded conditions can be instantiated. The structural types are given as generated interfaces.
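As a rough illustration of "structural types as generated interfaces" (names hypothetical, not the paper's notation): if a method body only demands a method `m()` of its argument, the inferred type can be rendered as an interface with exactly that method, and any type providing it fits.

```java
// Hypothetical illustration: the demand "argument has int m()" is
// materialized as a generated interface, so the method is usable in
// any environment with any type that fulfills the demanded condition.
interface GeneratedI1 {             // generated from the structural demand
    int m();
}

public class StructuralInference {
    static int use(GeneratedI1 x) { // inferred parameter type: the generated interface
        return x.m() + 1;
    }

    public static void main(String[] args) {
        // Any fulfilling type can be instantiated; here a lambda suffices.
        System.out.println(use(() -> 41)); // 42
    }
}
```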
Abstract: A programming language is usually taught by starting with a small kernel that is continuously extended to the full set of language features. Unfortunately, the existence of advanced language features might confuse students if they accidentally use them and get incomprehensible error messages. To avoid these problems, one should group the language features into different levels so that beginners start with a simple level and advance to higher levels with more features as they become more experienced. In order to support such a concept for arbitrary programming languages, we present in this paper a parser generator system, called Levels, for level-based programming languages. With a level-based language, one can stepwise increase the language level in order to match the experience of the students. Furthermore, one can implement level-specific semantic analyses in order to provide comprehensible error messages. Our Levels system generates level-specific parsers from a unified syntax description and provides an infrastructure to implement level-specific semantic analyses as well as program editors to develop level-specific programs.
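The benefit of level-specific analyses can be sketched with a toy check (the constructs and level numbers below are invented, not taken from the Levels system): each construct carries a minimum level, and using a construct above the current level yields a comprehensible message instead of a cryptic parse error.

```java
import java.util.Map;

// Hypothetical sketch of a level-specific check: constructs are mapped
// to the level at which they become available, and violations produce
// a beginner-friendly diagnostic.
public class LevelCheck {
    static final Map<String, Integer> MIN_LEVEL =
        Map.of("if", 1, "while", 1, "class", 2, "lambda", 3);

    static String check(String construct, int currentLevel) {
        int needed = MIN_LEVEL.getOrDefault(construct, 1);
        if (needed > currentLevel)
            return "'" + construct + "' is only available from level " + needed
                 + "; you are at level " + currentLevel + ".";
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(check("lambda", 1)); // comprehensible diagnostic
        System.out.println(check("if", 1));     // ok
    }
}
```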
Abstract: Dependency injection frameworks such as the Spring Framework rely on dynamic language capabilities of Java. When these capabilities are used in unforeseen ways, failures can occur that the Java compiler cannot detect at compile time. This work discusses the application of static program code analysis as a means to restore said compile-time checks. First, possible errors in the configuration of Spring are identified and classified; attributed grammars are used to formally specify the error conditions. Subsequently, a prototypical compiler extension based on the Java Pluggable Annotation Processing API is presented.
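A compiler extension of the kind described can be hooked in via the standard `javax.annotation.processing` API. The skeleton below is a minimal sketch, not the prototype from the work: it registers for Spring's `@Autowired` annotation and would report configuration problems as ordinary compiler messages (the actual checks are only hinted at by a placeholder).

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Minimal sketch of a compiler extension via the Pluggable Annotation
// Processing API: visit elements annotated with Spring's @Autowired and
// report findings through the compiler's Messager.
@SupportedAnnotationTypes("org.springframework.beans.factory.annotation.Autowired")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class SpringCheckProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element injected : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Placeholder for a real check, e.g. "does a matching bean exist?"
                processingEnv.getMessager().printMessage(
                    Diagnostic.Kind.NOTE, "checking injection point", injected);
            }
        }
        return false; // do not claim the annotation; other processors may still run
    }
}
```

Such a processor is activated simply by putting it on the compiler's processor path, which is what makes this mechanism attractive for restoring compile-time checks.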
Abstract: The Sig programming language is a total functional, clocked synchronous data-flow language. Its core has been designed to admit concise coalgebraic semantics. Universal coalgebra is an expressive theoretical framework for behavioral semantics, but it is traditionally phrased in abstract categorical language and generally considered inaccessible. In the present paper, we rephrase the coalgebraic concepts relevant to the Sig language semantics in basic mathematical notation. We demonstrate how the language features characteristic of its paradigms, namely delay for data flow and apply for higher-order functional programming, are shaped naturally by the semantic structure. Thus the present paper serves two purposes: as a gentle, self-contained, and applied introduction to coalgebraic semantics, and as an explication of the denotational and operational design of the Sig core language.
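As a standard illustration of the kind of coalgebraic setting involved (a textbook example, not taken from the paper, which treats the richer clocked case): streams over a value set \(A\) form the final coalgebra of the functor \(F X = A \times X\), and operations like a prepend-style \(\mathrm{delay}\) and a pointwise \(\mathrm{apply}\) can be specified against the destructors \(\mathrm{hd}\) and \(\mathrm{tl}\):

```latex
\[
  F X = A \times X, \qquad
  \langle \mathrm{hd}, \mathrm{tl} \rangle : A^{\omega} \to A \times A^{\omega}
\]
\[
  \mathrm{hd}(\mathrm{delay}\;a\;s) = a, \qquad
  \mathrm{tl}(\mathrm{delay}\;a\;s) = s
\]
\[
  \mathrm{hd}(\mathrm{apply}\;f\;s) = \mathrm{hd}(f)\bigl(\mathrm{hd}(s)\bigr), \qquad
  \mathrm{tl}(\mathrm{apply}\;f\;s) = \mathrm{apply}\;(\mathrm{tl}(f))\;(\mathrm{tl}(s))
\]
```

Specifications of this destructor-based shape are exactly what finality turns into well-defined stream functions.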
Abstract: The polymorphically typed functional core language LRP is a lambda calculus with recursive let-expressions, data constructors, case-expressions, and a seq-operator that uses call-by-need evaluation. In this work, LRP is extended by scoped work decorations to ease computations when reasoning about program improvements. The considered language LRPw extends LRP by two constructs to represent work (i.e., numbers of reduction steps) that can be shared between several subexpressions. Due to the surprising observation that this extension is proper, some effort is required to re-establish the correctness and optimization properties of a set of program transformations also in LRPw. Based on these results, the correctness of several useful computation rules for work decorations is shown.