Friday 16 June 2017

On The Tension between Utility and Innovation in Software Engineering


For a piece of software engineering research to be published, it must above all provide some evidence that it is of value (or at least potentially of value) in a practical, industrial context. Software Engineering publications and grant proposals live or die by their perceived impact upon, and value to, the software industry.

To be published at a high-impact venue, a piece of research must demonstrate this value with a convincing empirical study, with extra credit given to studies that involve large numbers of industrial developers and projects. For a grant proposal to be accepted, it should ideally involve significant commitments from industrial partners.

Of course this makes sense. Funding councils rightly expect some return on investment; funding Software Engineering researchers should yield some impact upon the industry. The motivation of any research should always ultimately be to improve the state of the art in some respect. Extensive involvement of industrial partners can help to bridge the “valley of death” in the technology readiness levels between conceptual research and industrial application.

However, there are downsides to framing the value of a research area in such starkly utilitarian terms. There is a risk that research effort becomes overly concentrated on activities such as tool development, developer studies, and data collection. Evaluation shifts its focus from novelty and innovation to issues such as the ease with which a tool can be deployed and the wealth of data supporting its efficacy. This is fine if an idea is easy to implement as a tool and the data is easy to collect. Unfortunately, this tends to be the case only for technology that is already well established (for which there are already plenty of APIs available, for example), and where the idea lends itself to easy data collection, or the data already exists and merely has to be re-analysed.

There is, however, no incentive (in fact, there is a disincentive) to embark upon a line of research for which tools and empirical studies are harder to construct in the short term, or for which data cannot readily be harvested from software repositories. Truly visionary ideas might require a long time (5-10 years) to refine, and will potentially require cultural changes that put them (at least in the initial years of a project) beyond the remit of empirical studies. Yet it is surely within this space that the truly game-changing innovations lie.

The convention is that early-stage research should be published in workshops and “new idea” papers, and can only graduate to full conference or journal papers once it is “mature” enough. This is problematic because a truly risky, long-term project of the sort described above would not produce the publication record necessary to sustain an academic career.

This state of affairs is by no means a necessity. For example, the few Formal Methods conferences that I’ve attended and proceedings that I’ve read have always struck me as more welcoming of risky ideas with sketchier evaluations (despite the fact that these same conferences and researchers also have formidable links to industry).

It is not obvious what the solution might be. However, I do believe that it probably has to involve a loosening of the empiricist straitjacket.*



* For fear of this being misread, it is not my opinion that papers should in general be excused for not having a rigorous empirical study. It’s just that some should be.