Learning from Authoritative Security Experiment Results

The 2013 LASER Workshop

LASER 2013 Call for Papers

Full paper submissions due July 11, 2013 (extended date)

As computer security grows in importance, the goal of this workshop is to help the security community quickly identify and learn from both successes and failures. The workshop focuses on research that has a valid hypothesis and reproducible experimental methodology, but where the results were unexpected or did not validate the hypothesis, where the methodology addressed difficult and/or unexpected issues, or where unsuspected confounding issues were found in prior work.

Topics include, but are not limited to:

  • Unsuccessful research in experimental security
  • Methods and designs for security experiments
  • Experimental confounds, mistakes, and mitigations
  • Successes and failures reproducing experimental techniques and/or results
  • Hypothesis and methods development (e.g., realism, fidelity, scale)

The specific security results of experiments are of secondary interest for this workshop.

Journals and conferences typically publish papers that report successful experiments that extend our knowledge of the science of security or assess whether an engineering project has performed as anticipated. Some of these results have high impact; others do not.

Unfortunately, papers reporting on experiments with unanticipated results that the experimenters cannot explain, experiments whose results are not statistically significant, or engineering efforts that fail to produce the expected results are frequently not considered publishable because they do not appear to extend our knowledge. Yet some of these “failures” may actually provide clues to results even more significant than the original experimenter intended. The research is useful even though the results are unexpected.

Useful research includes a well-reasoned hypothesis, a well-defined method for testing that hypothesis, and results that either disprove or fail to prove the hypothesis. It also includes a methodology documented in enough detail that others can follow the same path. When framed in this way, “unsuccessful” research furthers our knowledge of a hypothesis and a testing method. Others can reproduce the experiment itself, vary the methods, and change the hypothesis; the original result provides a place to begin.

As an example, consider an experiment assessing a protocol that uses biometric authentication as part of granting access to a computer system. The null hypothesis might be that the biometric technology does not distinguish between two different people; in other words, that the biometric element of the protocol makes the approach vulnerable to a masquerade attack. Suppose the null hypothesis is not rejected; publishing this result is still valuable. First, it might prevent others from trying the same biometric method. Second, it might lead them to develop the technology further, determining whether a different style of biometrics would improve matters, or whether the environment in which authentication is attempted makes a difference. For example, a retinal scan may fail at recognizing people in a crowd, but succeed where the users present themselves one at a time to an admission device with controlled lighting, or when multiple “tries” are allowed. Third, it might lead to modifying the encompassing protocol so as to make masquerading more difficult for some other reason.
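
As a purely illustrative sketch of how such a null hypothesis might be examined (the counts, tooling, and significance level below are hypothetical and not part of this call), one could compare how often the biometric accepts genuine users versus impostors and test whether the two acceptance rates are distinguishable:

    # Illustrative sketch only; all counts are hypothetical.
    # H0: the biometric element does not distinguish genuine users from
    # impostors, i.e., it accepts masquerade attempts as readily as genuine ones.
    from scipy.stats import fisher_exact

    genuine_accepted, genuine_rejected = 48, 2    # 50 genuine attempts
    impostor_accepted, impostor_rejected = 41, 9  # 50 masquerade attempts

    table = [[genuine_accepted, genuine_rejected],
             [impostor_accepted, impostor_rejected]]

    # Fisher's exact test of independence between "attempt is genuine"
    # and "attempt is accepted".
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

    alpha = 0.05
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    if p_value >= alpha:
        print("Fail to reject H0: no evidence the biometric resists masquerade.")
    else:
        print("Reject H0: acceptance rates differ for genuine and impostor attempts.")

Either outcome is informative: failing to reject the null hypothesis is exactly the kind of negative result this workshop aims to see documented and shared.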

Equally important is research designed to reproduce the results of earlier work. Reproducibility is key to science as a way to validate earlier work or to uncover errors or problems in it. Even a failure to reproduce the results can lead to a deeper understanding of the phenomena that the earlier work uncovered.

Finally, discussions of papers, proposals, and projects often turn to strategies that were tried before and failed, but usually no published record of those failures exists. Old ideas are often pursued because the community is not aware of the prior failure. The workshop provides a venue that can help close this gap in the security community’s research literature.

Both full papers and structured abstracts are solicited. Full papers follow a typical pattern of submission, review, notification, pre-conference version, conference presentation, and final post-conference version. One-page structured abstracts serve two purposes: (1) to enable authors to receive early feedback prior to investing significant effort writing papers, and (2) to provide all attendees a forum to share an abstract of their work before the workshop.

Abstracts will be reviewed by at least two PC members, with comments returned within 5-10 days; abstracts submitted before June 27 will receive an “encouraged,” “neutral,” or “discouraged” indication for submission of a full paper based on the abstract. The pre-submission feedback is for the author’s use only. All abstracts deemed relevant by the PC will be available on the LASER website before the conference, but they will not be part of the proceedings.

Proceedings

The 2013 LASER proceedings are published by USENIX, which provides free, perpetual online access to technical papers. USENIX has been committed to the “Open Access to Research” movement since 2008.

Further Information

If you have questions or comments about LASER, or if you would like additional information about the workshop, contact us at: info@laser-workshop.org.

Join the LASER mailing list to stay informed of LASER news.