PC conflicts: Aggelos Kiayias

Submitted

[PDF] Submission 1 Aug 2019 8:39:38am EDT · 50dc557c030eaea055461a021b26a722477bf868f627677f2c70552b70031520

The root causes of many security vulnerabilities include a pernicious combination of two problems, often regarded as inescapable aspects of computing. First, the protection mechanisms provided by the mainstream processor architecture and C/C++ language abstractions, dating back to the 1970s and before, provide only coarse-grain virtual-memory-based protection. Second, mainstream system engineering relies almost exclusively on test-and-debug methods, with (at best) prose specifications. These methods have historically sufficed commercially for much of the computer industry, but they fail to prevent large numbers of exploitable bugs, and the security problems that this causes are becoming ever more acute. In this paper we show how more rigorous engineering methods can be applied to the development of a new security-enhanced processor architecture, with its accompanying hardware implementation and software stack. We use formal models of the complete instruction-set architecture (ISA) at the heart of the design and engineering process, both in lightweight ways that support and improve normal engineering practice -- as documentation, in emulators used as a test oracle for hardware and for running software, and for test generation -- and for formal verification. We formalise key intended security properties of the design, and establish that these hold with mechanised proof. This is for the same complete ISA models (complete enough to boot operating systems), without idealisation. We do this for CHERI, an architecture with hardware capabilities that supports fine-grained memory protection and scalable secure compartmentalisation, while offering a smooth adoption path for existing software. CHERI is a maturing research architecture, developed since 2010, with work now underway to explore its possible adoption in mass-market commercial processors. The rigorous engineering work described here has been an integral part of its development to date, enabling more rapid and confident experimentation, and boosting confidence in the design.
K. Nienhuis, A. Joannou, A. Fox, M. Roe, T. Bauereiss, B. Campbell, M. Naylor, R. Norton, S. Moore, P. Neumann, I. Stark, R. Watson, P. Sewell

Kyndylan Nienhuis (University of Cambridge) <Kyndylan.Nienhuis@cl.cam.ac.uk>

Alexandre Joannou (University of Cambridge) <aj443@cam.ac.uk>

Anthony Fox (Arm Limited) <anthony.fox@arm.com>

Michael Roe (University of Cambridge) <Michael.Roe@cl.cam.ac.uk>

Thomas Bauereiss (University of Cambridge) <Thomas.Bauereiss@cl.cam.ac.uk>

Brian Campbell (University of Edinburgh) <Brian.Campbell@ed.ac.uk>

Matthew Naylor (University of Cambridge) <Matthew.Naylor@cl.cam.ac.uk>

Robert M. Norton (University of Cambridge) <Robert.Norton@cl.cam.ac.uk>

Simon Moore (University of Cambridge) <Simon.Moore@cl.cam.ac.uk>

Peter Neumann (SRI International) <peter.neumann@sri.com>

Ian Stark (University of Edinburgh) <ian.stark@ed.ac.uk>

Robert N. M. Watson (University of Cambridge) <Robert.Watson@cl.cam.ac.uk>

Peter Sewell (University of Cambridge) <Peter.Sewell@cl.cam.ac.uk>

Topics: Access control and authorization; Attacks and defenses; Hardware security; Language-based security; Security architectures; Trustworthy computing

Options
Names of prior submission venues in last 12 months [REQUIRED, CONFIDENTIAL]: A two-page abstract of part of this work was presented at the 2018 Automated Reasoning Workshop (ARW).
✓ Consider this paper for the Distinguished Student Paper Award