Research Perspectives - Tools for Visualisation of Portfolios

EPSRC Database (source: RCUK EPSRC data)

EP/I00520X/1 - Trusted Autonomous Systems

Research Perspectives grant details from the EPSRC portfolio

Principal Investigator: Professor AR Lomuscio, Dept of Computing, Imperial College London


Leadership Fellowships

Research Areas

Artificial Intelligence Technologies

Programming Languages and Compilers

Verification and Correctness


University of Southampton

University of Rome La Sapienza

University College London

Sysbrain

Polish Academy of Sciences

IBM Watson Research Centre

Start Date: 01 October 2010

End Date: 30 September 2015



Grant Description

Summary and Description of the grant

Fully autonomous systems are here. In the past 50 years we have moved rapidly from controlled systems, where the operator has full control over the actions of the system (e.g., a digger), to supervised systems that follow human instructions (e.g., automated sewing machines), to automatic systems performing a series of sophisticated operations without human control (e.g., today's robotic car assembly lines), to autonomous systems. Autonomous systems (AS) are highly adaptive systems that sense the environment and learn to make decisions about their actions, displaying a high degree of pro-activeness in pursuing a given objective. They are autonomous in the sense that they do not need the presence of a human to operate, although they may communicate, cooperate, and negotiate with humans or fellow autonomous systems to reach their goals.

The objective of this fellowship is to develop the scientific and engineering underpinnings needed for autonomous systems to become part of our everyday lives. There is a clear benefit for society if repetitive or dangerous tasks are performed by machines. Yet there is a perceived resistance in the media and the public at large to increasingly sophisticated technology assisting key aspects of our lives. These concerns are justified. Most people have first-hand experience of software and automatic devices not performing as they should; why should they be willing to delegate crucial aspects of their needs to them?

An influential report published by the Royal Academy of Engineering in August 2009, and widely discussed in the media, argues that there is a real danger these technologies will not be put into use unless urgent questions about their legal, ethical, social, and regulatory implications are addressed. For instance, the report highlights the issue of liability in collisions between autonomous driverless cars. Who should be held responsible? The passenger? The software? The owner? The maker of the vehicle?

Quite clearly, society and the government need to engage in a mature debate on several of these issues. However, the report identifies an even more fundamental point: "Who will be responsible for certification of autonomous systems?" [Royal Academy of Engineering, Autonomous Systems, August 2009, page 4]. While there are complex regulatory aspects to this question, its underlying scientific implication is that we, as computer scientists and engineers, urgently need to offer society techniques to verify and certify that autonomous systems behave as they are intended to. To achieve this, four research objectives are identified:

1) The formulation of logic-based languages for the principled specification of AS, including key properties such as fault-tolerance, diagnosability, and resilience.

2) The development of efficient model checking techniques, including AS-based abstraction and parametric and parallel model checking, for the verification of AS.

3) The construction and open-source release of a state-of-the-art model checker for autonomous systems, to be used for use-case certifications.

4) The validation of these techniques in three key areas of immediate and mid-term societal importance: autonomous vehicles, services, and e-health.

This fellowship intends to pursue novel techniques in computational logic to answer the technical challenges above. Success in these areas will open the way for the verification of AS, and thereby to their certification for mainstream use in society.
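To give a sense of what the model-checking objective involves, the sketch below shows the core idea in miniature: a model checker exhaustively explores the state space of a system model and either proves that a property holds in every reachable state or returns a concrete counterexample trace. This is a minimal, hypothetical illustration in Python; the state names and transition relation model nothing from the grant itself and are invented purely for the example, and real AS model checkers use far richer logics and symbolic representations.

```python
from collections import deque

def check_safety(initial, transitions, is_bad):
    """Explicit-state safety check via breadth-first search.

    Explores every state reachable from `initial` through `transitions`
    (a dict mapping state -> list of successor states). Returns
    (True, None) if no state satisfying `is_bad` is reachable, or
    (False, path) where `path` is a counterexample trace to a bad state.
    """
    queue = deque([(initial, [initial])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if is_bad(state):
            return False, path  # safety violated: path is the counterexample
        for nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return True, None  # every reachable state is safe

# Toy model of a (hypothetical) autonomous controller's modes.
model = {
    "idle": ["sensing"],
    "sensing": ["planning", "idle"],
    "planning": ["moving"],
    "moving": ["sensing", "emergency_stop"],
    "emergency_stop": ["idle"],
}
holds, trace = check_safety("idle", model, lambda s: s == "collision")
# "collision" is unreachable in this toy model, so the property holds
```

When the property fails, the counterexample trace is what makes model checking useful for certification arguments: it is a concrete, replayable execution demonstrating exactly how the system can misbehave.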

Structured Data / Microdata

Grant Event Details:
Name: Trusted Autonomous Systems - EP/I00520X/1
Start Date: 2010-10-01T00:00:00+00:00
End Date: 2015-09-30T00:00:00+00:00

Organization: Imperial College London

Description: Fully autonomous systems are here. In the past 50 years we have quickly moved from controlled systems, where the operator has full control on the actions of the system (e.g., a digger), to supervised systems that follow human instructions (e.g., automated ...