Important Note:
This information is now rather out of date. My research notes are now kept on a wiki, a static version of which is located here.
Software Architectures for Embedded Systems
Background
The embedded systems that we are interested in are those that must react to their environment, and as such are almost synonymous with robots. To gain background knowledge, it is useful to look at current robot programming methods.
- Biggs, G., MacDonald, B. A Survey of Robot Programming Systems in Proceedings of the Australasian Conference on Robotics and Automation, CSIRO, Brisbane, Australia, December 2003.
- My slides on the Biggs-MacDonald paper: survey.pdf
- Manual: includes text based and graphical languages. Text based languages include controller specific (assembly-like) languages, generic procedural languages, and behavior-based languages. Each of the robot's actions has to be explicitly described by the programmer.
- Automatic: Spans the range from simplistic demonstration systems to complex machine learning. Robot behavior is not specifically described initially, but emerges from training and learning.
Overview of Problem
The increasing complexity of embedded systems and the need for them to deal with uncertain information from sensors and failures have exposed the shortcomings of manual approaches to programming. There are simply too many possible system interactions for the programmer to enumerate, and too many unpredictable failure modes for the programmer to provide contingency plans for.
Automatic methods, on the other hand, are either too resource intensive or not robust enough to handle novel situations, such as an unpredicted sensor failure.
The ideal solution would be one that allows a programmer to specify system behavior without needing to reason about how to determine state or recognize component failures. State can be determined probabilistically from sensor input (which may be noisy or even faulty), and component failures should be automatically detected and, where possible, automatically recovered from. Furthermore, all of this must be done within the tight time and resource constraints common to embedded systems.
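To make the idea of determining state probabilistically from noisy sensors concrete, here is a minimal sketch of a discrete Bayesian update. The valve scenario, the prior, and the sensor probabilities are all invented for illustration; they are not taken from any of the systems discussed here.

```python
# Hypothetical example: estimating whether a valve is "open" or "closed"
# from a single noisy flow-sensor reading, via a discrete Bayes update.

def bayes_update(prior, likelihood):
    """Return the posterior P(state | observation) over a dict of states."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Prior belief: we commanded the valve open, but it may have failed closed.
prior = {"open": 0.9, "closed": 0.1}

# Sensor model: P(observe "flow" | state). The sensor is imperfect.
flow_likelihood = {"open": 0.8, "closed": 0.05}

# After observing flow, belief in "open" sharpens without the programmer
# ever writing an explicit rule for this sensor/failure combination.
posterior = bayes_update(prior, flow_likelihood)
```

The point is that the programmer supplies only the models (prior behavior and sensor characteristics); the state estimate itself is deduced, which is exactly the division of labor the model-based approach below aims for.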
Model Based Programming
A possible solution is being investigated by the Model-based Embedded & Robotic Systems (MERS) group at MIT.
Model Based Programming allows a programmer to describe the behavior of a system as a progression of desired states. A "plant model" describes the system in terms of possible states and transitions (including failure states), as well as observable variables and control variables. State is estimated using the observable variables (such as sensors), and transitions are initiated by control variables (such as command signals). A generic deductive controller is responsible for moving the system to the state specified by the programmer.
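The division between plant model and deductive controller can be sketched in a few lines. This is an assumed toy structure, not the MERS group's actual modeling language: the plant model is just a transition table keyed by command values, and the "deductive controller" is a breadth-first search for the first command on a path to the goal state.

```python
from collections import deque

# Toy plant model (invented for illustration): states, and transitions
# labelled by control-variable values. "failed" is a failure state with
# no commanded way out.
plant_model = {
    "states": ["off", "on", "failed"],
    # transitions[state][command] -> next state
    "transitions": {
        "off":    {"cmd_on": "on", "cmd_off": "off"},
        "on":     {"cmd_off": "off", "cmd_on": "on"},
        "failed": {},
    },
}

def deduce_command(model, current, goal):
    """Greatly simplified deductive controller: breadth-first search for
    the first command on a path from the current state to the goal."""
    frontier = deque([(current, None)])
    seen = {current}
    while frontier:
        state, first_cmd = frontier.popleft()
        if state == goal:
            return first_cmd
        for cmd, nxt in model["transitions"][state].items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, first_cmd or cmd))
    return None  # goal unreachable, e.g. from the failure state
```

Given this, `deduce_command(plant_model, "off", "on")` yields `"cmd_on"`, while asking for any goal from `"failed"` yields `None`; the programmer only names the desired state, and the controller works out the command.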
Introduction to work by the MERS group:
- Williams, B.C.; Ingham, M.D.; Chung, S.H.; Elliott, P.H., "Model-based programming of intelligent embedded systems and robotic space explorers," Proceedings of the IEEE, vol. 91, no. 1, pp. 212-237, Jan 2003.
- Phil Kim, Brian C. Williams, and Mark Abramson, "Executing Reactive, Model-based Programs through Graph-based Temporal Planning," Proceedings of the International Joint Conference on Artificial Intelligence, Seattle, WA, 2001, pp. 487-493.
- My slides on the preceding papers: introduction.pdf
Real Application... In Space!
The MERS work is rooted in the Remote Agent / Livingstone experiments performed on the Deep Space 1 spacecraft (DS1). For a short period of time, DS1 was under the direct control of Remote Agent, which successfully planned and executed several actions, such as rotating the spacecraft and photographing targets. There were also several simulated component failures, which were resolved by the Livingstone mode identification and reconfiguration module.
- Description of the DS1 Remote Agent experiment:
- D. Bernard, G. Dorais et al., "Spacecraft Autonomy Flight Experience: The DS1 Remote Agent Experiment," AIAA 99-4512, 1999.
Titan
The software architecture proposed by the MERS group differs from the Remote Agent / Livingstone approach. The MERS architecture centers on the Titan Executive, which takes as input a Control Program and a Plant Model, deduces the correct course of action, and manipulates the system through a low-level interface to achieve the goal state.
- Overview of Titan:
- Lorraine Fesq, Mitch Ingham, Mike Pekala, John Van Eepoel, David Watson, and Brian C. Williams, "Model-Based Autonomy for the Next Generation of Robotic Spacecraft," Proceedings of the 53rd International Astronautical Congress, Houston, TX, October 2002, pp. 212-237. Paper # IAC-02-U.5.04.
- Slides provided by MERS: slides source site
- My general overview of Titan: Titan.pdf
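The execution cycle described above can be sketched as a loop. Everything here is an assumed stand-in: the control program is reduced to a list of goal states, `estimate_state` and `choose_command` are trivial placeholders for Titan's mode-estimation and deductive-control components, and `observe`/`send` stand for the low-level plant interface.

```python
# Invented toy plant model: transitions[state][command] -> next state.
TRANSITIONS = {
    "standby": {"power_on": "active"},
    "active":  {"power_off": "standby"},
}

def estimate_state(observation):
    # Stand-in for mode estimation: assume the observation names the state.
    return observation

def choose_command(current, goal):
    # Stand-in for deduction: pick any command whose transition hits the goal.
    for cmd, nxt in TRANSITIONS[current].items():
        if nxt == goal:
            return cmd
    return None

def run_executive(control_program, observe, send):
    """Drive the plant through each goal state in the control program."""
    for goal in control_program:
        while estimate_state(observe()) != goal:
            cmd = choose_command(estimate_state(observe()), goal)
            if cmd is None:
                raise RuntimeError(f"cannot reach {goal}")
            send(cmd)  # actuate through the low-level interface
```

Note what the control program does not contain: no sensor handling, no command selection, only the sequence of desired states. That separation is the level-of-abstraction question the examples below try to pin down.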
Basic Examples
To get a more concrete sense of how the Titan architecture works, I made a presentation that explores several simple examples. The main intention was to gain a firm understanding of the level of abstraction at which each component is written. For example, what are a plant's control variables, and how do we decide whether something belongs in the control program or in the plant model? And what should go into a Plant Model, and how does one go about writing it?
- Examples (from "Model-Based Autonomy for the Next Generation of Robotic Spacecraft")
- My slides: example.pdf