1. Introduction
1.1. Definitions
1.2. Specifications
1.4. Control architecture
1.4. Development and topics
1.5. Domains
1.5.1. Underwater robots
1.6. Bibliography
2. Sensing
2.1. Introduction
2.2. Characterizing
2.3. Classification
2.3.1. Wheel/motor sensors
2.3.2. Motion sensors
2.3.3. Heading sensors
2.3.4. INS
2.3.5. Beacons
2.3.6. Ranging
2.3.7. Vision-based sensors
2.4. Bibliography
3. Modelling
3.1. Introduction
3.2. Kinematics Models
3.3. Dynamics Models
3.3.1. AUV Model
3.3.2. ALV Model
3.4. Identification
3.4.1. Example: URIS Identification
3.4.2. Example: Pioneer Identification
3.5. Applications
3.5.1. Control
3.5.2. Simulation
4. Localization
4.1. Probability review
4.2. Estimation
4.2.1. Maximum Likelihood
4.2.2. Maximum a Posteriori
4.2.3. Minimum Mean Squared Error
4.2.4. Recursive Bayesian
4.2.5. Least Squares
4.2.6. Kalman Filter
4.2.7. Extended Kalman Filter
4.3. EKF based SLAM
4.3.1. The SLAM problem
4.3.2. Initialization
4.3.3. Vehicle Motion: the EKF prediction step
4.3.4. Data Association
4.3.5. Map update: the EKF estimation step
4.3.6. Adding Newly Observed Features
4.3.7. Consistency of the EKF-SLAM
4.4. Data Association
4.4.1. Continuous SLAM
4.4.2. Relocation
4.4.3. Geometric Constraints
4.4.4. Locality
5. Control Architectures
5.1. Definitions
5.2. Classification
5.3. Reactive Control
5.3.1. Definitions
5.3.2. Principles
5.3.3. Design Methodology
5.3.4. Expression of Behaviours
5.3.5. Behavioural Encoding
5.3.6. Coordination
5.3.7. Case Studies
6. Learning
6.1. Introduction
6.2. Evolutionary robotics
6.3. Reinforcement Learning
6.3.1. Reinforcement learning problem
6.3.2. Methodologies for solving the RLP
6.3.3. Q-learning
6.3.4. Application to robotics
6.4. Bibliography