Technical Glossary
A comprehensive glossary of robotics and AI terms used throughout the textbook.
A
Action Space The set of all actions an agent can take in a reinforcement learning environment; it may be discrete (e.g., gripper open/close) or continuous (e.g., joint torques).
Actuator A component that converts electrical, hydraulic, or pneumatic energy into mechanical motion. Examples: DC motors, servo motors, linear actuators.
ADAS (Advanced Driver-Assistance Systems) Vehicle technologies that assist the driver with tasks such as adaptive cruise control, lane keeping, and automatic emergency braking.
B
Behavior Cloning An imitation learning technique that trains a policy with supervised learning on state-action pairs recorded from expert demonstrations.
Bipedal Locomotion Walking on two legs. For robots this requires continuous balance control, because the support area under the feet is small and the body is inherently unstable.
C
CLIP (Contrastive Language-Image Pre-training) A vision-language model trained to align images and their text descriptions in a shared embedding space, enabling zero-shot image classification from text prompts.
CUDA (Compute Unified Device Architecture) NVIDIA's parallel computing platform and programming model for running general-purpose computations on GPUs.
D
DDS (Data Distribution Service) The middleware protocol used by ROS 2 for real-time, scalable, and reliable communication between nodes.
DOF (Degrees of Freedom) The number of independent parameters that define a mechanism's configuration. A free rigid body has six (three translations, three rotations); a typical industrial arm has six or seven joints.
Domain Randomization A sim-to-real transfer technique that randomizes simulation parameters (textures, lighting, masses, friction) during training so the learned policy generalizes to the real world.
E
Embodied AI Artificial intelligence systems that interact with the physical world through sensors and actuators, as opposed to disembodied AI that operates purely in digital spaces.
End-Effector The device at the end of a robot arm that interacts with the environment, such as a gripper, suction cup, or tool.
F
Forward Kinematics Computing the pose (position and orientation) of the end-effector from the robot's joint angles.
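To make the idea concrete, here is a minimal illustrative sketch (not from the textbook) of forward kinematics for a 2-link planar arm; the function name and link-length parameters `l1`, `l2` are assumptions for this example:

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) of a 2-link planar arm from joint angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm lies along the x-axis, so the end-effector sits at (l1 + l2, 0).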
G
Gazebo An open-source 3D robotics simulator with physics engines, sensor simulation, and ROS integration.
GPU (Graphics Processing Unit) A processor with thousands of parallel cores, originally built for rendering graphics; in robotics and AI it accelerates neural network training and inference, perception, and physics simulation.
H
Humanoid Robot A robot whose body plan resembles the human form, typically a torso, head, two arms, and two legs, designed to operate in environments built for people.
I
IMU (Inertial Measurement Unit) A sensor that measures acceleration, angular velocity, and sometimes magnetic field, used for estimating orientation and motion.
Inverse Kinematics Computing the joint angles that place the end-effector at a desired pose. Unlike forward kinematics, the problem may have zero, one, or many solutions.
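As an illustration (not from the textbook), the 2-link planar arm admits a closed-form inverse kinematics solution; the function name and parameters here are assumptions for this sketch:

```python
import math

def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    """One joint-angle solution reaching (x, y); raises ValueError if unreachable."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # one elbow configuration; -theta2 gives the other
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Note the two elbow configurations: acos returns one solution, and negating theta2 yields the mirrored one.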
Isaac Gym NVIDIA's GPU-accelerated reinforcement learning platform, which simulates thousands of environments in parallel on a single GPU.
Isaac Sim NVIDIA's photorealistic robot simulator built on the Omniverse platform, used for synthetic data generation, testing, and sim-to-real workflows.
J
Jacobian The matrix of partial derivatives that maps joint velocities to end-effector velocities; its (pseudo)inverse is central to velocity-based inverse kinematics.
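As a hypothetical example (not from the textbook), the Jacobian of a 2-link planar arm can be written out analytically; link lengths `l1`, `l2` and the function name are assumptions of this sketch:

```python
import math

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """2x2 Jacobian of a 2-link planar arm: maps (dtheta1, dtheta2) to (dx, dy)."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],   # dx/dtheta1, dx/dtheta2
            [ l1 * c1 + l2 * c12,  l2 * c12]]   # dy/dtheta1, dy/dtheta2
```

At the fully stretched configuration (both angles zero) the first row vanishes: no joint velocity can move the end-effector along the arm, which is exactly a kinematic singularity.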
Jetson NVIDIA's family of embedded computing modules that bring GPU-accelerated AI inference to robots and other edge devices.
K
KaTeX A fast math typesetting library for the web, used to render LaTeX equations in this textbook.
Kinematics The study of motion (position, velocity, acceleration) without regard to the forces that cause it.
L
LiDAR (Light Detection and Ranging) A sensor that uses laser pulses to measure distances, creating 3D point clouds of the environment.
LLM (Large Language Model) A neural network trained on massive text corpora to understand and generate language. In robotics, LLMs are used for task planning, instruction following, and human-robot interaction.
M
Manipulation The use of a robot's end-effector to grasp, move, and reorient objects in the environment.
Motion Planning Computing a collision-free path from a start configuration to a goal configuration, for example with sampling-based planners such as RRT or PRM.
N
Navigation Stack In ROS 2, the Nav2 collection of packages that provides localization, costmaps, path planning, and control for autonomous mobile robot navigation.
O
Observation Space The set of possible observations an agent receives from its environment in reinforcement learning, such as joint positions, velocities, or camera images.
Odometry Estimating a robot's position and heading over time from motion sensors such as wheel encoders or an IMU; errors accumulate without external correction.
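As a minimal sketch (not from the textbook), dead-reckoning odometry for a differential-drive robot integrates wheel travel into a pose estimate; the function name and parameters are assumptions of this example:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Update pose (x, y, theta) from left/right wheel travel distances."""
    d = (d_left + d_right) / 2.0              # distance moved by robot center
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    x += d * math.cos(theta + dtheta / 2.0)   # midpoint-heading approximation
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Because each update compounds sensor error, odometry estimates drift over time and are typically corrected by SLAM, GPS, or other absolute references.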
P
Perception-Action Loop The fundamental cycle in robotics: sense environment → process information → decide action → execute → sense again.
PID Controller A feedback controller whose output is the weighted sum of the tracking error, its integral, and its derivative (Proportional-Integral-Derivative).
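A minimal discrete-time PID sketch (illustrative, not from the textbook; class and method names are assumptions):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*(de/dt)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

With only the proportional gain set, the output is simply Kp times the error; the integral term removes steady-state error and the derivative term damps overshoot.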
Point Cloud A set of 3D points, typically produced by LiDAR or depth cameras, that represents surfaces in the environment.
PPO (Proximal Policy Optimization) A policy-gradient reinforcement learning algorithm that stabilizes training by clipping how far each update can move the policy; widely used for robot locomotion and manipulation.
Q
QoS (Quality of Service) ROS 2 policies (e.g., reliability, durability, history depth) that control how messages are delivered between publishers and subscribers.
R
Reinforcement Learning (RL) A machine learning paradigm in which an agent learns a policy by trial and error, choosing actions to maximize cumulative reward from its environment.
RGB-D Camera A camera that captures color images together with per-pixel depth, such as the Intel RealSense.
ROS (Robot Operating System) An open-source middleware framework for robot software development.
ROS 2 The second generation of ROS, with improved real-time capabilities, security, and DDS-based communication.
RViz The 3D visualization tool for ROS, used to display sensor data, robot models, coordinate transforms, and planning output.
S
SDF (Simulation Description Format) An XML format used by Gazebo to describe robots, objects, and entire simulation worlds, including physics and sensor properties.
Sensor Fusion Combining data from multiple sensors (e.g., IMU, GPS, cameras) to produce estimates more accurate than any single sensor provides, often with Kalman or complementary filters.
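As a small illustration (not from the textbook), a complementary filter fuses a gyroscope (smooth but drifting) with an accelerometer (noisy but drift-free) to estimate a tilt angle; the function name and the blend factor `alpha` are assumptions of this sketch:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend integrated gyro rate with the accelerometer's absolute angle.

    angle:       previous fused estimate (rad)
    gyro_rate:   angular velocity from the gyro (rad/s)
    accel_angle: tilt angle derived from the accelerometer (rad)
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

The high-pass weight on the gyro path suppresses accelerometer noise, while the small accelerometer weight slowly pulls the estimate back, canceling gyro drift.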
Servo Motor A motor with built-in position feedback, allowing precise closed-loop control of shaft angle.
SLAM (Simultaneous Localization and Mapping) Building a map of an unknown environment while simultaneously estimating the robot's pose within that map.
T
TensorRT NVIDIA's SDK for optimizing trained neural networks for fast inference on NVIDIA GPUs, for example through layer fusion and reduced-precision arithmetic.
Trajectory A path parameterized by time, specifying position (and typically velocity and acceleration) at every instant.
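To illustrate the difference between a path and a trajectory (an example of mine, not from the textbook), a cubic polynomial gives a smooth time-parameterized motion between two joint positions with zero start and end velocity; the function name is an assumption of this sketch:

```python
def cubic_trajectory(q0, qf, T):
    """Cubic trajectory from q0 to qf over T seconds, zero boundary velocities.

    Returns two functions: q(t) for position and qdot(t) for velocity.
    """
    a2 = 3.0 * (qf - q0) / T**2   # coefficients chosen so q(0)=q0, q(T)=qf,
    a3 = -2.0 * (qf - q0) / T**3  # qdot(0)=0, qdot(T)=0
    def q(t):
        return q0 + a2 * t**2 + a3 * t**3
    def qdot(t):
        return 2.0 * a2 * t + 3.0 * a3 * t**2
    return q, qdot
```

Sampling q(t) at the controller rate yields the joint setpoints a robot actually tracks, which is what distinguishes a trajectory from a purely geometric path.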
U
URDF (Unified Robot Description Format) An XML-based format used in ROS to describe robot kinematics, dynamics, and visual properties.
V
VLA (Vision-Language-Action) Models that integrate computer vision, natural language understanding, and robotic control to enable robots to perform tasks specified in natural language.
ViT (Vision Transformer) A neural network that applies the transformer architecture to sequences of image patches rather than words, achieving strong results on vision tasks.
W
Whole-Body Control A control approach that coordinates all degrees of freedom of a humanoid simultaneously, resolving prioritized tasks such as balance, manipulation, and posture in a single optimization.
X
Xacro (XML Macros) An XML macro language that makes URDF files modular and parameterizable through reusable macros, properties, and math expressions.
Y
YOLO (You Only Look Once) A family of real-time object detectors that predict bounding boxes and class probabilities in a single forward pass over the image.
Z
Zero-Shot Learning The ability of a model to handle classes or tasks not seen during training, typically by leveraging auxiliary information such as text descriptions.