
AI and robotics

Definition

AI in robotics covers perception (vision, touch), planning (motion, task), and control (actuation). Reinforcement learning and imitation learning train policies from data; sim-to-real transfer is a key challenge.

Perception often relies on computer vision and sometimes on multimodal models. Control policies are trained in simulation (deep reinforcement learning, DRL) or from human demonstrations; deploying them to real hardware requires dealing with dynamics mismatch (sim-to-real), safety, and latency.
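One common way to narrow the sim-to-real gap mentioned above is domain randomization: physical parameters are resampled every training episode so the policy cannot overfit to one simulated dynamics model. A minimal sketch, with illustrative parameter names and ranges (not from any specific simulator):

```python
import random

def randomized_sim_params(rng):
    """Sample dynamics parameters for one training episode.

    Ranges are illustrative; in practice they are tuned around
    measured nominal values of the real robot.
    """
    return {
        "mass": rng.uniform(0.8, 1.2),          # +/- 20% around nominal
        "friction": rng.uniform(0.5, 1.5),
        "sensor_latency_ms": rng.uniform(0.0, 30.0),
    }

rng = random.Random(0)
# One fresh set of dynamics per training episode.
episode_params = [randomized_sim_params(rng) for _ in range(3)]
for p in episode_params:
    print(p)
```

A policy trained across many such perturbed simulations tends to be robust to the (unknown) parameters of the real system.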

How it works

Sensors (cameras, force/torque, proprioception) feed perception models that estimate state (e.g., object poses, scene layout). Planners (classical or learned) produce trajectories or high-level actions (e.g., "pick block A"). Controllers (e.g., PID, learned policies) execute low-level commands (joint torques, velocities) to track the plan.

End-to-end learning maps raw sensor input to actions in a single network; modular pipelines separate perception, planning, and control for interpretability and reuse. Training is often done in simulation (DRL); sim-to-real techniques (domain randomization, system identification) and safety constraints are critical for deployment.
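The last stage of the pipeline above, tracking a planned setpoint with a PID controller, can be sketched as follows. The gains and the one-dimensional toy dynamics are illustrative, not tuned for any real robot:

```python
class PID:
    """Proportional-integral-derivative controller for one joint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy 1-D joint: velocity is proportional to the commanded torque.
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
position = 0.0
for _ in range(2000):
    command = pid.step(target=1.0, measured=position)
    position += command * 0.01  # simplistic dynamics, illustration only

print(position)  # converges toward the planned target of 1.0
```

In a modular pipeline, the planner would update `target` over time (a trajectory) while the controller runs at a much higher rate than perception and planning.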

Use cases

AI robotics applies when perception, planning, or control are learned from data (manipulation, navigation, sim-to-real).

  • Manipulation and grasping (e.g., pick-and-place, assembly)
  • Navigation and autonomous driving
  • Sim-to-real and imitation learning for policy training
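The simplest form of imitation learning listed above is behavior cloning: fitting a policy to (state, action) pairs recorded from demonstrations. A minimal sketch on synthetic data, with a linear policy standing in for the neural network a real system would use (the expert and data here are invented for illustration):

```python
import random

rng = random.Random(42)

# Synthetic demonstrations: the "expert" applies action = 0.5 * state,
# plus a little noise, over 200 recorded states.
states = [rng.uniform(-1.0, 1.0) for _ in range(200)]
actions = [0.5 * s + rng.gauss(0.0, 0.01) for s in states]

# Behavior cloning as least-squares regression of action = w * state
# (closed form for a 1-D linear policy with no intercept).
w = sum(s * a for s, a in zip(states, actions)) / sum(s * s for s in states)

print(w)  # recovers a gain close to the expert's 0.5
```

The cloned policy only sees states the expert visited; handling the resulting distribution shift at deployment is one reason methods like DAgger or RL fine-tuning are layered on top.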

External documentation

See also