Welcome to the online hub for the book Robotics, Vision & Control (3rd edition).
Report an issue with the book or its supporting code here. Known errata for the book can be viewed here.
This book uses many examples based on the following open-source Python packages:
- Robotics Toolbox for Python
- Machine Vision Toolbox for Python
- Spatial Maths Toolbox for Python
- Block Diagram Simulation for Python
These in turn have dependencies on other packages created by the author and third parties.
This package provides a simple one-step installation of all the required Toolboxes:
pip install rvc3python
or
conda install rvc3python
There are a lot of dependencies and this might take a minute or so. You now have a very powerful computing environment for robotics and computer vision.
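As a quick sanity check (optional, and not from the book), you can confirm that the main Toolboxes import cleanly and report their versions:

import roboticstoolbox
import machinevisiontoolbox
import spatialmath
import bdsim

for pkg in (roboticstoolbox, machinevisiontoolbox, spatialmath, bdsim):
    print(pkg.__name__, getattr(pkg, "__version__", "unknown"))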
Given the rapid rate of language additions, particularly around type hinting, use at least Python 3.8. Python 3.7 goes end of life in June 2023.
Not all package dependencies will work with the latest release of Python. In particular, check:
- PyTorch, used for segmentation examples in Chapter 12
- Open3D, used for point cloud examples in Chapter 14
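Since not every dependency tracks the newest Python release, it is worth confirming which interpreter your environment uses. A minimal check (not from the book) is:

import sys
print(sys.version)                  # the examples expect Python 3.8 or newer
assert sys.version_info >= (3, 8)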
It's probably a good idea to create a virtual environment to keep this package and its dependencies separated from your other Python code and projects. If you've never used virtual environments before this might be a good time to start, and it is really easy using Conda:
conda create -n RVC3 python=3.10
conda activate RVC3
pip install rvc3python
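If you prefer not to use Conda, Python's built-in venv module works as well (a sketch, not from the book; the environment name RVC3 is arbitrary):

python -m venv RVC3
source RVC3/bin/activate    # on Windows: RVC3\Scripts\activate
pip install rvc3python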
Chapter 11 has some deep learning examples based on PyTorch. If you don't have PyTorch installed you can use the pytorch install option:
pip install rvc3python[pytorch]
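Afterwards you can confirm that PyTorch is importable (an optional check, not from the book):

import torch
print(torch.__version__, "CUDA available:", torch.cuda.is_available())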
The simplest way to get going is to use the command line tool:
$ rvctool
____ _ _ _ __ ___ _ ___ ____ _ _ _____
| _ \ ___ | |__ ___ | |_(_) ___ ___ \ \ / (_)___(_) ___ _ __ ( _ ) / ___|___ _ __ | |_ _ __ ___ | | |___ /
| |_) / _ \| '_ \ / _ \| __| |/ __/ __| \ \ / /| / __| |/ _ \| '_ \ / _ \/\ | | / _ \| '_ \| __| '__/ _ \| | |_ \
| _ < (_) | |_) | (_) | |_| | (__\__ \_ \ V / | \__ \ | (_) | | | | | (_> < | |__| (_) | | | | |_| | | (_) | | ___) |
|_| \_\___/|_.__/ \___/ \__|_|\___|___( ) \_/ |_|___/_|\___/|_| |_| \___/\/ \____\___/|_| |_|\__|_| \___/|_| |____/
|/
for Python (RTB==1.1.0, MVTB==0.9.5, SG==1.1.7, SMTB==1.1.7, NumPy==1.24.2, SciPy==1.10.1, Matplotlib==3.7.1)
import math
import numpy as np
from scipy import linalg, optimize
import matplotlib.pyplot as plt
from spatialmath import *
from spatialmath.base import *
from spatialmath.base import sym
from spatialgeometry import *
from roboticstoolbox import *
from machinevisiontoolbox import *
import machinevisiontoolbox.base as mvb
# useful variables
from math import pi
puma = models.DH.Puma560()
panda = models.DH.Panda()
func/object? - show brief help
help(func/object) - show detailed help
func/object?? - show source code
Results of assignments will be displayed, use trailing ; to suppress
Python 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:24:27) [Clang 14.0.6 ]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.11.0 -- An enhanced Interactive Python. Type '?' for help.
>>>
This provides an interactive Python (IPython) session with all the Toolboxes and supporting packages imported, and ready to go. It's a highly capable, convenient, and "MATLAB-like" workbench environment for robotics and computer vision.
For example, loading an ETS model of a Panda robot, solving a forward and an inverse kinematics problem, and launching an interactive graphical display is as simple as:
>>> panda = models.ETS.Panda()
ERobot: Panda (by Franka Emika), 7 joints (RRRRRRR)
┌─────┬───────┬───────┬────────┬─────────────────────────────────────────────┐
│link │ link │ joint │ parent │ ETS: parent to link │
├─────┼───────┼───────┼────────┼─────────────────────────────────────────────┤
│ 0 │ link0 │ 0 │ BASE │ tz(0.333) ⊕ Rz(q0) │
│ 1 │ link1 │ 1 │ link0 │ Rx(-90°) ⊕ Rz(q1) │
│ 2 │ link2 │ 2 │ link1 │ Rx(90°) ⊕ tz(0.316) ⊕ Rz(q2) │
│ 3 │ link3 │ 3 │ link2 │ tx(0.0825) ⊕ Rx(90°) ⊕ Rz(q3) │
│ 4 │ link4 │ 4 │ link3 │ tx(-0.0825) ⊕ Rx(-90°) ⊕ tz(0.384) ⊕ Rz(q4) │
│ 5 │ link5 │ 5 │ link4 │ Rx(90°) ⊕ Rz(q5) │
│ 6 │ link6 │ 6 │ link5 │ tx(0.088) ⊕ Rx(90°) ⊕ tz(0.107) ⊕ Rz(q6) │
│ 7 │ @ee │ │ link6 │ tz(0.103) ⊕ Rz(-45°) │
└─────┴───────┴───────┴────────┴─────────────────────────────────────────────┘
┌─────┬─────┬────────┬─────┬───────┬─────┬───────┬──────┐
│name │ q0 │ q1 │ q2 │ q3 │ q4 │ q5 │ q6 │
├─────┼─────┼────────┼─────┼───────┼─────┼───────┼──────┤
│ qr │ 0° │ -17.2° │ 0° │ -126° │ 0° │ 115° │ 45° │
│ qz │ 0° │ 0° │ 0° │ 0° │ 0° │ 0° │ 0° │
└─────┴─────┴────────┴─────┴───────┴─────┴───────┴──────┘
>>> panda.fkine(panda.qz)
0.7071 0.7071 0 0.088
0.7071 -0.7071 0 0
0 0 -1 0.823
0 0 0 1
>>> panda.ikine_LM(SE3.Trans(0.4, 0.5, 0.2) * SE3.Ry(pi/2))
IKSolution(q=array([ -1.849, -2.576, -2.914, 1.22, -1.587, 2.056, -1.013]), success=True, iterations=13, searches=1, residual=3.3549072615799585e-10, reason='Success')
>>> panda.teach(panda.qz)
Computer vision is just as easy. For example, we can import an image, blur it, and display it alongside the original:
>>> mona = Image.Read("monalisa.png")
>>> Image.Hstack([mona, mona.smooth(sigma=5)]).disp()
or load two images of the same scene, compute SIFT features, and display putative matches:
>>> sf1 = Image.Read("eiffel-1.png", mono=True).SIFT()
>>> sf2 = Image.Read("eiffel-2.png", mono=True).SIFT()
>>> matches = sf1.match(sf2)
>>> matches.subset(100).plot("w")
rvctool is a wrapper around IPython where:
- robotics and vision functions and classes can be accessed without needing package prefixes
- results are displayed by default, as in MATLAB, and as in MATLAB you need to put a semicolon at the end of the line to prevent this
- the prompt is the standard Python REPL prompt >>> rather than the IPython prompt (this can be overridden by a command-line switch), which allows lines from the book to be cut and pasted in; prompt characters are ignored
The Robotics, Vision & Control book uses rvctool
for all the included
examples.
rvctool
imports the all the above mentioned packages using import *
which is
not considered best Python practice. It is very convenient for interactive
experimentation, but in your own code you can handle the imports as you see
fit.
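For instance, a stand-alone version of the earlier Panda example with explicit imports might look like this (a sketch, not from the book, using only names shown above):

from math import pi
from roboticstoolbox import models
from spatialmath import SE3

panda = models.ETS.Panda()                      # same model as in the rvctool example
T = SE3.Trans(0.4, 0.5, 0.2) * SE3.Ry(pi / 2)   # desired end-effector pose
sol = panda.ikine_LM(T)                         # numerical inverse kinematics
print(sol.q)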
IPython is very forgiving when it comes to cutting and pasting in blocks of Python code: it will strip off the >>> prompt characters and ignore indentation. The normal Python REPL is not so forgiving. IPython also maintains a command history and allows command editing.
You can write very simple scripts. For example, test.py is:
T = puma.fkine(puma.qn)    # forward kinematics at the "nominal" joint configuration
sol = puma.ikine_LM(T)     # numerical inverse kinematics
sol.q                      # display the joint configuration found
puma.plot(sol.q);          # show the robot at that configuration
then
$ rvctool test.py
0 0 1 0.5963
0 1 0 -0.1501
-1 0 0 0.6575
0 0 0 1
IKSolution(q=array([7.235e-08, -0.8335, 0.09396, 3.142, 0.8312, -3.142]), success=True, iterations=15, searches=1, residual=1.406125546650288e-07, reason='Success')
array([7.235e-08, -0.8335, 0.09396, 3.142, 0.8312, -3.142])
PyPlot3D backend, t = 0.05, scene:
robot: Text(0.0, 0.0, 'Puma 560')
>>>
and you are dropped into an IPython session after the script has run.
The examples can also be run as Jupyter notebooks, either locally or in hosted environments such as Google Colab; check out the wiki page.
Graphics and animations are problematic in these environments: some things work well, some don't. As much as possible I've tweaked the Jupyter notebooks to work as well as they can in these environments.
For local use the Jupyter plugin for Visual Studio Code is pretty decent. Colab suffers from old versions of major packages (though they are getting better at keeping up to date), and animations can suffer from slow updates over the network.
Additional command line tools available (from the Robotics Toolbox) include:
- eigdemo, an animation showing the linear transformation of a rotating unit vector, which demonstrates eigenvalues and eigenvectors.
- tripleangledemo, a Swift visualization that lets you experiment with various triple-angle sequences.
- twistdemo, a Swift visualization that lets you experiment with 3D twists. The screw axis is the blue rod and you can position and orient it using the sliders, and adjust its pitch. Then apply a rotation about the screw using the bottom slider.
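Each of these is installed as a console script, so (assuming the Toolbox scripts directory is on your PATH) you launch one simply by typing its name, for example:

$ tripleangledemo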
Block diagram models are key to the pedagogy of the RVC3 book and 25 models are included. To simulate these models we use the Python package bdsim, which can run models:
- written in Python using bdsim blocks and wiring (a minimal sketch is shown below).
- created graphically using bdedit and saved as a .bd (JSON format) file.
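For the first style, a minimal stand-alone model might look like the following (a hedged sketch based on the bdsim package, not one of the 25 book models):

import bdsim

sim = bdsim.BDSim()         # create the simulator
bd = sim.blockdiagram()     # create an empty block diagram

# define the blocks: a unit step, a gain, and a scope to plot the result
step = bd.STEP(T=1)
gain = bd.GAIN(2)
scope = bd.SCOPE()

# wire them together: step -> gain -> scope
bd.connect(step, gain)
bd.connect(gain, scope)

bd.compile()                # check and prepare the diagram
sim.run(bd, 5)              # simulate for 5 seconds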
The models are included in the RVC3 package when it is installed, and rvctool adds them to the module search path. This means you can invoke them from rvctool with:
>>> %run -m vloop_test
If you want to directly access the folder containing the models, the command line tool bdsim_path will display the full path to where they have been installed in the Python package tree.
This GitHub repo provides additional resources for readers including:
- Jupyter notebooks containing all code lines from each chapter, see the notebooks folder.
- The code to produce every Python/Matplotlib (2D) figure in the book, see the figures folder.
- 3D point clouds from Chapter 14, and the code to create them, see the pointclouds folder.
- 3D figures from Chapters 2-3 and 7-9, and the code to create them, see the 3dfigures folder.
- All example scripts, see the examples folder.
- To run the visual odometry example in Sect. 14.8.3 you need to download two image sequences, each over 100 MB; see the instructions here.
To get this material you must clone the repo:
git clone https://github.com/petercorke/RVC3-python.git
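After cloning, one way to browse the chapter notebooks locally (not from the book, and assuming JupyterLab is installed and the notebooks folder sits at the top of the repo) is:

$ cd RVC3-python/notebooks
$ jupyter lab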