diff --git a/docs/api_array.rst b/docs/api_array.rst
index e081934e..12778ae2 100644
--- a/docs/api_array.rst
+++ b/docs/api_array.rst
@@ -30,28 +30,24 @@ PiRGBArray
 ==========
 
 .. autoclass:: PiRGBArray
-    :no-members:
 
 
 PiYUVArray
 ==========
 
 .. autoclass:: PiYUVArray
-    :no-members:
 
 
 PiBayerArray
 ============
 
 .. autoclass:: PiBayerArray
-    :no-members:
 
 
 PiMotionArray
 =============
 
 .. autoclass:: PiMotionArray
-    :no-members:
 
 
 PiAnalysisOutput
@@ -64,21 +60,18 @@ PiRGBAnalysis
 =============
 
 .. autoclass:: PiRGBAnalysis
-    :no-members:
 
 
 PiYUVAnalysis
 =============
 
 .. autoclass:: PiYUVAnalysis
-    :no-members:
 
 
 PiMotionAnalysis
 ================
 
 .. autoclass:: PiMotionAnalysis
-    :no-members:
 
 
 PiArrayTransform
diff --git a/docs/api_mmalobj.rst b/docs/api_mmalobj.rst
index 648593e1..7c739241 100644
--- a/docs/api_mmalobj.rst
+++ b/docs/api_mmalobj.rst
@@ -258,7 +258,7 @@ three output ports:
   resolution images efficiently.
 
 Generally, you don't need to worry about these differences. The ``mmalobj``
-layer knows about them and auto-negotiates the most efficient format it can for
+layer knows about them and negotiates the most efficient format it can for
 connections. However, they're worth bearing in mind if you're aiming to get the
 most out of the firmware or if you're confused about why a particular format
 has been selected for a connection.
@@ -347,7 +347,7 @@ activate the output port:
     >>> camera.outputs[2].enable()
 
 Unfortunately, that didn't seem to do much! An output port that is
-participating in a connection needs nothing more: it knows where it's data is
+participating in a connection needs nothing more: it knows where its data is
 going. However, an output port *without* a connection requires a callback
 function to be assigned so that something can be done with the buffers of data
 it produces.
@@ -634,6 +634,7 @@ and the renderer:
 
     >>> transform.connection.enable()
     >>> preview.connection.enable()
+    >>> transform.enable()
 
 At this point we should take a look at the pipeline to see what's been
 configured automatically:
@@ -653,16 +654,16 @@ alluded to above:
 RGB is a very large format (compared to I420 which is half its size, or OPQV
 which is tiny) so we're shuttling a *lot* of data around here. Expect this to
 drop frames at higher resolutions or framerates.
 
-The other part of inefficiency isn't obvious from the debug output above which
-gives the impression that the "py.transform" component is actually part of the
-MMAL pipeline. In fact, this is a lie. Under the covers ``mmalobj`` installs an
-output callback on the camera's output port to feed data to the "py.transform"
-input port, uses a background thread to run the transform, then copies the
-results into buffers obtained from the preview's input port. In other words
-there's really *two* (very short) MMAL pipelines with a hunk of Python running
-in between them. If ``mmalobj`` does its job properly you shouldn't need to
-worry about this implementation detail but it's worth bearing in mind from the
-perspective of performance.
+The other source of inefficiency isn't obvious from the debug output above
+which gives the impression that the "py.transform" component is actually part
+of the MMAL pipeline. In fact, this is a lie. Under the covers ``mmalobj``
+installs an output callback on the camera's output port to feed data to the
+"py.transform" input port, uses a background thread to run the transform, then
+copies the results into buffers obtained from the preview's input port. In
+other words there's really *two* (very short) MMAL pipelines with a hunk of
+Python running in between them. If ``mmalobj`` does its job properly you
+shouldn't need to worry about this implementation detail but it's worth bearing
+in mind from the perspective of performance.
 
 Performance Hints
@@ -800,7 +801,8 @@ Connections
 
 .. autoclass:: MMALBaseConnection
 
-.. autoclass:: MMALConnection
+.. autoclass:: MMALConnection(source, target, formats=default_formats, callback=None)
+   :show-inheritance:
 
 
 Buffers
@@ -828,7 +830,8 @@ Python Extensions
    :show-inheritance:
    :private-members: _callback, _commit_port
 
-.. autoclass:: MMALPythonConnection
+.. autoclass:: MMALPythonConnection(source, target, formats=default_formats, callback=None)
+   :show-inheritance:
 
 .. autoclass:: MMALPythonSource
    :show-inheritance:
diff --git a/docs/recipes2.rst b/docs/recipes2.rst
index 5fa4c7e3..90a202d8 100644
--- a/docs/recipes2.rst
+++ b/docs/recipes2.rst
@@ -130,8 +130,8 @@ capture will be:
     \end{equation}
 
 The first 14336 bytes of the data (128*112) will be Y values, the next 3584
-bytes (128*112/4) will be U values, and the final 3584 bytes will be the V
-values.
+bytes (:math:`128 \times 112 \div 4`) will be U values, and the final 3584
+bytes will be the V values.
 
 The following code demonstrates capturing YUV image data, loading the data into
 a set of `numpy`_ arrays, and converting the data to RGB format in an efficient
@@ -410,9 +410,9 @@ long (before exhausting the disk cache).
 If you are intending to perform processing on the frames after capture, you
 may be better off just capturing video and decoding frames from the resulting
 file rather than dealing with individual JPEG captures. Thankfully this is
-relatively easy as the JPEG format has a well designed `magic number`_ (FF D8)
-which cannot appear anywhere else in the JPEG data. This means we can use a
-:ref:`custom output <custom_outputs>` to separate the frames out of an MJPEG
+relatively easy as the JPEG format has a well designed `magic number`_ (``FF
+D8``) which cannot appear anywhere else in the JPEG data. This means we can use
+a :ref:`custom output <custom_outputs>` to separate the frames out of an MJPEG
 video recording by inspecting the first two bytes of each buffer:
 
 .. literalinclude:: examples/rapid_capture_mjpeg.py
@@ -453,8 +453,8 @@ first - just set *use_video_port* to ``True`` in the
 
 .. literalinclude:: examples/rapid_streaming.py
 
-Using this technique, the author can manage about 19fps of streaming at 640x480
-on firmware #685. However, utilizing the MJPEG splitting demonstrated in
+Using this technique, the author can manage about 19fps of streaming at
+640x480. However, utilizing the MJPEG splitting demonstrated in
 :ref:`rapid_capture` we can manage much faster:
 
 .. literalinclude:: examples/rapid_streaming_mjpeg.py
@@ -560,15 +560,15 @@ a file-like object:
 Motion data is calculated at the `macro-block`_ level (an MPEG macro-block
 represents a 16x16 pixel region of the frame), and includes one extra column of
 data. Hence, if the camera's resolution is 640x480 (as in the example above)
-there will be 41 columns of motion data ((640 / 16) + 1), in 30 rows (480 /
-16).
+there will be 41 columns of motion data (:math:`(640 \div 16) + 1`), in 30 rows
+(:math:`480 \div 16`).
 
 Motion data values are 4-bytes long, consisting of a signed 1-byte x vector, a
 signed 1-byte y vector, and an unsigned 2-byte SAD (`Sum of Absolute
 Differences`_) value for each macro-block. Hence in the example above, each
-frame will generate 4920 bytes of motion data (41 * 30 * 4). Assuming the data
-contains 300 frames (in practice it may contain a few more) the motion data
-should be 1,476,000 bytes in total.
+frame will generate 4920 bytes of motion data (:math:`41 \times 30 \times 4`).
+Assuming the data contains 300 frames (in practice it may contain a few more)
+the motion data should be 1,476,000 bytes in total.
 
 The following code demonstrates loading the motion data into a
 three-dimensional numpy array. The first dimension represents the frame, with
diff --git a/picamera/array.py b/picamera/array.py
index be75c7c6..9794dc58 100644
--- a/picamera/array.py
+++ b/picamera/array.py
@@ -478,6 +478,14 @@ def flush(self):
         self.array = self._to_3d(self.array)
 
     def demosaic(self):
+        """
+        Perform a rudimentary `de-mosaic`_ of ``self.array``, returning the
+        result as a new array. The result of the demosaic is *always* three
+        dimensional, with the last dimension being the color planes (see
+        *output_dims* parameter on the constructor).
+
+        .. _de-mosaic: http://en.wikipedia.org/wiki/Demosaicing
+        """
         if self._demo is None:
             # Construct 3D representation of Bayer data (if necessary)
             if self.output_dims == 2:
diff --git a/picamera/mmalobj.py b/picamera/mmalobj.py
index abfb4cd1..02b68a00 100644
--- a/picamera/mmalobj.py
+++ b/picamera/mmalobj.py
@@ -888,11 +888,6 @@ def _set_format(self, value):
     def supported_formats(self):
         """
         Retrieves a sequence of supported encodings on this port.
-
-        .. warning::
-
-            On older firmwares, property does not work on the camera's still
-            port (``MMALCamera.outputs[2]``) due to an underlying bug.
         """
         try:
             mp = self.params[mmal.MMAL_PARAMETER_SUPPORTED_ENCODINGS]
@@ -1127,14 +1122,12 @@ def connect(self, other, **options):
         """
         Connect this port to the *other* :class:`MMALPort` (or
         :class:`MMALPythonPort`). The type and configuration of the connection
-        will be automatically selected. If *enable* is ``True`` (the default),
-        the connection will be implicitly enabled upon construction.
+        will be automatically selected.
 
-        Various connection options can be specified as keyword arguments. These
-        will be passed onto the :class:`MMALConnection` or
+        Various connection *options* can be specified as keyword arguments.
+        These will be passed onto the :class:`MMALConnection` or
         :class:`MMALPythonConnection` constructor that is called (see those
         classes for an explanation of the available options).
-
         """
         # Always construct connections from the output end
         if self.type != mmal.MMAL_PORT_TYPE_OUTPUT:
@@ -1353,15 +1346,13 @@ def callback(port, buf):
                 print(len(data))
 
     Alternatively you can use the :attr:`data` property directly, which returns
-    and modifies the buffer's data as a :class:`bytes` object. However, beware
-    that you must still use the buffer as a context manager if you wish to
-    lock the buffer's memory (generally required when dealing with VideoCore
-    buffers)::
+    and modifies the buffer's data as a :class:`bytes` object (note this is
+    generally slower than using the buffer object unless you are simply
+    replacing the entire buffer)::
 
         def callback(port, buf):
-            with buf:
-                # the buffer contents as a byte-string
-                print(buf.data)
+            # the buffer contents as a byte-string
+            print(buf.data)
     """
 
     __slots__ = ('_buf',)
@@ -1495,7 +1486,9 @@ def replicate(self, source):
         .. note::
 
             This is fundamentally different to the operation of the
-            :meth:`copy_from` method.
+            :meth:`copy_from` method. It is much faster, but imposes the burden
+            that two buffers now share data (the *source* cannot be released
+            until the replicant has been released).
""" mmal_check( mmal.mmal_buffer_header_replicate(self._buf, source._buf), @@ -1506,13 +1499,14 @@ def copy_from(self, source): Copies all fields (including data) from the *source* :class:`MMALBuffer`. This buffer must have sufficient :attr:`size` to store :attr:`length` bytes from the *source* buffer. This method - implicitly sets :attr:`offset` to zero, the :attr:`length` to the + implicitly sets :attr:`offset` to zero, and :attr:`length` to the number of bytes copied. .. note:: This is fundamentally different to the operation of the - :meth:`replicate` method. + :meth:`replicate` method. It is much slower, but afterward the + copied buffer is entirely independent of the *source*. """ assert self.size >= source.length source_len = source._buf[0].length @@ -1954,9 +1948,11 @@ class MMALConnection(MMALBaseConnection): callback between MMAL components as it requires buffers to be copied from the GPU's memory to the CPU's memory and back again. - There's no *extra* penalty when the connection is between an MMAL - component and a Python MMAL component though, as such copying has - to take place anyway. + .. data:: default_formats + :annotation: = (MMAL_ENCODING_OPAQUE, MMAL_ENCODING_I420, MMAL_ENCODING_RGB24, MMAL_ENCODING_BGR24, MMAL_ENCODING_RGBA, MMAL_ENCODING_BGRA) + + Class attribute defining the default formats used to negotiate + connections between MMAL components. """ __slots__ = ('_connection', '_callback', '_wrapper') @@ -1971,7 +1967,6 @@ class MMALConnection(MMALBaseConnection): def __init__( self, source, target, formats=default_formats, callback=None): - if not isinstance(source, MMALPort): raise PiCameraValueError('source is not an MMAL port') if not isinstance(target, MMALPort): @@ -2831,14 +2826,12 @@ def connect(self, other, **options): """ Connect this port to the *other* :class:`MMALPort` (or :class:`MMALPythonPort`). The type and configuration of the connection - will be automatically selected. If *enable* is ``True`` (the default), - the connection will be implicitly enabled upon construction. + will be automatically selected. Various connection options can be specified as keyword arguments. These will be passed onto the :class:`MMALConnection` or :class:`MMALPythonConnection` constructor that is called (see those classes for an explanation of the available options). - """ # Always construct connections from the output end if self.type != mmal.MMAL_PORT_TYPE_OUTPUT: @@ -3333,6 +3326,14 @@ class MMALPythonConnection(MMALBaseConnection): data. The callable may optionally manipulate the :class:`MMALBuffer` and return it to permit it to continue traversing the connection, or return ``None`` in which case the buffer will be released. + + .. data:: default_formats + :annotation: = (MMAL_ENCODING_I420, MMAL_ENCODING_RGB24, MMAL_ENCODING_BGR24, MMAL_ENCODING_RGBA, MMAL_ENCODING_BGRA) + + Class attribute defining the default formats used to negotiate + connections between Python and and MMAL components, in preference + order. Note that OPAQUE is not present in contrast with the default + formats in :class:`MMALConnection`. """ __slots__ = ('_enabled', '_callback')