
Improving Speed for Receiving Real-Time Image #3

Open
ghost opened this issue Mar 26, 2015 · 16 comments


ghost commented Mar 26, 2015

Hello! I sincerely appreciate your work!
I am trying to write a program that lets my computer receive and save real-time images from the Bebop.
Because my application is real-time, it is important to receive and save images quickly.
My code (please see below) does receive and show images from the Bebop, but the images lag by 1 or 2 seconds.
When I watched the "Katarina Bebop - autonomous flight in corridor" video on the robotika website (http://robotika.cz/robots/katarina/), it seemed to update frames quickly in real time.
I assumed that the video was not recorded but streamed to a computer.
If this is true, I believe the way I am receiving and showing frames is wrong.
Could you give me some advice to improve the speed? I appreciate your help!

# Library
import sys
import cv2
from bebop import Bebop
from commands import movePCMDCmd
from video import VideoFrames
from capdet import detectTwoColors, loadColors
from apyros.metalog import MetaLog, disableAsserts
from apyros.manual import myKbhit, ManualControlException

# Global variable
TMP_VIDEO_FILE = "video.bin"
g_vf = None

# videoCallback function
def videoCallback(data, robot=None, debug=False):
    print "dongki.py::videoCallback ~ in videoCallback"

    global g_vf
    g_vf.append(data)
    frame = g_vf.getFrame()

    if frame:
        # Write the encoded frame to disk, then re-open it with OpenCV
        f = open(TMP_VIDEO_FILE, "wb")
        f.write(frame)
        f.close()

        cap = cv2.VideoCapture(TMP_VIDEO_FILE)
        ret, img = cap.read()
        cap.release()

        if ret:
            cv2.imshow('image', img)
            cv2.waitKey(1)  # required for imshow to refresh the window

def displayRealTimeImage(drone):
    print "dongki.py::displayRealTimeImage ~ in displayRealTimeImage function"
    global g_vf
    g_vf = VideoFrames(onlyIFrames=True, verbose=False)

    # Register the callback and enable video once, then keep pumping updates
    drone.videoCbk = videoCallback
    drone.videoEnable()
    while True:
        drone.update(cmd=None)

def main():
    # Initialize drone
    drone = Bebop(metalog=None)
    print "dongki.py::main ~ Initialize drone complete"

    # Call displayRealTimeImage Function
    displayRealTimeImage(drone) 

main()
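One likely source of the lag in the snippet above is that frames are processed strictly in arrival order, so the display falls further and further behind the stream. A one-slot "latest frame wins" buffer (a generic sketch, not part of the katarina API) drops the backlog so the consumer always sees the newest frame:

```python
import threading

class LatestFrame(object):
    """One-slot buffer: writers overwrite the slot, readers only see the newest frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        # Called from the video callback; older, unread frames are discarded.
        with self._lock:
            self._frame = frame

    def get(self):
        # Called from the display loop; returns None if nothing new arrived.
        with self._lock:
            frame, self._frame = self._frame, None
            return frame

buf = LatestFrame()
for i in range(5):          # five frames arrive before the consumer runs
    buf.put(i)
print(buf.get())            # only the newest frame (4) is returned
print(buf.get())            # backlog is gone: None
```

This trades completeness for freshness: dropped frames are never shown, which is usually the right trade-off for a live preview.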

m3d commented Mar 26, 2015

Hi dk683,
thanks :). Please pull the latest git - there is a major change regarding sending PCMD, see
https://github.com/ARDroneSDK3/libARCommands/issues/4
Without it I was not able to get all 30 frames (typically only the I-frame and a couple of P-frames).

The second issue is cv2 replaying H.264 video. I do not know how to feed it packet by packet. There is C code in https://github.com/robotika/heidi/tree/master/cvideo but I am trying to avoid it, which is why this snippet uses onlyIFrames=True ... BTW, what OS do you use? Windows?

Regarding corridor video - I will probably disappoint you. This is recorded video on the drone.
thanks
Martin
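On the packet-by-packet problem: one way around cv2.VideoCapture is to pipe the raw H.264 stream into an ffmpeg subprocess and read decoded BGR frames back from its stdout. This is only a sketch, under the assumption that an ffmpeg binary is on the PATH and that you know the stream's frame size:

```python
import subprocess

def decoder_cmd():
    """ffmpeg invocation that reads an H.264 elementary stream from stdin
    and writes raw BGR24 frames to stdout, avoiding temporary files."""
    return ['ffmpeg', '-loglevel', 'quiet',
            '-f', 'h264', '-i', '-',                 # H.264 packets piped to stdin
            '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-']

def start_decoder():
    return subprocess.Popen(decoder_cmd(),
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Usage sketch: write each received frame packet to proc.stdin, then read
# width * height * 3 bytes per frame from proc.stdout and reshape with numpy.
```

Because ffmpeg buffers internally, this does not by itself remove the delay, but it avoids the write-file/reopen round trip entirely.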


m3d commented Mar 26, 2015

p.s. I added sample code to stream video to stdout (under Windows there was, unsurprisingly, a problem with end-of-line characters :(
https://github.com/robotika/katarina/blob/master/samples/video2stdout.py
You may need to create a "log" folder, as log files are always created. I am also not sure whether ffplay can be set to play without any buffering (?) - at the moment I see a delay there too. I will probably try to rewrite this example with OpenCV only.


ghost commented Mar 27, 2015

Thank you for your reply!
I am using a Linux (Ubuntu 14.04) system.
I will definitely look into the C code (cvideo). This might help because C/C++ is generally faster than Python (http://stackoverflow.com/questions/801657/is-python-faster-and-lighter-than-c).
Yes, I noticed that I needed to create the log folder. After I created it manually, everything went well :)
I really appreciate your help and your writing the sample code (you are awesome!)!
I will also study the code further to see whether there is a way to optimize it for faster processing.
Thank you again! 👍 👍 👍


m3d commented Mar 27, 2015

Thanks - I am afraid there really is a problem with video delay. I wrote another "sample" using Heidi's cvideo (it decodes H.264 frame packets directly to bitmaps):
https://github.com/robotika/katarina/blob/master/samples/test_cvideo.py
It displays only I-frames (i.e. approximately 1 frame per second) and it is delayed. The delay is more than 1 second (it would be nice to measure it), so at the moment I am convinced the Bebop is sending me old packets (though I am not sure), and I am trying to find a way to limit that amount (https://github.com/ARDroneSDK3/libARCommands/issues/5).
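Measuring the delay directly is hard without timestamps from the drone, but a rough estimate can be kept on the receiving side by counting the backlog between received and displayed frames (a sketch assuming the nominal 30 fps stream rate; "DelayEstimator" is a hypothetical helper, not part of katarina):

```python
class DelayEstimator(object):
    """Rough display-lag estimate: frames received but not yet shown,
    divided by the nominal stream frame rate."""
    def __init__(self, fps=30.0):
        self.fps = fps
        self.received = 0
        self.displayed = 0

    def on_receive(self):
        self.received += 1

    def on_display(self):
        self.displayed += 1

    def lag_seconds(self):
        return max(0, self.received - self.displayed) / self.fps

est = DelayEstimator(fps=30.0)
for _ in range(60):
    est.on_receive()            # two seconds of frames arrived...
for _ in range(30):
    est.on_display()            # ...but only one second was shown
print(est.lag_seconds())        # backlog of 30 frames ~ 1.0 s of lag
```

If the reported lag grows over time, the receiver cannot keep up; if it stays constant, the delay is buffered somewhere upstream (drone or network).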

@zbynekwinkler

Are there any B-frames in the video? If so, it seems logical that the video would be delayed by the key-frame distance. If you have 1 s between I-frames and a matching 1 s delay in the video, that would make sense.
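The reasoning above can be turned into a rough estimate: a run of consecutive B-frames cannot be encoded or displayed until the following reference frame exists, so it adds roughly (run length + 1) frame periods of latency. A small illustrative sketch (an approximation, not an exact codec model):

```python
def reorder_latency(gop, fps=30.0):
    """Rough extra latency implied by B-frames in a GOP pattern like 'IBBPBBP'.
    A run of n consecutive B-frames needs the next reference frame before it
    can be displayed, costing about (n + 1) frame periods; no B-frames, no cost."""
    longest = run = 0
    for frame_type in gop:
        if frame_type == 'B':
            run += 1
            longest = max(longest, run)
        else:
            run = 0
    return (longest + 1) / fps if longest else 0.0

print(reorder_latency('IPPPPPPP'))   # 0.0 - no B-frames, no reorder delay
print(reorder_latency('IBBPBBP'))    # 0.1 - runs of 2 B-frames at 30 fps
```

Since the answer below is that the stream has no B-frames, reordering cannot explain the observed 1-2 s delay.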


m3d commented Mar 27, 2015

No B-frames ... only an I-frame followed by 29 P-frames (at 30 fps).
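For anyone wanting to verify what the stream contains, frame types can be read straight from the H.264 byte stream: each NAL unit begins with an Annex-B start code, and its type is the low 5 bits of the next byte (5 = IDR slice, i.e. an I-frame; 1 = non-IDR slice, i.e. a P- or B-frame). A minimal sketch, independent of the katarina code:

```python
def nal_unit_types(stream):
    """Return the nal_unit_type of every NAL unit found after a 00 00 01
    Annex-B start code (a 00 00 00 01 start code matches via its last 3 bytes)."""
    types = []
    i = 0
    while i + 3 < len(stream):
        if stream[i:i + 3] == b'\x00\x00\x01':
            types.append(stream[i + 3] & 0x1F)
            i += 4
        else:
            i += 1
    return types

# 0x65 -> type 5 (IDR, an I-frame slice); 0x41 -> type 1 (non-IDR slice)
print(nal_unit_types(b'\x00\x00\x00\x01\x65\x00\x00\x01\x41'))  # [5, 1]
```

Counting type-5 versus type-1 units over a second of data should confirm the one-I-frame-plus-29-P-frames pattern described above.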


m3d commented Mar 31, 2015

Hi dk683,
I did some refactoring so that video frames are collected directly in the Bebop class (07e345e). This should not have any influence on speed - it just makes it easier to write a video-processing callback.
I also changed the test_cvideo.py sample to decode P-frames as well (9ecf941) ... it does not look too delayed, but will you have problems getting cvideo compiled under Linux? (issues with libraries)
Finally, if you would like to program some autonomous behavior, there is now a revised setVideoCallback() function - one parameter for video frames out and a second for reading results (5c6af7d).


ghost commented Apr 4, 2015

Hello m3d.
I apologize for my late reply, and thank you for all your comments!!
Yes, I am having a problem compiling cvideo.
I downloaded the zip (https://github.com/robotika/heidi), moved to the cvideo directory, and tried to build cvideo with "python setup.py build".
However, I got the following error message:

dkkim930122@dkkim930122:~/heidi-master/cvideo$ python setup.py build
running build
running build_ext
building 'cvideo' extension
creating build
creating build/temp.linux-x86_64-2.7
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -D__STDC_CONSTANT_MACROS -Ic:\Python27\Lib\site-packages\numpy\core\include -Im:\git\cvdrone\src\3rdparty\ffmpeg\include -I/usr/include/python2.7 -c cvideo.cpp -o build/temp.linux-x86_64-2.7/cvideo.o
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /usr/include/python2.7/numpy/ndarraytypes.h:1761:0,
                 from /usr/include/python2.7/numpy/ndarrayobject.h:17,
                 from /usr/include/python2.7/numpy/arrayobject.h:4,
                 from cvideo.cpp:2:
/usr/include/python2.7/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
 #warning "Using deprecated NumPy API, disable it by " \
  ^
cvideo.cpp: In function ‘PyObject* init(PyObject*, PyObject*)’:
cvideo.cpp:101:102: error: ‘av_mallocz’ was not declared in this scope
   bufferBGR = (uint8_t*)av_mallocz(avpicture_get_size(PIX_FMT_BGR24, dstX, dstY) * sizeof(uint8_t)*10);
                                                                                                      ^
cvideo.cpp: In function ‘PyObject* frame(PyObject*, PyObject*)’:
cvideo.cpp:127:25: error: ‘av_freep’ was not declared in this scope
     av_freep( bufferBGR );
                         ^
cvideo.cpp:128:104: error: ‘av_mallocz’ was not declared in this scope
     bufferBGR = (uint8_t*)av_mallocz(avpicture_get_size(PIX_FMT_BGR24, dstX, dstY) * sizeof(uint8_t)*10);
                                                                                                        ^
cvideo.cpp: At global scope:
cvideo.cpp:17:18: warning: ‘PyObject* green(PyObject*, PyObject*)’ defined but not used [-Wunused-function]
 static PyObject *green(PyObject *self, PyObject *args)
                  ^
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1


m3d commented Apr 6, 2015

Hi dk683,
it would probably be better to open a new issue in the Heidi repository. A friend of mine was using that repo under Linux, so he may comment on it there.
thanks
Martin

@m3d m3d mentioned this issue Apr 9, 2015

m3d commented Apr 13, 2015

Hi dk683,
do you have "avutil" installed? It looks like the missing prototypes are in the
/usr/include/libavutil/mem.h
header. At the moment it compiles fine under Ubuntu 12.04.3, but I still have a problem afterwards, probably with library paths:

>>> import cvideo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: dynamic module does not define init function (initcvideo)

@jocacace

Dear dk683!
I have a (fake) solution to this problem... my solution uses ROS (even though it is not strictly mandatory) and relies on a ROS node (https://github.com/gergondet/ros_h264_streamer) that uses ffmpeg to decode the H.264 image.

What I actually do is publish each frame received by the bebop Python program on a ROS topic (setting onlyIFrames=False) and decode it with that node.

If you or anyone else is interested in this (fast but maybe a little bit fake :D) solution, I can give more information!


ghost commented Apr 17, 2015

Dear @m3d,
Hello! I appreciate your comment, and I apologize for my late reply. I have been out of the country, so I could not keep in contact. Yes, I installed avutil. I will also raise the issue in the Heidi repository.
I am glad to hear that cvideo compiles under Ubuntu 12.04. I am using Ubuntu 14.04, so I might re-install Ubuntu 12.04 and try again. Thank you! 👍

Dear @jocacace
Hello! Yes please! I am highly interested in your solution! If you don't mind, could you give me more information? If you could share your code, I would greatly appreciate it! :) Thank you for your help.

@jocacace

@dk683 I can show you my solution... sadly, I am not ready to share the code in a public repository (as it relies on code written by other people and I have no documentation)... could you please contact me via e-mail? As I said, it is a "temporary" solution (even though it works well!)


ghost commented Apr 17, 2015

Dear @jocacace,
Thank you for your reply! I just sent you an email!

Best,
Dong Ki Kim


frae83 commented May 7, 2015

Dear @dk683,
here are the steps to compile cvideo under Ubuntu 14.04.01.

Change setup.py to the following:

from distutils.core import setup, Extension

module1 = Extension('cvideo',
                    sources = ['cvideo.cpp'],
                    define_macros = [('__STDC_CONSTANT_MACROS', None)],
                    include_dirs = ['/usr/lib/python2.7/dist-packages/numpy/core/include',
                                    '/usr/include'],
                    libraries = ['avcodec', 'avutil', 'swscale'],
                    )

setup(name = 'CVideoPackage',
      version = '0.1',
      description = 'C-video frame by frame reader',
      ext_modules = [module1])

You will also have to declare initcvideo() as extern "C" in cvideo.cpp:

extern "C" void initcvideo()

(see http://stackoverflow.com/questions/28040833/importerror-dynamic-module-does-not-define-init-function )

Then execute the following two commands:

sudo python setup.py build
sudo python setup.py install

It will do the job if libavcodec, libavutil, and libswscale are installed (check ls /usr/include/libavcodec, for instance).

Best regards


0xlen commented Oct 27, 2017

Hi,

I tried to use the control flag already built into the Bebop to stream the video via RTP; it might be helpful to you - you can refer to my PR #14.

Thanks!
