View with 2 videos is not getting recorded #5
Hi @nihtin9mk, as you described your use case as a 'video chat' application, I assume this is probably the problem you are encountering. Is one of the video views a live preview? Al
@alskipp Please give me a solution for this. In my view controller I wrote the code below.
I believe this is not the proper way - please guide me.
@alskipp Please help me to do this.
Hi, I'll do my best to point you in the right direction either later today or tomorrow. As you can probably appreciate, this open source library doesn't pay the bills and I'm not independently wealthy; consequently I'm currently working for 'the man'. As a quick pointer - you need to get direct access to the pixel data in your live video input. If memory serves correctly, you'll need to implement a method of AVCaptureVideoDataOutputSampleBufferDelegate. Probably the easiest thing to do is to create a CGImage ivar in your controller that you continuously update in -captureOutput:didOutputSampleBuffer:fromConnection:. You then implement the delegate method like y
Whoops. Typing this on a phone and accidentally tapped close and comment. Anyway, you need to implement the delegate method, but you then draw the CGImage (which you created in captureOutput:didOutputSampleBuffer:…) into the context.
Hi @alskipp - Thank you for your help with this; of course all your open source work is really helpful and great for developers like me. I am not so familiar with AVFoundation and the captureOutput:didOutputSampleBuffer: side of things. I hope you can give a better picture of how to implement the delegate method. Once again, a lot of thanks for your selfless work.
Hi, I'm on my way back home now. I'll try to post a few code examples this evening or tomorrow. If you get a chance, take a look at the documentation for AVCaptureVideoDataOutputSampleBufferDelegate, as I think your view controller will need to implement this delegate to receive the live video data. You'll then use this pixel data to write into the video context. Al
I'll give a code example of how to turn the pixel data from captureOutput:didOutputSampleBuffer:fromConnection: into an image you can use.
OK. Here we go: As you're already previewing the video on screen, I assume there's already an AVCaptureSession in place. Register for a notification when the capture session starts:

```objc
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(captureBegan)
                                             name:AVCaptureSessionDidStartRunningNotification
                                           object:self.captureSession];
```

Then you'll need to implement the method named in the selector:

```objc
- (void)captureBegan
{
    [ASScreenRecorder sharedInstance].delegate = self;
}
```

Don't forget to remove the delegate when the captureSession ends. The next comment will show the basics of receiving the live video frames.
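The note about removing the delegate when the captureSession ends could be handled symmetrically with a second notification observer. A minimal sketch, assuming the same controller registers both observers (the name captureEnded is hypothetical, a counterpart to captureBegan):

```objc
// Register alongside the start observer, e.g. in viewDidLoad:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(captureEnded)
                                             name:AVCaptureSessionDidStopRunningNotification
                                           object:self.captureSession];

// Hypothetical counterpart to captureBegan: clear the recorder's delegate
- (void)captureEnded
{
    [ASScreenRecorder sharedInstance].delegate = nil;
}
```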
This doesn't do anything too useful yet - but let's see if the delegate method gets called:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CGImageRef image = [self createCGImageFromSampleBuffer:sampleBuffer];
    // check if image creation is successful
    CGImageRelease(image);
}
```

Here is how to get a CGImage from the sampleBuffer:

```objc
- (CGImageRef)createCGImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(imageContext);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return newImage;
}
```

Let me know if all this works so far. Then I'll continue with the final bits that write the captured image into the video buffer. (For now don't call …)
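The "check if image creation is successful" comment above could be, for instance, a simple NULL check; a sketch (the logging is purely illustrative):

```objc
CGImageRef image = [self createCGImageFromSampleBuffer:sampleBuffer];
if (image == NULL) {
    NSLog(@"Failed to create CGImage from sample buffer");
} else {
    // CGImageGetWidth/CGImageGetHeight return size_t, hence %zu
    NSLog(@"Captured image: %zu x %zu", CGImageGetWidth(image), CGImageGetHeight(image));
}
CGImageRelease(image); // CGImageRelease is NULL-safe
```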
Just realised I missed out a vital bit. Sorry! We need to declare ourselves as delegate to AVCaptureVideoDataOutput, otherwise we won't receive any calls from the capture output.
First we need a dispatch queue to receive calls to captureOutput:didOutputSampleBuffer:fromConnection:.

```objc
@property (strong, nonatomic) dispatch_queue_t videoQueue;
```

We then initialize the queue and use it when we declare ourselves as delegate to the captureSession output. Here's some code that shows how the capture session is set up:

```objc
- (void)setupCamera
{
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (![captureDevice hasMediaType:AVMediaTypeVideo]) {
        return;
    }
    _videoQueue = dispatch_queue_create("CameraViewController.videoQueue", DISPATCH_QUEUE_SERIAL);
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:nil];
    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self queue:_videoQueue];
    [self.captureSession addOutput:output];
    [self.captureSession addInput:input];
    [self.captureSession setSessionPreset:AVCaptureSessionPreset640x480];
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    [self.cameraView setVideoPreviewLayer:_previewLayer];
}
```

The vital bit that you need to receive the sampleBuffers is:

```objc
[output setSampleBufferDelegate:self queue:_videoQueue];
```

Once that is declared you should start receiving calls to captureOutput:didOutputSampleBuffer:fromConnection:.
Yes, this is a jigsaw puzzle with many pieces : )
By the way, if you're using a third party library to do the video capture then most of this boilerplate code should already be set up for you. In that case you would set the 3rd party library as the delegate of ASScreenRecorder and locate where the sample buffers are received.
Hi @alskipp - everything works fine and [self.delegate writeBackgroundFrameInContext:&bitmapContext]; is getting called. But then the app crashed with this message - how can I implement this method?
Hi, I can post some example code, but not until after 19:00 GMT today. If I get a spare moment during the day, I'll try to point you in the right direction to complete the task. Al
Thank you so much @alskipp
Hi,
What we'll need - two ivars:

```objc
{
    CGImageRef _capturedImage;
    BOOL _needsNewImage; // set to YES before recording starts
}
```

and one dispatch_queue_t declared as a property, which will be used to access _capturedImage:

```objc
@property (strong, nonatomic) dispatch_queue_t imageQueue;

- (void)viewDidLoad {
    [super viewDidLoad];
    _imageQueue = dispatch_queue_create("CameraViewController.imageQueue", DISPATCH_QUEUE_SERIAL);
}
```

We need to update the CGImageRef:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    dispatch_sync(_imageQueue, ^{
        if (_needsNewImage) {
            _needsNewImage = NO;
            CGImageRelease(_capturedImage); // safe to use on NULL
            _capturedImage = [self createCGImageFromSampleBuffer:sampleBuffer];
        }
    });
}
```

Then write the image into the context. You might need to adjust the context and position of the drawing for your own needs - the following is an example:

```objc
- (void)writeBackgroundFrameInContext:(CGContextRef *)contextRef
{
    dispatch_sync(_imageQueue, ^{
        if (_capturedImage) {
            CGContextSaveGState(*contextRef);
            CGAffineTransform flipRotate = CGAffineTransformMake(0.0, 1.0, 1.0, 0.0, 0.0, 0.0);
            CGContextConcatCTM(*contextRef, flipRotate);
            CGContextDrawImage(*contextRef, CGRectMake(0, 0, CGRectGetHeight(_cameraView.bounds), CGRectGetWidth(_cameraView.bounds)), _capturedImage);
            CGContextRestoreGState(*contextRef);
            _needsNewImage = YES;
        }
    });
}
```

The final thing to remember is to release _capturedImage with CGImageRelease when recording finishes.
I've just edited the above post (forgot to call …). Depending on how your view is positioned, you might need to adjust the position you draw in writeBackgroundFrameInContext:. In ASScreenRecorder you'll find this code:

```objc
if (self.delegate) {
    [self.delegate writeBackgroundFrameInContext:&bitmapContext];
}
// draw each window into the context (other windows include UIKeyboard, UIAlert)
// FIX: UIKeyboard is currently only rendered correctly in portrait orientation
dispatch_sync(dispatch_get_main_queue(), ^{
    UIGraphicsPushContext(bitmapContext); {
        for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
            [window drawViewHierarchyInRect:CGRectMake(0, 0, _viewSize.width, _viewSize.height) afterScreenUpdates:NO];
        }
    } UIGraphicsPopContext();
});
```

Move the delegate code to appear after the main drawing code listed above. Your video preview will then be drawn on top of everything else, making it easier to see if you are drawing it in the correct position.
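The suggested reordering would look something like this (a sketch of the same snippet with the delegate call moved after the window drawing loop):

```objc
// draw each window into the context first
dispatch_sync(dispatch_get_main_queue(), ^{
    UIGraphicsPushContext(bitmapContext); {
        for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
            [window drawViewHierarchyInRect:CGRectMake(0, 0, _viewSize.width, _viewSize.height) afterScreenUpdates:NO];
        }
    } UIGraphicsPopContext();
});
// then let the delegate draw the video preview on top of everything else
if (self.delegate) {
    [self.delegate writeBackgroundFrameInContext:&bitmapContext];
}
```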
Ohh @alskipp - thank you very much, it worked at last! Only two minor problems - maybe you can point out the mistakes on my side.
"_cameraView" in my example is the view that contains the live video preview. It's only used to calculate the size and position to draw into the context.
If the image is created successfully then it's just a matter of getting the positioning right when drawing it into the context.
Sorry for my first comment - I just forgot to move the delegate code to appear after the main drawing code listed above. That's why that view did not appear in the video. Now it works fine.
Is everything working now? To get the correct positioning you'll have to experiment with the CGRect in CGContextDrawImage:

```objc
CGContextDrawImage(*contextRef, CGRectMake(50, 50, 100, 100), _capturedImage);
```
Yes now everything works fine. Thank you for your tremendous support. Now I am experimenting with CGRect in CGContextDrawImage. |
If the orientation is incorrect you just have to transform the context. For my use, I needed the following:

```objc
CGAffineTransform flipRotate = CGAffineTransformMake(0.0, 1.0, 1.0, 0.0, 0.0, 0.0);
CGContextConcatCTM(*contextRef, flipRotate);
```

You could try commenting out that bit, or try a different transformation to see if it works.
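If commenting out the flip/rotate isn't enough, one alternative to experiment with is a quarter-turn plus a translation so the drawing stays in view; a purely illustrative sketch (the _cameraView dimensions are an assumption - substitute your own view's size):

```objc
CGContextSaveGState(*contextRef);
// Translate first so the rotated drawing remains inside the context,
// then rotate 90 degrees. Adjust values for your own layout.
CGContextTranslateCTM(*contextRef, CGRectGetWidth(_cameraView.bounds), 0);
CGContextRotateCTM(*contextRef, M_PI_2);
CGContextDrawImage(*contextRef, CGRectMake(0, 0, CGRectGetWidth(_cameraView.bounds), CGRectGetHeight(_cameraView.bounds)), _capturedImage);
CGContextRestoreGState(*contextRef);
```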
Yeah, I am trying to get the proper orientation. At times it crashes - with the error Thread 3: EXC_BAD_ACCESS (code=1, address=0x4bc000)
Hmmm, the dreaded EXC_BAD_ACCESS. Has this only happened since adding the new code? I've not encountered the EXC_BAD_ACCESS crash myself, but threaded crashes are notoriously unpredictable. I'll have to think about what the cause could be.
Yes, after adding this new code.
Hi @alskipp - any solutions?
Just wondering whether you took my advice when I stated the following? This is just a guess, but it could potentially cause the issue (if it crashes consistently on the 2nd recording). Perhaps what occurs is that for the first recording the CGImageRef in _capturedImage is valid, but it then gets released incorrectly. If you are releasing it, try not releasing it at all as a test. (Letting a potential memory leak happen certainly isn't a fix, but it might help identify why the crash occurs.)
I am using ARC, so after finishing the recording I am setting _capturedImage to Nil: _capturedImage = Nil; Hope this is fine.
That could well be causing the issue. To release it correctly you need to use CGImageRelease (ARC does not manage CGImageRef, which is a Core Foundation type). See if that works; if it doesn't, then try not releasing it at all to see if the crash still happens. (We'd then have to figure out how best to deal with the memory release.)
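Since _capturedImage is only ever touched on _imageQueue in the earlier code, one guess (not confirmed in this thread) is that releasing it from another thread races with that access. Releasing on the same serial queue and NULLing the ivar would avoid both the race and a dangling pointer; a sketch (the method name is hypothetical):

```objc
// Hypothetical cleanup, e.g. called when recording finishes
- (void)releaseCapturedImage
{
    dispatch_sync(_imageQueue, ^{
        CGImageRelease(_capturedImage); // NULL-safe
        _capturedImage = NULL;          // avoid a dangling pointer on the next recording
    });
}
```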
@alskipp - I tried this. I am calling CGImageRelease(_capturedImage); after recording finishes, but for the first recording it crashes at times.
Even the example project you have provided crashes at the same point. I am using an iPod touch 5th generation and an iPhone 5 for testing.
@alskipp Thanks so much for this, it helped me a lot. I just found one problem (I think) with your code: it won't execute the methods inside dispatch_async, the reason being that you already assigned the queue to the delegate method.
After removing this from your sample, it works fine for me.
Hi all, I have read through the entire discussion and am also trying to implement the same in my code, but I am still confused about which methods have to be declared in which class. Can you please help? Thanks!
If you use the pause mode then add …
I am using ASScreenRecorder in my app. My view controller has two views, both playing video - just like a video chat in the Skype application.
When I try to record my screen, the recording contains only one video view; the other one is missing from the video. Why is this happening?