Setting the background Image #185

Closed
rgb1380 opened this issue Sep 25, 2020 · 13 comments
Labels: enhancement (New feature or request)

Comments

rgb1380 commented Sep 25, 2020

I am trying to see how the privacy background image works.

I have put this after enabling video:

    // Empty user list and dummy watermark are only here to avoid the
    // null exceptions thrown by AgoraLiveTranscoding.toJson (see below).
    List<AgoraLiveTranscodingUser> list = List();
    AgoraLiveTranscoding config = AgoraLiveTranscoding.fromJson({
      'backgroundImage': AgoraImage(
          'https://www.muralswallpaper.com/app/uploads/Blue-Illustrated-Landscape-Mountains-Wallpaper-Mural.jpg',
          0,
          0,
          800,
          600),
      'transcodingUsers': list,
      'watermark': AgoraImage(
          'https://www.muralswallpaper.com/app/uploads/Blue-Illustrated-Landscape-Mountains-Wallpaper-Mural.jpg',
          0,
          0,
          20,
          20),
    });
    AgoraRtcEngine.setLiveTranscoding(config);

Nothing happens. The documentation is a bit lacking. Not sure if this is how it is supposed to be used.

I am only trying to add a background image; however, AgoraLiveTranscoding has a toJson method that reads watermark and maps over transcodingUsers, and if either of these is null it throws a null exception. So I had to add a watermark and an empty transcodingUsers list. Here is the toJson method for reference. If these fields are meant to be optional, this needs to be fixed.

  Map<String, dynamic> toJson() => {
        'width': width,
        'height': height,
        'videoBitrate': videoBitrate,
        'videoFramerate': videoFramerate,
        'videoGop': videoGop,
        'videoCodecProfile': _resolveVideoCodecProfileType[videoCodecProfile],
        'transcodingUsers':
            transcodingUsers.map((item) => item.toJson()).toList(),
        'transcodingExtraInfo': transcodingExtraInfo,
        'watermark': watermark.toJson(),
        'backgroundImage': backgroundImage.toJson(),
        'audioSampleRate': _resolveAudioSampleRate[audioSampleRate],
        'audioBitrate': audioBitrate,
        'audioChannels': audioChannels,
        'backgroundColor': backgroundColor,
        'audioCodecProfile': _resolveAudioCodecProfileType[audioCodecProfile],
      };
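
For illustration, here is a sketch of how those members could be treated as optional. This is not the plugin's actual code, just the kind of guard I would expect, using collection-if so that null fields are skipped instead of dereferenced:

    Map<String, dynamic> toJson() => {
          'width': width,
          'height': height,
          // ... the other scalar fields stay exactly as they are ...
          // Skip the nullable members instead of calling toJson() on null:
          if (transcodingUsers != null)
            'transcodingUsers':
                transcodingUsers.map((item) => item.toJson()).toList(),
          if (watermark != null) 'watermark': watermark.toJson(),
          if (backgroundImage != null)
            'backgroundImage': backgroundImage.toJson(),
        };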
LichKing-2234 (Contributor) commented:

It seems like you are using v1.x; you should use addVideoWatermark.

rgb1380 (Author) commented Sep 30, 2020

> It seems like you are using v1.x; you should use addVideoWatermark.

I am using v1.x, as version 3.x was not production-ready. So, to be absolutely clear: to set the privacy background image, you are suggesting that we add a video watermark?

LichKing-2234 (Contributor) commented:

setLiveTranscoding is for CDN live streaming.
Does addVideoWatermark meet your needs?
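
For reference, a rough sketch of that call in the v1.x plugin. I am assuming here that addVideoWatermark takes an AgoraImage the way the native SDK's call does, and the URL and geometry below are placeholders; check the plugin source for the exact signature in your version:

    // Sketch only: assumes addVideoWatermark(AgoraImage) in the v1.x binding.
    // The URL and the x, y, width, height values are just examples.
    final watermark = AgoraImage(
        'https://example.com/logo.png',
        0,    // x
        0,    // y
        200,  // width
        100); // height
    await AgoraRtcEngine.addVideoWatermark(watermark);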

rgb1380 (Author) commented Sep 30, 2020

I am trying to remove the background behind the caller (similar to Zoom), not to add a watermark.

LichKing-2234 (Contributor) commented:

I think you should process the captured raw video data before publishing.

rgb1380 (Author) commented Sep 30, 2020

> I think you should process the captured raw video data before publishing.

But access to raw data is not available in the Flutter plugin.

LichKing-2234 (Contributor) commented:

Yes, it is planned, but for now you can only implement it in the native layer.

rgb1380 (Author) commented Sep 30, 2020

> Yes, it is planned, but for now you can only implement it in the native layer.

Thanks. Do you have an estimated timeframe for that?

It is just that many of us (myself included) use Flutter precisely to avoid diving into the native layer, as we only have expertise in iOS or Android, not both. For some newer recruits, Flutter is the only mobile app experience they have ever had, and they have never touched the native layer.

plutoless commented:

@rgb1380 the problem is that for certain features you have to touch the native layer to achieve the best performance, especially for raw video data processing. It is not reasonable to pass the raw video data to the Dart layer for you to process and then return to the native layer; most background-replacement SDKs also support the native layer only.
We understand the requirement and we are looking for a general approach to help you achieve this kind of thing, but that takes time. Also, we may not be able to cover every background-replacement SDK on the market, so if you want to use an SDK of your own choice you will have to touch the native layer yourself.

rgb1380 (Author) commented Oct 3, 2020

> We understand the requirement and we are looking for a general approach to help you achieve this kind of thing, but that takes time.

A couple of thoughts, if the bottleneck is I/O. There are several trade-offs that could be made (a rough sketch of what such an API could look like follows the list):

  1. Compress the frame as JPEG before passing it to the Dart layer, and vice-versa.
  2. Allow the Dart layer to request a lower number of frames per second, for example 5 fps. A lot of image-processing tasks do not need to run on every single frame.
  3. Allow the Dart layer to specify a smaller video size and resample the signal at the native layer. Again, it is common in image processing to subsample the signal for performance.
  4. Options 2 and 3 are useful when the Dart layer is only analysing the images and is not modifying the signal to be transmitted. It would also be useful to allow the Dart layer to inject an overlay onto the native stream (e.g. a background image with a transparent colour which is then overlaid on the native signal).
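
Purely to illustrate the trade-offs above, here is a hypothetical Dart-side sketch of what such an API could look like. None of these types or functions exist in the plugin today; RawFrameOptions, OnDownscaledFrame and registerRawFrameObserver are made-up names for the example:

    import 'dart:typed_data';

    /// Hypothetical options the Dart layer could send to the native layer.
    class RawFrameOptions {
      final int maxFps;    // e.g. 5 fps: analysis rarely needs every frame
      final int maxWidth;  // native layer downsamples before crossing to Dart
      final bool asJpeg;   // compress frames to keep the platform channel cheap
      const RawFrameOptions(
          {this.maxFps = 5, this.maxWidth = 320, this.asJpeg = true});
    }

    /// Hypothetical callback delivering a downscaled, possibly JPEG-compressed frame.
    typedef OnDownscaledFrame = void Function(Uint8List bytes, int width, int height);

    /// Hypothetical registration call the plugin could expose.
    void registerRawFrameObserver(RawFrameOptions options, OnDownscaledFrame onFrame) {
      // A real implementation would forward `options` over the platform channel
      // and invoke `onFrame` from an EventChannel stream of frames.
    }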

LichKing-2234 self-assigned this Oct 8, 2020
LichKing-2234 added the enhancement (New feature or request) label Oct 8, 2020
plutoless commented:

@rgb1380 thank you for the suggestion, but it is hard to control how customers will use it, and to the best of my knowledge 5 fps or a lower resolution will fail to satisfy most customer needs, such as face beautification, customized recording, etc. Plus, the preprocessing (compress/resample) costs CPU too. Real-time video is very different from simple image processing; the experience is extremely important, especially when you need to consider low-end devices. In this case, from my perspective, the best practice is still to do all of this in the native layer.

LichKing-2234 (Contributor) commented:

#183 (comment)

github-actions (bot) commented:

This thread has been automatically locked since there has not been any recent activity after it was closed. If you are still experiencing a similar issue, please raise a new issue.

github-actions bot locked as resolved and limited conversation to collaborators Apr 28, 2023