
Adds new examples, replaces markdown with restructured text #945

Merged 4 commits on May 18, 2017

Changes from 1 commit
32 changes: 0 additions & 32 deletions video/cloud-client/analyze/README.md

This file was deleted.

126 changes: 126 additions & 0 deletions video/cloud-client/analyze/README.rst
@@ -0,0 +1,126 @@
.. This file is automatically generated. Do not edit this file directly.

Google Cloud Video Intelligence API Python Samples
===============================================================================

This directory contains samples for the Google Cloud Video Intelligence API. `Google Cloud Video Intelligence API`_ allows developers to easily integrate feature detection in video.




.. _Google Cloud Video Intelligence API: https://cloud.google.com/video-intelligence/docs

Setup
-------------------------------------------------------------------------------


Authentication
++++++++++++++

Authentication is typically done through `Application Default Credentials`_,
which means you do not have to change the code to authenticate as long as
your environment has credentials. You have a few options for setting up
authentication:

#. When running locally, use the `Google Cloud SDK`_

.. code-block:: bash

gcloud auth application-default login


#. When running on App Engine or Compute Engine, credentials are already
set up. However, you may need to configure your Compute Engine instance
with `additional scopes`_.

#. You can create a `Service Account key file`_. This file can be used to
authenticate to Google Cloud Platform services from any environment. To use
the file, set the ``GOOGLE_APPLICATION_CREDENTIALS`` environment variable to
the path to the key file, for example:

.. code-block:: bash

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account.json

.. _Application Default Credentials: https://cloud.google.com/docs/authentication#getting_credentials_for_server-centric_flow
.. _additional scopes: https://cloud.google.com/compute/docs/authentication#using
.. _Service Account key file: https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount
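
Whichever option you choose, the client library picks up Application Default
Credentials automatically, so the sample code itself does not change. A
minimal, illustrative check (a sketch, not part of the samples) of whether a
service account key file is currently configured:

.. code-block:: python

    # Sketch: report which credentials the samples will rely on.
    import os

    key_path = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS')
    if key_path:
        print('Using service account key file: {}'.format(key_path))
    else:
        print('No key file set; relying on gcloud / Application Default '
              'Credentials from the environment.')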

Install Dependencies
++++++++++++++++++++

#. Install `pip`_ and `virtualenv`_ if you do not already have them.

#. Create a virtualenv. Samples are compatible with Python 2.7 and 3.4+.

.. code-block:: bash

$ virtualenv env
$ source env/bin/activate

#. Install the dependencies needed to run the samples.

.. code-block:: bash

$ pip install -r requirements.txt

.. _pip: https://pip.pypa.io/
.. _virtualenv: https://virtualenv.pypa.io/
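
A quick, illustrative check (again a sketch rather than part of the samples)
that the virtualenv created above is the interpreter actually in use:

.. code-block:: python

    # Sketch: after 'source env/bin/activate', sys.prefix points inside env/.
    import sys

    print(sys.prefix)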

Samples
-------------------------------------------------------------------------------

analyze
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



To run this sample:

.. code-block:: bash

$ python analyze.py

usage: analyze.py [-h] {faces,labels,labels_file,safe_search,shots} ...

This application demonstrates face detection, label detection, safe search,
and shot change detection using the Google Cloud Video Intelligence API.

Usage Examples:

python analyze.py faces gs://demomaker/volleyball_court.mp4
python analyze.py labels gs://demomaker/cat.mp4
python analyze.py labels_file resources/cat.mp4
python analyze.py shots gs://demomaker/gbikes_dinosaur.mp4
python analyze.py safe_search gs://demomaker/cat.mp4

positional arguments:
{faces,labels,labels_file,safe_search,shots}
faces Detects faces given a GCS path.
labels Detects labels given a GCS path.
labels_file Detects labels given a file path.
safe_search Detects safe search features in a video from a GCS path.
shots Detects camera shot changes.

optional arguments:
-h, --help show this help message and exit
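
Each subcommand shown above calls ``annotate_video`` and then polls the
returned long-running operation until it completes (see analyze.py in the
diff below). A minimal sketch of that polling pattern, assuming ``operation``
is the object returned by the client's ``annotate_video`` call:

.. code-block:: python

    # Illustrative helper mirroring the polling loop used in analyze.py.
    import sys
    import time

    def wait_for_annotation(operation, poll_interval=1):
        """Poll a Video Intelligence operation until it finishes."""
        while not operation.done():
            sys.stdout.write('.')
            sys.stdout.flush()
            time.sleep(poll_interval)
        # A single input video yields a single annotation result.
        return operation.result().annotation_results[0]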




The client library
-------------------------------------------------------------------------------

This sample uses the `Google Cloud Client Library for Python`_.
You can read the documentation for more details on API usage and use GitHub
to `browse the source`_ and `report issues`_.

.. _Google Cloud Client Library for Python:
   https://googlecloudplatform.github.io/google-cloud-python/
.. _browse the source:
   https://github.com/GoogleCloudPlatform/google-cloud-python
.. _report issues:
   https://github.com/GoogleCloudPlatform/google-cloud-python/issues


.. _Google Cloud SDK: https://cloud.google.com/sdk/
20 changes: 20 additions & 0 deletions video/cloud-client/analyze/README.rst.in
@@ -0,0 +1,20 @@
# This file is used to generate README.rst

product:
name: Google Cloud Video Intelligence API
short_name: Cloud Video Intelligence API
url: https://cloud.google.com/video-intelligence/docs
description: >
`Google Cloud Video Intelligence API`_ allows developers to easily
integrate feature detection in video.

setup:
- auth
- install_deps

samples:
- name: analyze
file: analyze.py
show_help: True

cloud_client_library: true
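
The file above is ordinary YAML, so a generator can load it directly. The
generator script itself is not part of this diff; the sketch below only shows
reading the fields that appear above, assuming PyYAML is available:

import yaml  # assumption: PyYAML is installed

with open('README.rst.in') as config_file:
    config = yaml.safe_load(config_file)

print(config['product']['name'])   # Google Cloud Video Intelligence API
print([sample['name'] for sample in config['samples']])   # ['analyze']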
138 changes: 118 additions & 20 deletions video/cloud-client/analyze/analyze.py
@@ -14,14 +14,21 @@
# See the License for the specific language governing permissions and
# limitations under the License.

"""This application demonstrates how to perform basic operations with the
Google Cloud Video Intelligence API.
"""This application demonstrates face detection, label detection, safe search,
and shot change detection using the Google Cloud Video Intelligence API.

Usage Examples:

python analyze.py faces gs://demomaker/volleyball_court.mp4
python analyze.py labels gs://demomaker/cat.mp4
python analyze.py labels_file resources/cat.mp4
python analyze.py shots gs://demomaker/gbikes_dinosaur.mp4
python analyze.py safe_search gs://demomaker/cat.mp4

For more information, check out the documentation at
https://cloud.google.com/video-intelligence/docs.
"""

import argparse
import base64
import sys
import time

@@ -30,18 +37,49 @@
video_intelligence_service_client)


def analyze_safe_search(path):
""" Detects safe search features the GCS path to a video. """
video_client = (video_intelligence_service_client.
VideoIntelligenceServiceClient())
features = [enums.Feature.SAFE_SEARCH_DETECTION]
operation = video_client.annotate_video(path, features)
print('\nProcessing video for safe search annotations:')

while not operation.done():
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(1)

print('\nFinished processing.')

# first result is retrieved because a single video was processed
safe_annotations = (operation.result().annotation_results[0].
safe_search_annotations)

likely_string = ("Unknown", "Very unlikely", "Unlikely", "Possible",
"Likely", "Very likely")

for note in safe_annotations:
print('Time: {}s'.format(note.time_offset / 1000000.0))
print('\tadult: {}'.format(likely_string[note.adult]))
print('\tspoof: {}'.format(likely_string[note.spoof]))
print('\tmedical: {}'.format(likely_string[note.medical]))
print('\tracy: {}'.format(likely_string[note.racy]))
print('\tviolent: {}\n'.format(likely_string[note.violent]))


def analyze_faces(path):
""" Detects faces given a GCS path. """
video_client = (video_intelligence_service_client.
VideoIntelligenceServiceClient())
features = [enums.Feature.FACE_DETECTION]
operation = video_client.annotate_video(path, features)
print('\nProcessing video for label annotations:')
print('\nProcessing video for face annotations:')

while not operation.done():
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(20)
time.sleep(1)

print('\nFinished processing.')

@@ -53,10 +91,16 @@ def analyze_faces(path):
print('Thumbnail size: {}'.format(len(face.thumbnail)))

for segment_id, segment in enumerate(face.segments):
print('Track {}: {} to {}'.format(
segment_id,
segment.start_time_offset,
segment.end_time_offset))
positions = 'Entire video'
if (segment.start_time_offset != -1 or
segment.end_time_offset != -1):
positions = '{}s to {}s'.format(
segment.start_time_offset / 1000000.0,
segment.end_time_offset / 1000000.0)

print('\tTrack {}: {}'.format(segment_id, positions))

print('\n')


def analyze_labels(path):
@@ -70,22 +114,66 @@ def analyze_labels(path):
while not operation.done():
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(20)
time.sleep(1)

print('\nFinished processing.')

# first result is retrieved because a single video was processed
results = operation.result().annotation_results[0]

for i, label in enumerate(results.label_annotations):
print('Label description: {}'.format(label.description))
print('Locations:')

for l, location in enumerate(label.locations):
positions = 'Entire video'
if (location.segment.start_time_offset != -1 or
location.segment.end_time_offset != -1):
positions = '{}s to {}s'.format(
location.segment.start_time_offset / 1000000.0,
location.segment.end_time_offset / 1000000.0)
print('\t{}: {}'.format(l, positions))

print('\n')


def analyze_labels_file(path):
""" Detects labels given a file path. """
video_client = (video_intelligence_service_client.
VideoIntelligenceServiceClient())
features = [enums.Feature.LABEL_DETECTION]

with open(path, "rb") as movie:
content_base64 = base64.b64encode(movie.read())

Contributor review comment on the open() call above: nit: use io.open.
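
A compact sketch of that suggestion (the helper name is illustrative, not part
of the sample); io.open gives the same binary-read behaviour on Python 2.7 and
3.x:

import base64
import io


def read_video_base64(path):
    # io.open behaves consistently across Python 2.7 and 3.x.
    with io.open(path, 'rb') as movie:
        return base64.b64encode(movie.read())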

operation = video_client.annotate_video('', features,
input_content=content_base64)

Contributor review comment: nit: never hanging indents:

operation = video_client.annotate_video(
    '', features, input_content=content_base64)
print('\nProcessing video for label annotations:')

while not operation.done():
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(1)

print('\nFinished processing.')

# first result is retrieved because a single video was processed
results = operation.result().annotation_results[0]

for label in results.label_annotations:
for i, label in enumerate(results.label_annotations):
print('Label description: {}'.format(label.description))
print('Locations:')

for l, location in enumerate(label.locations):
print('\t{}: {} to {}'.format(
l,
location.segment.start_time_offset,
location.segment.end_time_offset))
positions = 'Entire video'
if (location.segment.start_time_offset != -1 or
location.segment.end_time_offset != -1):
positions = '{} to {}'.format(
location.segment.start_time_offset / 1000000.0,
location.segment.end_time_offset / 1000000.0)
print('\t{}: {}'.format(l, positions))

print('\n')


def analyze_shots(path):
@@ -99,18 +187,18 @@ def analyze_shots(path):
while not operation.done():
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(20)
time.sleep(1)

print('\nFinished processing.')

# first result is retrieved because a single video was processed
shots = operation.result().annotation_results[0]

for note, shot in enumerate(shots.shot_annotations):
print('Scene {}: {} to {}'.format(
print('\tScene {}: {} to {}'.format(
note,
shot.start_time_offset,
shot.end_time_offset))
shot.start_time_offset / 1000000.0,
shot.end_time_offset / 1000000.0))


if __name__ == '__main__':
@@ -124,6 +212,12 @@ def analyze_shots(path):
analyze_labels_parser = subparsers.add_parser(
'labels', help=analyze_labels.__doc__)
analyze_labels_parser.add_argument('path')
analyze_labels_file_parser = subparsers.add_parser(
'labels_file', help=analyze_labels_file.__doc__)
analyze_labels_file_parser.add_argument('path')
analyze_safe_search_parser = subparsers.add_parser(
'safe_search', help=analyze_safe_search.__doc__)
analyze_safe_search_parser.add_argument('path')
analyze_shots_parser = subparsers.add_parser(
'shots', help=analyze_shots.__doc__)
analyze_shots_parser.add_argument('path')
@@ -134,5 +228,9 @@ def analyze_shots(path):
analyze_faces(args.path)
if args.command == 'labels':
analyze_labels(args.path)
if args.command == 'labels_file':
analyze_labels_file(args.path)
if args.command == 'shots':
analyze_shots(args.path)
if args.command == 'safe_search':
analyze_safe_search(args.path)