Fix typos in the docs + minor wording improvements
Signed-off-by: Eero Tamminen <eero.t.tamminen@intel.com>
eero-t authored and dvrogozh committed Oct 6, 2022
1 parent f0755ed commit cddaed3
Showing 5 changed files with 16 additions and 16 deletions.
2 changes: 1 addition & 1 deletion doc/apt.rst
@@ -17,7 +17,7 @@ First, build a docker image with APT web server using the following Dockerfile::
aptly && \
rm -rf /var/lib/apt/lists/*

-# substituite this with other command to populate /opt/pkgs
+# substitute this with other command to populate /opt/pkgs
# directory with required *.deb packages
COPY pkgs /opt/pkgs

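For context, the ``COPY pkgs /opt/pkgs`` step assumes the ``*.deb`` packages were prepared on the host beforehand. A hedged alternative sketch (not part of this commit; ``some-package`` is a placeholder name) would populate the directory at image build time instead:

```
# Hypothetical Dockerfile fragment replacing "COPY pkgs /opt/pkgs":
# download the required .deb packages during the build
# ("some-package" is a placeholder, substitute real package names).
RUN mkdir -p /opt/pkgs && \
    cd /opt/pkgs && \
    apt-get update && \
    apt-get download some-package && \
    rm -rf /var/lib/apt/lists/*
```

Either way, the resulting ``/opt/pkgs`` directory is what aptly serves from the container.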
4 changes: 2 additions & 2 deletions doc/docker.rst
@@ -8,7 +8,7 @@ Overview

Project includes pre-generated dockerfiles in the `docker <../docker>`_
folder for the key possible setups. If you've done any customizations to the
-dockefiles template sources, regenerate dockerfiles with the following
+dockerfiles template sources, regenerate dockerfiles with the following
commands::

cmake .
@@ -29,7 +29,7 @@ stored in `templates <../templates>`_ folder.
Templates Parameters
--------------------

-It is possible to customize dockerfile setup passing some parameters during
+It is possible to customize dockerfile setup by passing parameters during
Dockerfile generation from templates.

DEVEL
18 changes: 9 additions & 9 deletions doc/howto.rst
@@ -119,12 +119,12 @@ These proxy settings will be used to:

Mind that **final image will NOT contain any pre-configured proxy configuration**. This
applies to package manager configuration as well. This is done for the reason that
-generated image might run under different network settings comparing to where it
+generated image might run under different network settings compared to where it
was generated.

Thus, if you will run the container under proxy you will need to pass proxy configuration
-into it anew (well, if you will have a need to communicate with the outside network which
-is not the case if you just run demo locally and don't play with the container). This
+into it anew (if you have a need to communicate with the outside network which
+is not the case when you just run demo locally and do not play with the container). This
can be done by passing proxy host envronment variables as follows::

docker run -it \
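The hunk above truncates the command at the ``docker run`` line. As a hedged sketch (assuming the standard lowercase proxy variables named earlier in the doc; ``<...rest-of-arguments...>`` stands for whatever other options the demo normally takes), the full invocation might look like:

```
# Forward the host's proxy settings into the container at run time;
# variable names assumed, adjust to your environment.
docker run -it \
    -e http_proxy=$http_proxy \
    -e https_proxy=$https_proxy \
    -e no_proxy=$no_proxy \
    <...rest-of-arguments...>
```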
@@ -141,8 +141,8 @@ the image assets::
Container volumes (adding your content, access logs, etc.)
----------------------------------------------------------

-Containers exposes few volumes which you can use to mount host folders and customize
-samples behavior. See table below for the mount points inside a container and required
+Containers expose few volumes which you can use to mount host folders and customize
+behavior of the samples. See table below for the mount points inside a container and required
access rights.

=================== ============= ====================================
@@ -153,15 +153,15 @@ Volume Rights needed Purpose
/var/www/hls Read|Write Access server side generated content
=================== ============= ====================================

-So, for example if you have some local content in a ``$HOME/media/`` folder which you
+For example if you have some local content in a ``$HOME/media/`` folder which you
wish to play via demo, you can add this folder to the container as follows::

docker run -it \
-v $HOME/media:/opt/data/content \
<...rest-of-arguments...>

In case you want to access container output artifacts (streams, logs, etc.) you need
-to give write permissions to the container users. The most stright forward
+to give write permissions to the container users. The most straight forward
way would be::

mkdir $HOME/artifacts && chmod a+w $HOME/artifacts
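The next step (cut off by the hunk boundary) would be mounting that world-writable folder into the container. A hedged sketch; the container-side path ``/opt/data/artifacts`` is an assumption for illustration, not taken from the volume table above:

```
# Make the host folder writable for any container-side uid, then mount it
# (container path is hypothetical).
mkdir -p $HOME/artifacts && chmod a+w $HOME/artifacts
docker run -it \
    -v $HOME/artifacts:/opt/data/artifacts \
    <...rest-of-arguments...>
```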
@@ -198,9 +198,9 @@ recommended.
Managing access rights for container user
-----------------------------------------

-Managing permissions between container and a host might be tricky. Remember that the
+Managing permissions between container and a host can be tricky. The
user you have under container (by default Media Delivery containers have
-user account named 'user') generally speaking is not the same user you have
+user account named ``user``) is unlikely to match the user you have
on your host system. Hence, you might have access problems that
container user can't write to the host folder or it can write there, but
host user can't delete these files and you are forced to use ``sudo`` to modify
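One hedged way around the uid mismatch (not described in this commit) is to run the container with your host identity via Docker's ``--user`` flag, so files it creates are owned by your host account:

```shell
# Print the host uid:gid pair; passing it to `docker run --user` makes
# container-side writes owned by your host user instead of the image's
# built-in account.
host_ids="$(id -u):$(id -g)"
echo "$host_ids"
```

The pair would then be used as ``docker run --user "$host_ids" ...``; note that the image's internal account setup may assume its own uid, so this is a sketch to adapt, not a guaranteed fix.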
4 changes: 2 additions & 2 deletions doc/intel-gpu-dkms.rst
@@ -93,10 +93,10 @@ How to Update
-------------

To install new version of DKMS packages, just follow the usual installation steps. Package
-manager will first automatically uninstall previous version, then install a new one.
+manager automatically uninstalls previous version, then installs a new one.

When updating the entire kernel, make sure to install the corresponding kernel headers. DKMS
-installation actually builds the modules for which kernel headers are required. So, when
+installation builds kernel modules which requires corresponding kernel headers. So, when
updating the kernel, make sure to always install both kernel and its headers. For example,
to update from Ubuntu 20.04 5.14.0-1042-oem to 5.14.0-1047-oem kernel do::

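The command itself is cut off by the hunk boundary. A plausible sketch, assuming Ubuntu's usual ``linux-image-*``/``linux-headers-*`` package naming convention (the exact package names are not confirmed by this excerpt):

```
# Hypothetical: install a specific OEM kernel together with its headers
# so DKMS can rebuild the GPU modules against it.
sudo apt-get install \
    linux-image-5.14.0-1047-oem \
    linux-headers-5.14.0-1047-oem
```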
4 changes: 2 additions & 2 deletions doc/virtualization.rst
@@ -206,7 +206,7 @@ GPU SR-IOV Virtualization
Virtual Machine (VM) setup with GPU SR-IOV Virtualization is a type of setup which
allows non-exclusive time-sliced access to GPU from under VM. GPU SR-IOV Virtualization
can be used to setup multiple VMs (and a host) with the access to the same GPU. It's
-possible to assign FPU resource limitations to each VM.
+possible to assign GPU resource limitations to each VM.

This variant of GPU virtualization setup requires **host kernel to fully
support underlying GPU**.
@@ -220,7 +220,7 @@ Host Setup
GPU Flex Series (products formerly Arctic Sound) under the host.

* Check that desired GPU is detected and find it's device ID and PCI slot (in
-the example below``56C0`` and ``4d:00.0`` respectively)::
+the example below ``56C0`` and ``4d:00.0`` respectively)::

$ lspci -nnk | grep -A 3 -E "VGA|Display"
02:00.0 VGA compatible controller [0300]: ASPEED Technology, Inc. ASPEED Graphics Family [1a03:2000] (rev 41)
