
Documentation improvements #133

Merged
merged 6 commits into main from docs-improvements
Oct 31, 2023
Conversation

capjamesg (Contributor)

Description

This PR includes a restructure of the Roboflow Inference documentation.

Type of change

This change only affects documentation and does not modify Inference code.

How has this change been tested? Please provide a test case or example of how you tested the change.

This change was tested by opening http://localhost:8000 and checking that each page was formatted correctly.

Any specific deployment considerations

The documentation will be deployed when this PR is merged to main.

Docs

See above.

@capjamesg capjamesg added the documentation Improvements or additions to documentation label Oct 27, 2023
@capjamesg capjamesg self-assigned this Oct 27, 2023

To use DocTR with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, [sign up for a free Roboflow account](https://app.roboflow.com). Then, retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:
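A minimal sketch of such a command for a Unix shell (the `ROBOFLOW_API_KEY` variable name follows Roboflow's convention; the value shown is a placeholder):

```shell
# Store your Roboflow API key in an environment variable so Inference
# and the Roboflow SDK can pick it up. Placeholder value:
export ROBOFLOW_API_KEY="your_api_key_here"
```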

Contributor:

At this point, have we talked about how to start the Roboflow Inference server Docker container? If not, we should mention it here and link to that part of the docs.

@@ -0,0 +1,21 @@
Roboflow Inference enables you to deploy computer vision models faster than ever.

Here is an example of a model running on a video using Inference (See the code):
Contributor:

Is "See the code" supposed to be a link?


Click the "Deploy" link in the sidebar to find the information you will need to use your model with Inference:

![Model list](https://media.roboflow.com/docs-sidebar-list.png)
Contributor:

This image appears very large. Any way to make it smaller?

```python
from roboflow import Roboflow

rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("MODEL_ENDPOINT")
model = project.version(VERSION, local="http://localhost:9001").model
```
Contributor:

We use localhost here but we haven't taught the user how to start a server yet. Can we add that snippet? Or better yet, can we just show how to run inference on an image the most simple way?

Suggested change:

```diff
- model = project.version(VERSION, local="http://localhost:9001").model
+ from inference.models.utils import get_roboflow_model
+ model = get_roboflow_model(...)
+ result = model.infer(image)  # image can be a URL string, numpy array, PIL image, etc.
```
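Filled in with placeholder values, the suggested snippet might look like the sketch below. The model ID and image URL are hypothetical, and the `get_roboflow_model`/`infer` calls are assumed from the suggestion above; the import is guarded so the sketch is safe to run in an environment without `inference` installed:

```python
import os

try:
    from inference.models.utils import get_roboflow_model
except ImportError:
    get_roboflow_model = None  # inference is not installed in this environment

if get_roboflow_model is not None and os.environ.get("ROBOFLOW_API_KEY"):
    # Hypothetical model ID; replace with your own model.
    model = get_roboflow_model(model_id="your-project/1")
    # image can be a URL string, numpy array, PIL image, etc.
    result = model.infer("https://example.com/image.jpg")
    print(result)
```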


Create a new Python file and add the following code:

Contributor:

I think this is a convoluted way to run inference on a video now that we have the stream interface. Can this guide look more like the webcam stream guide and the RTSP stream guide?

Contributor (Author):

Do you have a code snippet? I don't have any examples to hand.

Contributor:

You can just pass the path of a video file, or the URL of a hosted video file, to the source parameter.
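A sketch of what that might look like, assuming the `inference.Stream` interface from the webcam guide (with `source`, `model`, and `on_prediction` parameters). The model ID and file name are placeholders, and the import is guarded so the sketch is harmless without `inference` installed:

```python
import os

try:
    from inference import Stream
except ImportError:
    Stream = None  # inference is not installed in this environment

def on_prediction(predictions, image):
    # Called once per decoded frame with the model's predictions.
    print(predictions)

if Stream is not None and os.environ.get("ROBOFLOW_API_KEY"):
    Stream(
        source="video.mp4",      # a local video file path or a hosted video URL
        model="your-project/1",  # placeholder model ID
        on_prediction=on_prediction,
    )
```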

@@ -0,0 +1,63 @@
You can run computer vision models on RTSP stream frames with Inference.
Contributor:

This guide is nearly identical to the webcam guide. Should we combine them into a single guide?


Foundation models are being built for a range of vision tasks, from image segmentation to classification to zero-shot object detection.

Autodistill supports the following foundation models:
Contributor:

Typo? Autodistill --> Inference


Autodistill supports the following foundation models:

- LC2S-Net: Detect the direction in which someone is looking.
Contributor:

I think we call this Gaze.

@capjamesg capjamesg merged commit b75e0ae into main Oct 31, 2023
2 checks passed
@capjamesg capjamesg deleted the docs-improvements branch October 31, 2023 20:20