Documentation improvements #133
Conversation
> To use DocTR with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, [sign up for a free Roboflow account](https://app.roboflow.com). Then, retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:
At this point, have we talked about how to start the Roboflow Inference server Docker container? If not, we should mention it here and link to that part of the docs.
docs/quickstart/what_is_inference.md (Outdated)

> Roboflow Inference enables you to deploy computer vision models faster than ever.
>
> Here is an example of a model running on a video using Inference (See the code):
Is "See the code" supposed to be a link?
docs/quickstart/explore_models.md (Outdated)

> Click the "Deploy" link in the sidebar to find the information you will need to use your model with Inference:
>
> ![Model list](https://media.roboflow.com/docs-sidebar-list.png)
This image appears very large. Any way to make it smaller?
```python
from roboflow import Roboflow

rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("MODEL_ENDPOINT")
model = project.version(VERSION, local="http://localhost:9001").model
```
We use localhost here but we haven't taught the user how to start a server yet. Can we add that snippet? Or better yet, can we just show how to run inference on an image the most simple way?
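For reference, a local server can be started with Docker before the SDK points at `http://localhost:9001`. This is a sketch; the CPU image name is assumed from Roboflow's Docker Hub registry and should be checked against the deployment docs:

```shell
# Hedged example: start a local Inference server on port 9001.
# Image name (roboflow/roboflow-inference-server-cpu) is an assumption;
# there are also GPU and Jetson variants.
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu
```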
Suggested change (replacing the `local="http://localhost:9001"` line):

```python
from inference.models.utils import get_roboflow_model

model = get_roboflow_model(...)
result = model.infer(image)  # image can be a URL string, numpy array, PIL image, etc.
```
> Create a new Python file and add the following code:
I think this is a convoluted way to run inference on a video now that we have the stream interface. Can this guide look more like the webcam stream guide and the RTSP stream guide?
Do you have a code snippet? I don't have any examples to hand.
You can just pass the path of a video file or the URL of a hosted video file to the `source` parameter.
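To make that concrete, here is a hypothetical sketch of how a stream helper can dispatch on a single `source` argument; `resolve_source` is an illustrative name, not part of the Inference API:

```python
# Hypothetical helper illustrating that one `source` parameter can cover
# webcams, RTSP/HTTP streams, and local video files. Not part of Inference.
def resolve_source(source):
    """Classify a video source the way a stream interface might."""
    if isinstance(source, int):
        return "webcam"          # e.g. source=0 for the default camera
    s = str(source)
    if s.startswith(("rtsp://", "http://", "https://")):
        return "stream-url"      # RTSP stream or hosted video URL
    return "file-path"           # local video file on disk

print(resolve_source(0))             # webcam
print(resolve_source("rtsp://cam"))  # stream-url
print(resolve_source("clip.mp4"))    # file-path
```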
docs/quickstart/run_model_on_rtsp.md (Outdated)

> You can run computer vision models on RTSP stream frames with Inference.
This guide is nearly identical to the webcam guide. Should we combine them into a single guide?
docs/foundation/about.md (Outdated)

> Foundation models are being built for a range of vision tasks, from image segmentation to classification to zero-shot object detection.
>
> Autodistill supports the following foundation models:
Typo? Autodistill --> Inference
docs/foundation/about.md (Outdated)

> Autodistill supports the following foundation models:
>
> - LC2S-Net: Detect the direction in which someone is looking.
I think we call this Gaze.
Description
This PR restructures the Roboflow Inference documentation.
Type of change
This change only affects documentation and does not modify Inference code.
How has this change been tested, please provide a testcase or example of how you tested the change?
This change was tested by opening http://localhost:8000 and checking that each page was formatted correctly.

Any specific deployment considerations

The documentation will be deployed when this PR is merged to main.
Docs
See above.