Commit

Change serving docs from "mlflow pyfunc serve" to "mlflow models serve" (mlflow#1321)

* Remove pyfunc server from docs

* Doc tweak

* whitespace

* Hyperlink fixes
dbczumar authored May 23, 2019
1 parent 0f6d4b2 commit 156ac19
Showing 5 changed files with 14 additions and 26 deletions.
2 changes: 1 addition & 1 deletion README.rst
@@ -98,7 +98,7 @@ MLflow artifacts and then load them again for serving. There is an example train
Score: 0.666
Model saved in run <run-id>

-    $ mlflow pyfunc serve -r <run-id> -m model
+    $ mlflow models serve --model-uri runs:/<run-id>/model

$ curl -d '{"columns":[0],"index":[0,1],"data":[[1],[-1]]}' -H 'Content-Type: application/json' localhost:5000/invocations
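The request body in the ``curl`` command above is a pandas DataFrame serialized in the ``split`` orientation. A minimal Python sketch of building that payload with the standard library (the column label and values are the illustrative placeholders from the example, not a real model schema):

```python
import json

# Build a JSON payload in the pandas "split" orientation, the format
# the MLflow scoring server accepts. Values mirror the curl example.
payload = {
    "columns": [0],        # column labels
    "index": [0, 1],       # row labels
    "data": [[1], [-1]],   # one row of values per index entry
}
body = json.dumps(payload)
print(body)
```

The resulting string can then be POSTed to ``localhost:5000/invocations`` with a ``Content-Type: application/json`` header, exactly as the ``curl`` example does.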

7 changes: 3 additions & 4 deletions docs/source/models.rst
@@ -455,7 +455,7 @@ Not all deployment methods are available for all model flavors.
:local:
:depth: 1

-.. _model_deployment:
+.. _local_model_deployment:

Deploy MLflow models locally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -643,9 +643,8 @@ MLflow provides a default Docker image definition; however, it is up to you to b
MLflow includes the utility function ``build_and_push_container`` to perform this step. Once built and uploaded, you can use the MLflow container for all MLflow Models. Model webservers deployed using the :py:mod:`mlflow.sagemaker`
module accept the following data formats as input, depending on the deployment flavor:

-* ``python_function``: For this deployment flavor, the endpoint accepts the same formats
-  as the pyfunc server. These formats are described in the
-  :ref:`pyfunc deployment documentation <pyfunc_deployment>`.
+* ``python_function``: For this deployment flavor, the endpoint accepts the same formats described
+  in the :ref:`local model deployment documentation <local_model_deployment>`.

* ``mleap``: For this deployment flavor, the endpoint accepts `only`
JSON-serialized pandas DataFrames in the ``split`` orientation. For example,
19 changes: 4 additions & 15 deletions docs/source/quickstart.rst
@@ -162,19 +162,19 @@ simple REST server for python-based models:

.. code-block:: bash
-    mlflow pyfunc serve -r <RUN_ID> -m model
+    mlflow models serve -m runs:/<RUN_ID>/model
.. note::

By default the server runs on port 5000. If that port is already in use, use the `--port` option to
-   specify a different port. For example: ``mlflow pyfunc serve --port 1234 -r <RUN_ID> -m model``
+   specify a different port. For example: ``mlflow models serve -m runs:/<RUN_ID>/model --port 1234``

Once you have started the server, you can pass it some sample data and see the
predictions.

The following example uses ``curl`` to send a JSON-serialized pandas DataFrame with the ``split``
-orientation to the pyfunc server. For more information about the input data formats accepted by
-the pyfunc model server, see the :ref:`MLflow deployment tools documentation <pyfunc_deployment>`.
+orientation to the model server. For more information about the input data formats accepted by
+the pyfunc model server, see the :ref:`MLflow deployment tools documentation <local_model_deployment>`.

.. code-block:: bash
@@ -184,15 +184,4 @@ which returns::

{"predictions": [1, 0]}
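The response body is plain JSON, so extracting the predictions programmatically is a one-liner; a small sketch assuming the example response shown above:

```python
import json

# Parse the scoring server's JSON response from the example above.
response_text = '{"predictions": [1, 0]}'
predictions = json.loads(response_text)["predictions"]
print(predictions)  # [1, 0]
```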

-.. note::
-
-    The ``sklearn_logistic_regression/train.py`` script must be run with the same Python version as
-    the version of Python that runs ``mlflow pyfunc serve``. If they are not the same version,
-    the stacktrace below may appear::
-
-      File "/usr/local/lib/python3.6/site-packages/mlflow/sklearn.py", line 54, in _load_model_from_local_file
-          return pickle.load(f)
-      UnicodeDecodeError: 'ascii' codec can't decode byte 0xc6 in position 0: ordinal not in range(128)


For more information, see :doc:`models`.
8 changes: 4 additions & 4 deletions docs/source/tutorial.rst
@@ -291,7 +291,7 @@ in MLflow saved the model as an artifact within the run.

.. code-block:: bash
-    mlflow pyfunc serve -m /Users/mlflow/mlflow-prototype/mlruns/0/7c1a0d5c42844dcdb8f5191146925174/artifacts/model -p 1234
+    mlflow models serve -m /Users/mlflow/mlflow-prototype/mlruns/0/7c1a0d5c42844dcdb8f5191146925174/artifacts/model -p 1234
.. note::

@@ -302,9 +302,9 @@ in MLflow saved the model as an artifact within the run.

Once you have deployed the server, you can pass it some sample data and see the
predictions. The following example uses ``curl`` to send a JSON-serialized pandas DataFrame
-with the ``split`` orientation to the pyfunc server. For more information about the input data
-formats accepted by the pyfunc model server, see the
-:ref:`MLflow deployment tools documentation <pyfunc_deployment>`.
+with the ``split`` orientation to the model server. For more information about the input data
+formats accepted by the model server, see the
+:ref:`MLflow deployment tools documentation <local_model_deployment>`.

.. code-block:: bash
4 changes: 2 additions & 2 deletions examples/flower_classifier/README.rst
@@ -85,12 +85,12 @@ run_id ``101``.

- To test REST api scoring do the following two steps:

-1. Deploy the model as a local REST endpoint by running mlflow pyfunc serve:
+1. Deploy the model as a local REST endpoint by running ``mlflow models serve``:

.. code-block:: bash
# deploy the model to local REST api endpoint
-    mlflow pyfunc serve -p 54321 -r 101 -m model
+    mlflow models serve --model-uri runs:/101/model --port 54321
2. Apply the model to new data using the provided score_images_rest.py script:
