diff --git a/EAST_TFLite.ipynb b/EAST_TFLite.ipynb
index 9606d3e..804d196 100644
--- a/EAST_TFLite.ipynb
+++ b/EAST_TFLite.ipynb
@@ -179,16 +179,6 @@
         "If we export the float16 model with a fixed known input shape, we can likely accelerate its inference with the TFLite GPU delegate. We can specify the `input_shapes` argument in the `tf.compat.v1.lite.TFLiteConverter.from_frozen_graph()` function to do this. We are going to follow this same principle for the other quantization methods (i.e. int8 and dynamic-range) as well."
       ]
     },
-    {
-      "cell_type": "markdown",
-      "metadata": {
-        "id": "ZiSFtIOBwCXW",
-        "colab_type": "text"
-      },
-      "source": [
-        "For int8 and dynamic-range quantized models we can simply use `tf.compat.v1.lite.TFLiteConverter.from_frozen_graph`."
-      ]
-    },
     {
       "cell_type": "code",
       "metadata": {
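For reference, below is a minimal sketch of the fixed-shape float16 conversion the retained markdown cell describes. The frozen-graph path, the input tensor name, the output tensor names, and the `[1, 320, 320, 3]` shape are assumptions for illustration; substitute the names actually used in the notebook's frozen EAST graph.

```python
import tensorflow as tf

# Sketch: convert a frozen graph to a float16 TFLite model with a fixed
# input shape, so the GPU delegate can plan its kernels ahead of time.
# All file paths and tensor names below are hypothetical placeholders.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="east_model_float.pb",        # hypothetical frozen-graph path
    input_arrays=["input_images"],               # hypothetical input tensor name
    output_arrays=[
        "feature_fusion/Conv_7/Sigmoid",         # hypothetical output tensor names
        "feature_fusion/concat_3",
    ],
    input_shapes={"input_images": [1, 320, 320, 3]},  # fixed, fully known shape
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # request float16 weights
tflite_model = converter.convert()

with open("east_float16.tflite", "wb") as f:
    f.write(tflite_model)
```

The same `from_frozen_graph()` call with `input_shapes` carries over to the int8 and dynamic-range paths; only the converter's quantization settings change.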