Releases: apple/coremltools

coremltools 5.0b3

16 Aug 21:00 · c6354af
Pre-release
  • Native M1 support for Python 3.8 and Python 3.9
  • Adds the compute_units parameter to MLModel and coremltools.convert. Use it to specify where your model can run (see the sketch after this list):
    • ALL - use all compute units available, including the neural engine.
    • CPU_ONLY - limit the model to only use the CPU.
    • CPU_AND_GPU - use both the CPU and GPU, but not the neural engine.
  • This change deprecates the useCPUOnly parameter of MLModel and coremltools.convert.
  • For ML programs, the default compute precision has changed from float 32 to float 16. This can be overridden with the compute_precision parameter of coremltools.convert.
  • Support for TensorFlow 2.5
  • Removed scipy dependency
  • Various bug fixes and optimizations
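
A minimal sketch of the new compute_units parameter, assuming the enum is exposed as coremltools.ComputeUnit with the values listed above:

```python
import coremltools as ct

# Convert a model and restrict execution to CPU and GPU (no neural engine).
# "source_model" is a placeholder for a traced PyTorch or TensorFlow model.
mlmodel = ct.convert(
    source_model,
    compute_units=ct.ComputeUnit.CPU_AND_GPU,  # or ALL, CPU_ONLY
)

# The same parameter applies when loading an existing model.
mlmodel_cpu = ct.MLModel("MyModel.mlpackage", compute_units=ct.ComputeUnit.CPU_ONLY)
```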

coremltools 5.0b2

07 Jul 21:06 · b074a31
Pre-release
  • Python 3.9 support
  • Ubuntu 18 support
  • Torch 1.9.0 support
  • Added a flag to skip loading the model during conversion. Useful when converting a model that targets a newer macOS while running on an older macOS (see the sketch after this list).
  • New torch ops: affine_grid_generator, grid_sampler, linear, maximum, minimum, SiLU
  • New fuse-activation-SiLU optimization pass
  • Added no-op transpose handling to the noop_elimination pass
  • Various bug fixes and other improvements, including:
    • Bug fix in the coremltools.utils.rename_feature utility for the ML Program spec
    • Bug fix in classifier model conversion for the ML Program target
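
A sketch of the skip-loading flag, assuming it is exposed as the skip_model_load argument of coremltools.convert (the flag name here is an assumption; check the convert() docs):

```python
import coremltools as ct

# Convert a model that targets a newer macOS than the machine doing the conversion.
# Skipping the load step means coremltools never asks the local Core ML framework
# to compile and load the converted model, so conversion can still succeed.
# "traced_model" is a placeholder for a traced PyTorch or TensorFlow model.
mlmodel = ct.convert(
    traced_model,
    convert_to="mlprogram",
    skip_model_load=True,  # assumed flag name
)
mlmodel.save("MyModel.mlpackage")
```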

coremltools 5.0b1

08 Jun 21:14 · f19052c
Pre-release

To install this version run: pip install coremltools==5.0b1

What's New

  • Added a new kind of Core ML model type, called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
    • To learn about ML Programs, how they differ from the classical Core ML neural network types, and what they offer, please see the documentation.
    • Use the convert_to argument with the unified converter API to indicate the model type of the Core ML model.
      • coremltools.convert(..., convert_to="mlprogram") converts to a Core ML model of type ML program.
      • coremltools.convert(..., convert_to="neuralnetwork") converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Calling coremltools.convert(...) without convert_to defaults to producing a neural network Core ML model.
    • When targeting ML program, there is an additional option to set the compute precision of the Core ML model to either float 32 or float 16. That is,
      • ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) or ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)
      • To learn more about how this affects the runtime, see the documentation on Typed execution.
  • You can save to the new Model Package format through the usual coremltools save method. Simply use model.save("<model_name>.mlpackage") instead of the usual model.save("<model_name>.mlmodel") (see the end-to-end sketch after this list).
    • Core ML is introducing a new model format called model packages. It’s a container that stores each of a model’s components in its own file, separating out its architecture, weights, and metadata. By separating these components, model packages allow you to easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools which read and write models.
    • ML Programs can only be saved in the model package format.
  • Several performance improvements by adding new graph passes to the conversion pipeline for deep learning models, including "fuse_gelu", "replace_stack_reshape", "concat_to_pixel_shuffle", "fuse_layernorm_or_instancenorm", etc.
  • New translation methods for Torch ops such as "einsum", "GRU", "zeros_like", etc.
  • OS versions supported by coremltools 5.0b1: macOS 10.15 and above, Linux with C++17 and above
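
Putting these pieces together, a minimal sketch (traced_model is a placeholder for the traced PyTorch or TensorFlow model you would normally pass to the converter):

```python
import coremltools as ct

# Convert to an ML Program with an explicit compute precision.
mlmodel = ct.convert(
    traced_model,  # placeholder: a traced PyTorch or TensorFlow model
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,
)

# ML Programs can only be saved in the Model Package format.
mlmodel.save("MyModel.mlpackage")
```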

Deprecations and Removals

  • Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
  • Keras.io and ONNX converters will be deprecated in coremltools 6. We recommend transitioning to TensorFlow/PyTorch conversion via the unified converter API.
  • Methods that had been deprecated in coremltools 4, such as convert_neural_network_weights_to_fp16() and convert_neural_network_spec_weights_to_fp16(), have been removed.

Known Issues

  • The default compute precision for conversion to ML Programs is set to precision.FLOAT32, although it will be updated to precision.FLOAT16 in a later beta release, prior to the official coremltools 5.0 release.
  • Core ML may downcast float32 tensors specified in ML Program model types when running on a device with Neural Engine support. Workaround: restrict compute units to .cpuAndGPU in MLModelConfiguration for seed 1.
  • Converting some models to ML Program may lead to an error (such as a segmentation fault or "Error in building plan") due to a bug in the Core ML GPU runtime. Workaround: when using coremltools, you can force prediction to stay on the CPU, without changing the prediction code, by specifying the useCPUOnly argument during conversion, that is, ct.convert(source_model, convert_to='mlprogram', useCPUOnly=True). For such models, in your Swift code use the MLComputeUnits.cpuOnly option when loading the model to restrict the compute unit to CPU.
  • Flexible input shapes for image inputs have a bug when used with the ML Program type in seed 1 of the Core ML framework. This will be fixed in an upcoming seed release.
  • coremltools 5.0b1 supports Python versions 3.5, 3.6, 3.7, and 3.8. Support for Python 3.9 will be enabled in a future beta release.

coremltools 4.1

05 Feb 02:07 · 1931758
  • Support for Python 2 is deprecated. This release contains wheels for Python 3.5, 3.6, 3.7, and 3.8.
  • PyTorch converter updates:
    • Added translation methods for the ops topK, groupNorm, log10, and pad, and for stacked LSTMs
    • Support for PyTorch 1.7
  • TensorFlow Converter updates:
    • Added translation functions for the Mfcc and AudioSpectrogram ops
  • Miscellaneous bug fixes

coremltools 4.0

10 Oct 18:21 · 523d5e0

What's new in coremltools 4.0

  • New documentation available at http://coremltools.readme.io.
  • New converters from PyTorch, TensorFlow 1, and TensorFlow 2 available via the new unified converter API, ct.convert()
  • New Model Intermediate Language (MIL) builder library, with which the new converters have been implemented. MIL makes it easy to build neural network models directly or to implement composite operations.
  • New utilities to configure inputs while converting from PyTorch and TensorFlow, using ct.convert() with ct.ImageType(), ct.ClassifierConfig(), etc. (a sketch follows this list); see details: https://coremltools.readme.io/docs/neural-network-conversion.
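
An illustrative sketch of the input-configuration utilities; the model, labels, and preprocessing values are placeholders, not part of the release:

```python
import tensorflow as tf
import coremltools as ct

# Any tf.keras image classifier works here; MobileNetV2 is just an example.
keras_model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Hypothetical label list; in practice, load the labels that match your model.
class_labels = [str(i) for i in range(1000)]

mlmodel = ct.convert(
    keras_model,
    # Treat the input as an image and fold in MobileNet-style [-1, 1] scaling.
    inputs=[ct.ImageType(shape=(1, 224, 224, 3), scale=1 / 127.5, bias=[-1, -1, -1])],
    classifier_config=ct.ClassifierConfig(class_labels),
)
mlmodel.save("ImageClassifier.mlmodel")
```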

Highlights of Core ML 4

  • Model Deployment
  • Model Encryption
  • Unified converter API with PyTorch and TensorFlow 2 support in coremltools 4
  • MIL builder for neural networks and composite ops in coremltools 4 (see the MIL sketch after this list)
  • New layers in neural network:
    • CumSum
    • OneHot
    • ClampedReLu
    • ArgSort
    • SliceBySize
    • Convolution3D
    • Pool3D
    • Bilinear Upsample with align corners and fractional factors
    • PixelShuffle
    • MatMul with int8 weights and int8 activations
    • Concat interleave
    • See NeuralNetwork.proto
  • Enhanced Xcode model view with interactive previews
  • Enhanced Xcode Playground support for Core ML models
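
A minimal sketch of the MIL builder mentioned above; the program itself is arbitrary and only illustrates the builder API:

```python
from coremltools.converters.mil import Builder as mb

# Define a small MIL program directly: relu, transpose, then a mean reduction.
@mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, 3))])
def prog(x):
    x = mb.relu(x=x, name="relu")
    x = mb.transpose(x=x, perm=[0, 3, 1, 2], name="transpose")
    x = mb.reduce_mean(x=x, axes=[2, 3], keep_dims=False, name="reduce_mean")
    return x

# Printing the program shows the MIL text form of the ops above.
print(prog)
```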

coremltools 4.0b4

01 Oct 20:44 · cf5f317
Pre-release
  • Several bug fixes, including:

    • Fix in the rename_feature API when used with a neural network model that has image inputs
    • Bug fixes in conversion of torch ops such as layer norm, flatten, conv transpose, expand, dynamic reshape, slice, etc.
    • Fixes when converting from PyTorch 1.6.0
    • Fixes to support the .pth extension, in addition to the .pt extension, for torch conversion
    • Fixes in TF2 LSTM with dynamic batch size
    • Fixes in control flow models with TF 2.3.0
    • Fixes for numerical issues with the inverse layer on a few devices, by increasing the lower bound of the output
  • Added conversion functions for PyTorch ops such as neg, sum, repeat, where, adaptive_max_pool2d, floordiv, etc.

  • Updated docstrings for several MIL ops

  • Support for TF1 models with fake quant ops when used with convolution ops

  • Several new MIL optimization passes, such as no-op elimination, pad and conv fusion, etc.

coremltools 4.0b3

18 Aug 22:59 · 9473763
Pre-release

What's New

  • Support for PyTorch 1.6
  • concat with interleave option
  • New Torch ops support added
    • acos
    • acosh
    • argsort
    • asin
    • asinh
    • atan
    • atanh
    • avg_pool3d
    • bmm
    • ceil
    • cos
    • cosh
    • cumsum
    • elu
    • exp
    • exp2
    • floor
    • gather
    • hardsigmoid
    • is_floating_point
    • leaky_relu
    • log
    • max_pool
    • prelu
    • reciprocal
    • relu6
    • round
    • rsqrt
    • sign
    • sin
    • sinh
    • softplus
    • softsign
    • sqrt
    • square
    • tan
    • tanh
    • threshold
    • true_divide
  • Improved TF2 test coverage
  • MIL definition update
    • LSTM activation function moved from TupleInput to individual inputs
  • Improvements in MIL infrastructure

Known Issues

  • TensorFlow 2 model conversion is supported only for models with a single concrete function.
  • Conversion for TensorFlow and PyTorch models with quantized weights is currently not supported.

coremltools 4.0b2

27 Jul 21:22 · 37e619d
Pre-release

What's New

  • Improved documentation available at http://coremltools.readme.io.
  • New converter path to directly convert PyTorch models without going through ONNX (see the sketch after this list).
  • Enhanced TensorFlow 2 conversion support, which now includes support for dynamic control flow and LSTM layers. Support for several popular models and architectures, including Transformers such as GPT and BERT-variants.
  • New unified conversion API ct.convert() for converting PyTorch and TensorFlow (including tf.keras) models.
  • New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations.
  • New utilities to configure inputs while converting from PyTorch and TensorFlow, using ct.convert() with ct.ImageType(), ct.ClassifierConfig(), etc., see details: https://coremltools.readme.io/docs/neural-network-conversion.
  • The onnx-coreml converter has been moved into coremltools and can be accessed as ct.converters.onnx.convert().
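
A minimal sketch of the direct PyTorch path mentioned above; TorchVision's MobileNetV2 is used only as a stand-in model:

```python
import torch
import torchvision
import coremltools as ct

# Trace the PyTorch model with an example input; the converter consumes TorchScript.
torch_model = torchvision.models.mobilenet_v2(pretrained=True).eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)

# Convert directly, with no intermediate ONNX export.
mlmodel = ct.convert(traced_model, inputs=[ct.TensorType(shape=example_input.shape)])
mlmodel.save("MobileNetV2.mlmodel")
```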

Deprecations

  • Deprecated the following classes and methods

    • NeuralNetworkShaper class.
    • get_allowed_shape_ranges().
    • can_allow_multiple_input_shapes().
    • visualize_spec() method of the MLModel class.
    • quantize_spec_weights(); use the quantize_weights() method instead.
    • get_custom_layer_names(), replace_custom_layer_name(), has_custom_layer(); these have been moved to internal methods.
  • Deprecation warnings have been added; these APIs will be removed in the next major release.

Known Issues

  • The latest version of PyTorch tested to work with the converter is 1.5.0.
  • TensorFlow 2 model conversion is supported only for models with a single concrete function.
  • Conversion for TensorFlow and PyTorch models with quantized weights is currently not supported.
  • coremltools.utils.rename_feature does not correctly rename the output feature of a neural network classifier model.
  • The leaky_relu layer has not yet been added to the PyTorch converter, although it is supported in MIL and the TensorFlow converters.

coremltools 4.0b1

22 Jun 23:17 · 7fc09d2
Pre-release

What's New

  • New documentation available at http://coremltools.readme.io.
  • New converter path to directly convert PyTorch models without going through ONNX.
  • Enhanced TensorFlow 2 conversion support, which now includes support for dynamic control flow and LSTM layers. Support for several popular models and architectures, including Transformers such as GPT and BERT-variants.
  • New unified conversion API ct.convert() for converting PyTorch and TensorFlow (including tf.keras) models.
  • New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations.
  • New utilities to configure inputs while converting from PyTorch and TensorFlow, using ct.convert() with ct.ImageType(), ct.ClassifierConfig(), etc., see details: https://coremltools.readme.io/docs/neural-network-conversion.
  • The onnx-coreml converter has been moved into coremltools and can be accessed as ct.converters.onnx.convert() (see the sketch below).
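
A sketch of the relocated ONNX path; the model path is a placeholder, and the arguments assume the original onnx-coreml convert() signature carried over unchanged:

```python
import coremltools as ct

# Convert an existing ONNX model through the relocated onnx-coreml converter.
# "MyModel.onnx" is a placeholder path; minimum_ios_deployment_target is assumed
# to behave as it did in onnx-coreml.
mlmodel = ct.converters.onnx.convert(
    model="MyModel.onnx",
    minimum_ios_deployment_target="13",
)
mlmodel.save("MyModel.mlmodel")
```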

Deprecations

  • Deprecated the following classes and methods

    • NeuralNetworkShaper class.
    • get_allowed_shape_ranges().
    • can_allow_multiple_input_shapes().
    • visualize_spec() method of the MLModel class.
    • quantize_spec_weights(); use the quantize_weights() method instead.
    • get_custom_layer_names(), replace_custom_layer_name(), has_custom_layer(); these have been moved to internal methods.
  • Deprecation warnings have been added; these APIs will be removed in the next major release.

Known Issues

  • TensorFlow 2 model conversion is supported only for models with a single concrete function.
  • Conversion for TensorFlow and PyTorch models with quantized weights is currently not supported.
  • coremltools.utils.rename_feature does not correctly rename the output feature of a neural network classifier model.
  • The leaky_relu layer has not yet been added to the PyTorch converter, although it is supported in MIL and the TensorFlow converters.

coremltools 3.4

19 May 18:42 · a21651b
  • Added support for tf.einsum op
  • Bug fixes in image pre-processing error handling, the quantization function for the embeddingND layer, and conversion of the tf.stack op
  • Updated the transpose removal mlmodel pass
  • Fixed import statement to support scikit-learn >=0.21 (@sapieneptus)
  • Added deprecation warnings for class NeuralNetworkShaper and methods visualize_spec, quantize_spec_weights
  • Renamed a few functions that were unintentionally exposed in the public API to internal names, by prepending an underscore. The original methods still work, but deprecation warnings have been added.