While this hasn't been an issue with NLP models, where inputs are int64 and the embedding layer converts them into the right float type based on the model's weight dtype, non-NLP models lack this conversion. The proposal is therefore for DeepSpeed to automatically convert float32 inputs to float16 when fp16 is enabled, and to do nothing when the inputs aren't fp32.
While users could convert the inputs themselves, this is not the norm, since AMP and Apex do it automatically.
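A minimal sketch of what such a conversion could look like (a hypothetical helper, not existing DeepSpeed API; the name and recursion over containers are assumptions):

```python
import torch

def cast_fp32_inputs_to_fp16(inputs, fp16_enabled: bool):
    """Hypothetical helper: recursively cast float32 tensors to float16.

    Anything that isn't an fp32 tensor (e.g. int64 token ids, bools,
    bf16/fp64 floats) passes through unchanged, matching the proposal.
    """
    if not fp16_enabled:
        return inputs
    if torch.is_tensor(inputs) and inputs.dtype == torch.float32:
        return inputs.half()
    if isinstance(inputs, (list, tuple)):
        return type(inputs)(cast_fp32_inputs_to_fp16(v, fp16_enabled) for v in inputs)
    if isinstance(inputs, dict):
        return {k: cast_fp32_inputs_to_fp16(v, fp16_enabled) for k, v in inputs.items()}
    return inputs  # non-tensor, non-container values pass through as-is
```

The engine's forward could apply something like this to `*args`/`**kwargs` before calling the wrapped module, mirroring what AMP/Apex do transparently.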
Context: huggingface/transformers#11638 (review)
Thank you!