In this spirit, we raise a toast to FP16, or float16, also known as the half-precision floating-point format. In some circumstances, and for some algorithms, it can be a more appropriate choice than single-precision floating point, which occupies 32 bits (and is therefore also called FP32 or float32).
But why use a less precise data type at all, when higher-precision options are available?
Half-precision values are useful in applications where perfect precision is not required, such as image processing and neural networks.
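As a rough illustration (a minimal sketch, assuming NumPy is available), the snippet below shows how few significant digits float16 retains compared with float32, and how much smaller its representable range is:

```python
import numpy as np

x = 1 / 3

half = np.float16(x)     # 16-bit: roughly 3-4 significant decimal digits
single = np.float32(x)   # 32-bit: roughly 7 significant decimal digits

print(f"float16: {half:.10f}")    # something like 0.3332519531
print(f"float32: {single:.10f}")  # something like 0.3333333433

# float16 also has a much smaller range: the largest finite value is 65504,
# so anything bigger overflows to infinity.
print(np.float16(70000))  # inf
```

For tasks like blending pixel values or storing neural-network weights, those three or four digits are often enough, and the halved memory footprint and bandwidth are a worthwhile trade.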