Deep Learning

TensorFlow 2.9.0 Released

May 18, 2022
18 min read

TensorFlow 2.9.0 Now Available

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

The newest version of TensorFlow brings a number of major features, improvements, bug fixes and other changes.

Highlights include performance improvements with oneDNN, and the release of DTensor, a new API for model distribution that can be used to seamlessly move from data parallelism to model parallelism.


Interested in a deep learning solution?
Learn more about Exxact AI workstations starting around $5,500


Breaking Changes

  • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
  • Build, Compilation and Packaging
    • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
    • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum pip version supporting manylinux2014 is pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
    • Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
  • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead.
    • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes (a minimal before/after sketch follows this list):
      • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
      • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
      • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you pass anything other than "dynamic" as the second argument, see the first case in the next list.
    • In the following rare cases, you need to make more changes when switching to the non-experimental API:
      • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
      • If you passed a value to the loss_scale argument (the second argument) of Policy:
        • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
      • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
        • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
  • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for the now-removed tf.keras.mixed_precision.experimental API. The symbols are still available under tf.compat.v1.mixed_precision.
  • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).
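
To make the mixed-precision migration concrete, here is a minimal before/after sketch. The SGD optimizer and learning rate are arbitrary placeholders; the key point is that the second argument to LossScaleOptimizer is no longer needed, since dynamic loss scaling is the default.

    import tensorflow as tf

    # Before (removed in TF 2.9):
    #   tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
    #   opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")

    # After (non-experimental API, available since TF 2.4):
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    opt = tf.keras.optimizers.SGD(learning_rate=0.01)
    # Dynamic loss scaling is the default; no second argument is needed.
    opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)

    print(tf.keras.mixed_precision.global_policy())  # <Policy "mixed_float16">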

Major Features and Improvements

    • tf.keras:
      • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies
      • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
      • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
      • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
      • Added tf.keras.layers.RandomBrightness layer for image preprocessing. (These new layers are shown together in a short sketch after this list.)
      • Added APIs for switching between interactive logging and absl logging. By default, Keras writes logs to stdout. However, this is not ideal in a non-interactive environment, where you don't have access to stdout and can only view captured logs. You can use tf.keras.utils.disable_interactive_logging() to route the logs to absl logging instead, tf.keras.utils.enable_interactive_logging() to switch back to stdout, and tf.keras.utils.is_interactive_logging_enabled() to check which mode is active (see the usage sketch after this list).
      • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
      • The jit_compile argument of Model.compile() now applies to Model.evaluate() and Model.predict() as well. Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps with XLA. Note that jit_compile=True may not work for all models (see the sketch after this list).
      • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental, and you are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
    • tf.lite:
      • Added TFLite builtin op support for the following TF ops:
        • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
        • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
      • Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
      • Add support for unsigned 16-bit integer tensor types in cast op.
      • Experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
      • Enabled a new MLIR-based dynamic range quantization backend by default.
        • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
        • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change.
      • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future.
    • tf.function:
      • Custom classes used as arguments for tf.function can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through tf.types.experimental.SupportsTracingProtocol.
      • TypeSpec classes (as associated with ExtensionTypes) also implement the Tracing Protocol, which can be overridden if necessary.
      • The newly introduced reduce_retracing option also uses the Tracing Protocol to proactively generate generalized traces, similar to experimental_relax_shapes (which has now been deprecated). A usage sketch follows this list.
    • Unified eager and tf.function execution:
      • Eager mode can now execute each op as a tf.function, allowing for more consistent feature support in future releases.
      • It is available for immediate use.
        • See the TF_RUN_EAGER_OP_AS_FUNCTION environment variable in eager context.
        • Eager performance should be similar with this feature enabled.
          • A roughly 5us per-op overhead may be observed when running many small functions.
          • Note a known issue with GPU performance.
        • The behavior of tf.function itself is unaffected.
      • Note: This feature will be enabled by default in an upcoming version of TensorFlow.
    • tf.experimental.dtensor: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and subject to backward-incompatible changes. DTensor and Keras integration is published under tf.keras.dtensor in this release (refer to the tf.keras entry). The tutorial and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned.
    • oneDNN CPU performance optimizations are available in Linux x86, Windows x86, and Linux aarch64 packages.
      • Linux x86 packages:
        • oneDNN optimizations are enabled by default on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, AMX, etc. (Intel Cascade Lake and newer CPUs.)
        • For older CPUs, oneDNN optimizations are disabled by default.
      • Windows x86 package: oneDNN optimizations are disabled by default.
      • Linux aarch64 (--config=mkl_aarch64) package:
        • Experimental oneDNN optimizations are disabled by default.
        • If you experience issues with the oneDNN optimizations enabled, we recommend turning them off.
      • To explicitly enable or disable oneDNN optimizations, set the environment variable TF_ENABLE_ONEDNN_OPTS to 1 (enable) or 0 (disable) before running TensorFlow. (The variable is checked during import tensorflow.) To fall back to default settings, unset the environment variable. A short snippet follows this list.
      • These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
      • To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.
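
As a quick illustration of the new Keras layers mentioned above, here is a minimal sketch. The input shape, layer sizes, and factor values are arbitrary placeholders chosen for the example.

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(64, 64, 3))
    # New in 2.9: random brightness adjustment for image preprocessing.
    x = tf.keras.layers.RandomBrightness(factor=0.2)(inputs)
    x = tf.keras.layers.Flatten()(x)
    # New in 2.9: regularizer encouraging orthogonal rows in the kernel.
    x = tf.keras.layers.Dense(
        32,
        kernel_regularizer=tf.keras.regularizers.OrthogonalRegularizer(factor=0.01),
    )(x)
    # New in 2.9: L2 unit normalization along the last axis.
    outputs = tf.keras.layers.UnitNormalization()(x)
    model = tf.keras.Model(inputs, outputs)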
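
The logging switch described under tf.keras can be used roughly as follows (a minimal sketch; where training happens is up to you):

    import tensorflow as tf

    # Route Keras progress output to absl logging instead of stdout,
    # e.g. on remote workers where stdout is not visible.
    tf.keras.utils.disable_interactive_logging()
    print(tf.keras.utils.is_interactive_logging_enabled())  # False

    # ... run training/evaluation here ...

    # Switch back to interactive (stdout) logging.
    tf.keras.utils.enable_interactive_logging()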
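
Here is a hedged sketch of the jit_compile and verbose="auto" behavior described above. The model and random data are toy placeholders, and XLA compilation may not work for every model.

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    # jit_compile=True now applies to evaluate() and predict() as well as fit().
    model.compile(optimizer="adam", loss="mse", jit_compile=True)

    x = np.random.rand(256, 8).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(x, y, epochs=1)
    # verbose defaults to "auto" (verbose=1 in most cases; verbose=2 with
    # ParameterServerStrategy or with interactive logging disabled).
    model.evaluate(x, y)
    model.predict(x)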
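
The new reduce_retracing option mentioned under tf.function can be enabled like this (a minimal sketch; the function body is a placeholder):

    import tensorflow as tf

    # reduce_retracing replaces the deprecated experimental_relax_shapes flag.
    @tf.function(reduce_retracing=True)
    def double(x):
        return x * 2

    # Differently shaped inputs can reuse a generalized trace instead of
    # triggering a new retrace for every input shape.
    double(tf.constant([1.0, 2.0]))
    double(tf.constant([1.0, 2.0, 3.0]))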
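
To toggle the oneDNN optimizations explicitly, the environment variable must be set before TensorFlow is imported, for example:

    import os

    # Checked during `import tensorflow`; "1" enables, "0" disables.
    # Unset the variable to fall back to the platform defaults.
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

    import tensorflow as tf
    print(tf.__version__)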

Bug Fixes and Other Changes

    • tf.data:
      • Fixed bug in tf.data.experimental.parse_example_dataset when tf.io.RaggedFeatures would specify value_key but no partitions. Before the fix, setting value_key but no partitions would result in the feature key being replaced by the value key, e.g. {'value_key': <RaggedTensor>} instead of {'key': <RaggedTensor>}. Now the correct feature key will be used. This aligns the behavior of tf.data.experimental.parse_example_dataset to match the behavior of tf.io.parse_example.
      • Added a new field, filter_parallelization, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will run the Filter transformation with multiple threads. Its default value is False if not specified (see the sketch after this list).
    • tf.keras:
      • Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are ShardedVariables (used for training with tf.distribute.experimental.ParameterServerStrategy).
    • tf.random:
      • Added tf.random.experimental.index_shuffle, for shuffling a sequence without materializing the sequence in memory.
    • tf.RaggedTensor:
      • Introduced tf.experimental.RowPartition, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
      • Introduced tf.experimental.DynamicRaggedShape, which represents the shape of a RaggedTensor.
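
The new filter_parallelization option mentioned under tf.data can be enabled through tf.data.Options, roughly as follows. This is a minimal sketch; the dataset and filter predicate are placeholders.

    import tensorflow as tf

    dataset = tf.data.Dataset.range(1_000_000).filter(lambda x: x % 3 == 0)

    options = tf.data.Options()
    # Defaults to False; set to True to run Filter with multiple threads.
    options.experimental_optimization.filter_parallelization = True
    dataset = dataset.with_options(options)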

Security

This release also includes a number of security fixes; the full list of patched issues and CVE advisories is in the official release notes linked below.

Install TensorFlow 2

Click here to install TensorFlow 2

Download TensorFlow 2.9.0 on the GitHub page:
https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0

