Deep Learning
New TensorFlow Release v1.14.0
June 20, 2019
11 min read
Here is an overview of TensorFlow’s latest release, 1.14.0.

The shared framework library now ships under a versioned name: libtensorflow_framework.so.1 on Linux and libtensorflow_framework.1.dylib on macOS. The libtensorflow tarball archives contain the libtensorflow library and two symlinks; the MacOS .dylib libraries are the same, but match MacOS library naming requirements (i.e. libtensorflow.1.dylib):

- libtensorflow.so.1.14.0, the main library
- libtensorflow.so.1, symlinked to the main library
- libtensorflow.so, symlinked to .so.1
On the Keras side, loss handling changes in a few ways:

- The default loss reduction is now AUTO, for improving reliability of loss scaling with distribution strategy and custom training loops (see the sketch after this list). AUTO indicates that the reduction option will be determined by the usage context; for almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used in a distribution strategy scope, outside of built-in training loops such as tf.keras compile and fit, the reduction value is expected to be 'None' or 'SUM'; using other values will raise an error.
- Losses passed to the compile API (strings and v1 losses) which are not instances of the v2 Loss class are wrapped in a LossWrapper class, so all losses now use SUM_OVER_BATCH_SIZE reduction as the default.
- run_eagerly and distribution strategy are disabled if there are symbolic tensors added to the model using add_metric or add_loss.
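To make the reduction change concrete, here is a minimal sketch (not from the release notes) contrasting the default SUM_OVER_BATCH_SIZE reduction with an explicit SUM reduction on a v2 Keras loss. It assumes eager execution and that the reduction enum is reachable as tf.keras.losses.Reduction; on some 1.x builds it may only be exposed under tf.compat.v2.keras.losses.Reduction.

import numpy as np
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # so the loss objects return concrete values

y_true = np.array([[0.0], [1.0], [1.0], [0.0]], dtype=np.float32)
y_pred = np.array([[0.1], [0.8], [0.6], [0.4]], dtype=np.float32)

# Default reduction: SUM_OVER_BATCH_SIZE (mean of the per-example losses).
mse_mean = tf.keras.losses.MeanSquaredError()

# Explicit SUM reduction, as expected inside a distribution strategy scope
# when writing a custom training loop (enum location assumed, see above).
mse_sum = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM)

print(float(mse_mean(y_true, y_pred)))  # averaged over the 4 examples
print(float(mse_sum(y_true, y_pred)))   # summed; 4x the averaged value here

With SUM you are expected to divide by the global batch size yourself, which is exactly the situation the AUTO default is meant to make explicit under a distribution strategy.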
Other notable changes:

- In the map_vectorization optimization, the degree of parallelism in the vectorized map node is reduced.
- The environment variable TF_CUDA_HOST_MEM_LIMIT_IN_MB has been changed to TF_GPU_HOST_MEM_LIMIT_IN_MB (see the snippet after this list).
- norm_axis and params_axis are replaced with axis.
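As an illustration (not from the release notes), the renamed variable is set like any other environment variable, before TensorFlow initializes its GPU devices; the 8192 MB limit below is an arbitrary example value.

import os

# Cap the pinned host memory that TensorFlow's GPU host allocator may use.
# The value is in megabytes and must be set before TensorFlow touches the GPU.
os.environ["TF_GPU_HOST_MEM_LIMIT_IN_MB"] = "8192"  # was TF_CUDA_HOST_MEM_LIMIT_IN_MB

import tensorflow as tf  # import only after the variable is in place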
New Keras APIs include:

- A clear_losses API, to be able to clear losses at the end of the forward pass in a custom training loop in eager mode.
- Support for passing a list of lists to the metrics param in Keras compile (see the example after this list).
- cumsum and cumprod Keras backend functions.
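Here is a minimal sketch of the per-output metrics form (the model and metric choices are illustrative, not from the release notes): with a two-output model, a list of lists assigns one metric list to each output.

import tensorflow as tf

# A tiny two-output functional model, purely for illustration.
inputs = tf.keras.Input(shape=(8,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
regression = tf.keras.layers.Dense(1, name="regression")(hidden)
classification = tf.keras.layers.Dense(1, activation="sigmoid",
                                       name="classification")(hidden)
model = tf.keras.Model(inputs, [regression, classification])

# One loss per output, and a list of lists for metrics: the first inner list
# applies to the regression head, the second to the classification head.
model.compile(optimizer="adam",
              loss=["mse", "binary_crossentropy"],
              metrics=[["mae"], ["accuracy"]])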
Further Keras changes:

- A dynamic constructor argument in Layer and Model, which should be set to True when using imperative control flow in the call method (sketched below).
- Support for add_metric in the graph function mode.
- add_update can now be passed a zero-arg callable, in order to support turning off the update when setting trainable=False on a Layer of a Model compiled with run_eagerly=True.
- The weighted prefix is removed from weighted metric names.
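The dynamic flag is easiest to see in a subclassed layer. The following is a hedged sketch (the layer name, threshold, and logic are made up): it branches on a tensor's concrete value, which only works when the layer executes eagerly, so it declares dynamic=True.

import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # dynamic layers only execute eagerly

class ClipIfLarge(tf.keras.layers.Layer):
    """Illustrative layer: uses a Python `if` on a tensor value in call()."""

    def __init__(self, threshold=10.0, **kwargs):
        # dynamic=True marks the layer as eager-only, so Keras will not try
        # to trace call() into a graph.
        super(ClipIfLarge, self).__init__(dynamic=True, **kwargs)
        self.threshold = threshold

    def call(self, inputs):
        # Imperative control flow on a concrete value, hence dynamic=True.
        if float(tf.reduce_max(tf.abs(inputs))) > self.threshold:
            return tf.clip_by_value(inputs, -self.threshold, self.threshold)
        return inputs

    def compute_output_shape(self, input_shape):
        # Dynamic layers cannot be traced, so the output shape is declared.
        return input_shape

x = tf.constant([[3.0, 25.0]])
print(ClipIfLarge()(x))  # values above the threshold get clipped

In practice a model containing such a layer runs eagerly as well, which ties in with the run_eagerly notes above.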
Other API and packaging notes:

- defun gained an escape hatch to continue using the legacy Defun.
- The Python code now lives in the tensorflow_core package, and tensorflow is just a virtual pip package. No code changes are needed for projects using TensorFlow; the change is transparent.
- Use tf.compat.v1.estimator.inputs instead of tf.estimator.inputs.
- contrib references are replaced with tf.estimator.experimental.* for the APIs in early_stopping.py (see the sketch after this list).
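The two Estimator notes above combine naturally. The following is a sketch (the dataset, model, and thresholds are invented) that uses tf.compat.v1.estimator.inputs.numpy_input_fn together with the relocated early-stopping hook from tf.estimator.experimental.

import numpy as np
import tensorflow as tf

# Toy data and a toy linear model, purely for illustration.
x_train = np.random.rand(256, 1).astype(np.float32)
y_train = (2.0 * x_train + 1.0).astype(np.float32)

train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
    x={"x": x_train}, y=y_train, batch_size=32, num_epochs=None, shuffle=True)
eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
    x={"x": x_train}, y=y_train, batch_size=32, num_epochs=1, shuffle=False)

estimator = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column("x")])

# Early stopping now lives under tf.estimator.experimental instead of contrib:
# stop if the eval loss has not decreased within 500 training steps.
stop_hook = tf.estimator.experimental.stop_if_no_decrease_hook(
    estimator, metric_name="loss", max_steps_without_decrease=500)

tf.estimator.train_and_evaluate(
    estimator,
    tf.estimator.TrainSpec(input_fn=train_input_fn, hooks=[stop_hook],
                           max_steps=2000),
    tf.estimator.EvalSpec(input_fn=eval_input_fn))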
Finally, for TPU users: choosing the best value of --iterations_per_loop for TPUEstimator or DistributionStrategy continues to be a challenge. The team proposes dynamically tuning the --iterations_per_loop variable, specifically when using TPUEstimator in training mode, based on a user target TPU execution time. Users might specify a value such as --iterations_per_loop=300s, which will result in roughly 300 seconds being spent on the TPU between host side operations.