TensorFlow 2.7.0 Now Available
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.
The newest version of TensorFlow brings a number of major features, improvements, bug fixes, and other changes.
Breaking Changes
- tf.keras:
  - The methods Model.fit(), Model.predict(), and Model.evaluate() will no longer uprank input data of shape (batch_size,) to become (batch_size, 1). This enables Model subclasses to process scalar data in their train_step()/test_step()/predict_step() methods. Note that this change may break certain subclassed models. You can revert to the previous behavior by adding upranking yourself in the train_step()/test_step()/predict_step() methods, e.g. if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1) (see the sketch after this list). Functional models and Sequential models built with an explicit input shape are not affected.
  - The methods Model.to_yaml() and keras.models.model_from_yaml have been replaced to raise a RuntimeError, as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML or, as a better alternative, to serialize to H5.
  - LinearModel and WideDeepModel are moved to the tf.compat.v1.keras.models namespace (tf.compat.v1.keras.models.LinearModel and tf.compat.v1.keras.models.WideDeepModel), and their experimental endpoints (tf.keras.experimental.models.LinearModel and tf.keras.experimental.models.WideDeepModel) are being deprecated.
  - RNG behavior change for all tf.keras.initializers classes. Any class constructed with a fixed seed will no longer generate the same value when invoked multiple times; instead, it will return different values following a deterministic sequence. This change aligns initializer behavior between v1 and v2.
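As a minimal sketch of the upranking workaround above (the Model subclass and the (features, labels) data layout are illustrative assumptions):

```python
import tensorflow as tf

class UprankingModel(tf.keras.Model):
  def train_step(self, data):
    x, y = data  # assumes batches of (features, labels)
    # Restore the pre-2.7 behavior: uprank (batch_size,) inputs to
    # (batch_size, 1) before running the default training logic.
    if x.shape.rank == 1:
      x = tf.expand_dims(x, axis=-1)
    return super().train_step((x, y))
```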
- tf.lite:
  - Renamed fields in the SignatureDef table in the schema to maximize parity with the TF SavedModel Signature concept.
  - Deprecate Makefile builds. Makefile users need to migrate their builds to CMake or Bazel. Please refer to the "Build TensorFlow Lite with CMake" and "Build TensorFlow Lite for ARM boards" guides for the migration.
  - Deprecate tflite::OpResolver::GetDelegates. The list returned by TfLite's BuiltinOpResolver::GetDelegates is now always empty. Instead, we recommend using the new method tflite::OpResolver::GetDelegateCreators to achieve lazy initialization of TfLite delegate instances.
- TF Core:
  - tf.Graph.get_name_scope() now always returns a string, as documented. Previously, when called within name_scope("") or name_scope(None) contexts, it returned None; now it returns the empty string.
  - tensorflow/core/ir/ contains a new MLIR-based Graph dialect that is isomorphic to GraphDef and will be used to replace GraphDef-based (e.g., Grappler) optimizations.
  - Deprecated and removed the attrs() function in shape inference. All attributes should now be queried by name (rather than via the returned range) to enable changing the underlying storage.
  - The following Python symbols were accidentally added in earlier versions of TensorFlow and are now removed. Each symbol has a replacement that should be used instead, but note that the replacement's argument names are different.
    - tf.quantize_and_dequantize_v4 (accidentally introduced in TensorFlow 2.4): Use tf.quantization.quantize_and_dequantize_v2 instead.
    - tf.batch_mat_mul_v3 (accidentally introduced in TensorFlow 2.6): Use tf.linalg.matmul instead.
    - tf.sparse_segment_sum_grad (accidentally introduced in TensorFlow 2.6): Use tf.raw_ops.SparseSegmentSumGrad instead. Directly calling this op is typically not necessary, as it is automatically used when computing the gradient of tf.sparse.segment_sum.
  - Renamed tensorflow::int64 to int64_t (the former is an alias for the latter) in numerous places. This may require regenerating selective op registration headers, or execution will fail with an "unregistered kernels" error.
- Modular File System Migration:
  - Support for S3 and HDFS file systems has been migrated to a modular, file-system-based approach and is now available in https://github.com/tensorflow/io. The tensorflow-io Python package should be installed for S3 and HDFS support with TensorFlow.
Major Features and Improvements
- Improvements to the TensorFlow debugging experience:
  - Previously, TensorFlow error stack traces involved many internal frames, which could be challenging to read through while not being actionable for end users. As of TF 2.7, TensorFlow filters internal frames in most errors that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code). This behavior can be disabled by calling tf.debugging.disable_traceback_filtering(), and can be re-enabled via tf.debugging.enable_traceback_filtering() (see the sketch after this list). If you are debugging a TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to disable traceback filtering. You can check whether this feature is currently enabled by calling tf.debugging.is_traceback_filtering_enabled(). Note that this feature is only available with Python 3.7 or higher.
  - Improve the informativeness of error messages raised by Keras Layer.__call__() by adding the full list of argument values passed to the layer in every exception.
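For instance, the filtering toggles can be exercised directly; these are the tf.debugging functions named above:

```python
import tensorflow as tf

# Traceback filtering is enabled by default in TF 2.7.
print(tf.debugging.is_traceback_filtering_enabled())  # True

# Disable it to see TensorFlow-internal frames while debugging internals...
tf.debugging.disable_traceback_filtering()

# ...and re-enable it for shorter, user-focused stack traces.
tf.debugging.enable_traceback_filtering()
```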
- Introduce the tf.compat.v1.keras.utils.track_tf1_style_variables decorator, which enables using large classes of TF1-style variable_scope, get_variable, and compat.v1.layers-based components from within TF2 models running with TF2 behavior enabled.
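A minimal sketch of the decorator applied to a Keras layer's call method, following the documented migration pattern (the layer and scope names here are illustrative assumptions):

```python
import tensorflow as tf

class WrappedDense(tf.keras.layers.Layer):
  @tf.compat.v1.keras.utils.track_tf1_style_variables
  def call(self, inputs):
    # Variables created by TF1-style APIs inside this method are
    # captured and tracked as regular Keras layer weights.
    with tf.compat.v1.variable_scope("dense_block"):
      return tf.compat.v1.layers.dense(inputs, units=10)
```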
- tf.data:
  - The tf.data service now supports auto-sharding. Users specify the sharding policy with the tf.data.experimental.service.ShardingPolicy enum (see the sketch after this list). It can be one of OFF (equivalent to today's "parallel_epochs" mode), DYNAMIC (equivalent to today's "distributed_epoch" mode), or one of the static sharding policies: FILE, DATA, FILE_OR_DATA, or HINT (corresponding to the values of tf.data.experimental.AutoShardPolicy). Static sharding (auto-sharding) requires the number of tf.data service workers to be fixed. Users need to specify the worker addresses in tensorflow.data.experimental.DispatcherConfig.
  - tf.data.experimental.service.register_dataset now accepts an optional compression argument.
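A hedged sketch of selecting a sharding policy when distributing a pipeline through the tf.data service (a running tf.data service is assumed, and the dispatcher address is a placeholder):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(100)
dataset = dataset.apply(
    tf.data.experimental.service.distribute(
        processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC,
        service="grpc://localhost:5050"))  # placeholder dispatcher address
```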
- Keras:
  - tf.keras.layers.Conv now includes a public convolution_op method. This method can be used to simplify the implementation of Conv subclasses. There are two primary ways to use this new method. The first is to use the method directly in your own call method:

```python
class StandardizedConv2D(tf.keras.layers.Conv2D):
  def call(self, inputs):
    mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
    return self.convolution_op(
        inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))
```

    Alternatively, you can override convolution_op:

```python
class StandardizedConv2D(tf.keras.layers.Conv2D):
  def convolution_op(self, inputs, kernel):
    mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
    # Author code uses std + 1e-5
    return super().convolution_op(
        inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
```

  - Added a merge_state() method to tf.keras.metrics.Metric for use in distributed computations.
  - Added sparse and ragged options to tf.keras.layers.TextVectorization to allow for SparseTensor and RaggedTensor outputs from the layer (see the sketch after this list).
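A hedged sketch of the new TextVectorization output options (the output_mode pairings shown are assumptions about which modes each flag applies to):

```python
import tensorflow as tf

# Ragged integer token sequences instead of dense padded ones.
ragged_layer = tf.keras.layers.TextVectorization(
    output_mode="int", ragged=True)

# Sparse multi-hot outputs instead of dense ones.
sparse_layer = tf.keras.layers.TextVectorization(
    output_mode="multi_hot", sparse=True)
```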
- The tf.distribute.experimental.rpc package introduces APIs to create a gRPC-based server to register tf.function methods and a gRPC client to invoke remote registered methods. RPC APIs are intended for multi-client setups, i.e. server and clients are started independently in separate binaries.
  - Example usage to create a server:

```python
server = tf.distribute.experimental.rpc.Server.create(
    "grpc", "127.0.0.1:1234")

@tf.function(input_signature=[
    tf.TensorSpec([], tf.int32),
    tf.TensorSpec([], tf.int32)])
def _remote_multiply(a, b):
  return tf.math.multiply(a, b)

server.register("multiply", _remote_multiply)
server.start()  # start serving the registered methods
```

  - Example usage from a client to invoke the registered method:

```python
client = tf.distribute.experimental.rpc.Client.create("grpc", address)
a = tf.constant(2, dtype=tf.int32)
b = tf.constant(3, dtype=tf.int32)
result = client.multiply(a, b)
```
- tf.lite:
  - Add experimental API experimental_from_jax to support conversion from JAX models to TensorFlow Lite.
  - Support the uint32 data type for the cast op.
  - Add the experimental quantization debugger tf.lite.QuantizationDebugger.
- Extension Types:
  - Add an experimental API to define new Python classes that can be handled by TensorFlow APIs. To create an extension type, simply define a Python class with tf.experimental.ExtensionType as its base, and use type annotations to specify the type for each field. E.g.:

```python
class MaskedTensor(tf.experimental.ExtensionType):
  values: tf.Tensor
  mask: tf.Tensor
```

    The tf.experimental.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library.
  - Extension types are supported by Keras, tf.data, TF-hub, SavedModel, tf.function, control flow ops, py_function, and distribution strategy.
  - Add "dispatch decorators" that can be used to override the default behavior of TensorFlow ops (such as tf.add or tf.concat) when they are applied to ExtensionType values (see the sketch after this list).
  - The BatchableExtensionType API can be used to define extension types that support APIs that make use of batching, such as tf.data.Dataset and tf.map_fn.
  - For more information, see the Extension types guide.
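As a hedged sketch of a dispatch decorator, reusing the MaskedTensor type from the example above (the masking semantics are an illustrative assumption):

```python
import tensorflow as tf

class MaskedTensor(tf.experimental.ExtensionType):
  values: tf.Tensor
  mask: tf.Tensor

@tf.experimental.dispatch_for_api(tf.math.add)
def masked_add(x: MaskedTensor, y: MaskedTensor):
  # A result element is valid only where both inputs are valid.
  return MaskedTensor(x.values + y.values, x.mask & y.mask)
```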
Bug Fixes and Other Changes
- TF Core:
  - Random number generation (RNG) system:
    - Add argument alg to tf.random.stateless_* functions to explicitly select the RNG algorithm (see the sketch after this list).
    - Add tf.nn.experimental.stateless_dropout, a stateless version of tf.nn.dropout.
    - tf.random.Generator now can be created inside the scope of tf.distribute.experimental.ParameterServerStrategy and tf.distribute.experimental.CentralStorageStrategy.
  - Add an experimental session config tf.experimental.disable_functional_ops_lowering which disables functional control flow op lowering optimization. This is useful when executing within a portable runtime where control flow op kernels may not be loaded due to selective registration.
  - Add a new experimental argument experimental_is_anonymous to tf.lookup.StaticHashTable.__init__ to create the table in anonymous mode. In this mode, the table resource can only be accessed via resource handles (not resource names) and will be deleted automatically when all resource handles pointing to it are gone.
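A minimal sketch of selecting the algorithm explicitly (the "philox" value is one of the documented options, alongside "threefry"):

```python
import tensorflow as tf

# Same seed + same algorithm => reproducible values across runs.
x = tf.random.stateless_uniform([2], seed=[1, 2], alg="philox")
```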
- tf.data:
  - Introduce the tf.data.experimental.at API which provides random access for input pipelines that consist of transformations that support random access (see the sketch after this list). The initial set of transformations that support random access includes: tf.data.Dataset.from_tensor_slices, tf.data.Dataset.shuffle, tf.data.Dataset.batch, tf.data.Dataset.shard, tf.data.Dataset.map, and tf.data.Dataset.range.
  - Promote the tf.data.Options.experimental_deterministic API to tf.data.Options.deterministic and deprecate the experimental endpoint.
  - Move autotuning options from tf.data.Options.experimental_optimization.autotune* to a newly created tf.data.Options.autotune.* and remove support for tf.data.Options.experimental_optimization.autotune_buffers.
  - Add support for user-defined names of tf.data core Python API, which can be used to disambiguate tf.data events in TF Profiler Trace Viewer.
  - Promote the tf.data.experimental.sample_from_datasets API to tf.data.Dataset.sample_from_datasets and deprecate the experimental endpoint.
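A minimal sketch of random access with the new API (indexing semantics as described above):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10).map(lambda x: x * 2)
# Fetch the element at index 3 without iterating the pipeline.
element = tf.data.experimental.at(dataset, 3)  # -> tf.Tensor(6)
```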
- TF SavedModel:
  - Custom gradients are now saved by default; see tf.saved_model.SaveOptions to disable this (a sketch follows this list).
  - The saved_model_cli's --input_examples inputs are now restricted to Python literals to avoid code injection.
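A minimal sketch of opting out via SaveOptions (experimental_custom_gradients is the documented option; the model and path are placeholders):

```python
import tensorflow as tf

model = tf.Module()  # placeholder for a real model
options = tf.saved_model.SaveOptions(experimental_custom_gradients=False)
tf.saved_model.save(model, "/tmp/saved_model", options=options)
```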
- XLA:
  - Add a new API that allows custom call functions to signal errors. The old API will be deprecated in a future release. See https://www.tensorflow.org/xla/custom_call for details.
  - XLA:GPU reductions are deterministic by default (reductions within jit_compile=True are now deterministic; see the sketch after this list).
  - XLA:GPU works with Horovod (OSS contribution by Trent Lo from NVIDIA).
- tf.saved_model.save:
  - When saving a model, not specifying a namespace whitelist for custom ops with a namespace will now default to allowing rather than rejecting them all.
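For reference, a minimal sketch of a jit-compiled reduction, which now produces deterministic results on GPU:

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def reduce_sum(x):
  # Compiled with XLA; the GPU reduction is deterministic by default.
  return tf.reduce_sum(x, axis=0)
```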
Security
- Fixes a code injection issue in saved_model_cli (CVE-2021-41228)
- Fixes a vulnerability due to use of uninitialized value in TensorFlow (CVE-2021-41225)
- Fixes a heap OOB in FusedBatchNorm kernels (CVE-2021-41223)
- Fixes an arbitrary memory read in ImmutableConst (CVE-2021-41227)
- Fixes a heap OOB in SparseBinCount (CVE-2021-41226)
- Fixes a heap OOB in SparseFillEmptyRows (CVE-2021-41224)
- Fixes a segfault due to negative splits in SplitV (CVE-2021-41222)
- Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in Cudnn* ops (CVE-2021-41221)
- Fixes a null pointer exception when an Exit node is not preceded by an Enter op (CVE-2021-41217)
- Fixes an integer division by 0 in tf.raw_ops.AllToAll (CVE-2021-41218)
- Fixes a use after free and a memory leak in CollectiveReduceV2 (CVE-2021-41220)
- Fixes an undefined behavior via nullptr reference binding in sparse matrix multiplication (CVE-2021-41219)
- Fixes a heap buffer overflow in Transpose (CVE-2021-41216)
- Prevents deadlocks arising from mutually recursive tf.function objects (CVE-2021-41213)
- Fixes a null pointer exception in DeserializeSparse (CVE-2021-41215)
- Fixes an undefined behavior arising from reference binding to nullptr in tf.ragged.cross (CVE-2021-41214)
- Fixes a heap OOB read in tf.ragged.cross (CVE-2021-41212)
- Fixes a heap OOB in shape inference for QuantizeV2 (CVE-2021-41211)
- Fixes a heap OOB read in all tf.raw_ops.QuantizeAndDequantizeV* ops (CVE-2021-41205)
- Fixes an FPE in ParallelConcat (CVE-2021-41207)
- Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
- Fixes a heap OOB read in tf.raw_ops.SparseCountSparseOutput (CVE-2021-41210)
- Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
- Fixes a segfault produced while copying a constant resource tensor (CVE-2021-41204)
- Fixes a vulnerability caused by uninitialized access in EinsumHelper::ParseEquation (CVE-2021-41201)
- Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
- Fixes an overflow producing a crash in tf.range (CVE-2021-41202)
- Fixes an overflow producing a crash in tf.image.resize when size is large (CVE-2021-41199)
- Fixes an overflow producing a crash in tf.tile when the tiling tensor is large (CVE-2021-41198)
- Fixes a vulnerability produced due to incomplete validation in tf.summary.create_file_writer (CVE-2021-41200)
- Fixes multiple crashes due to overflow and CHECK-fail in ops with large tensor shapes (CVE-2021-41197)
- Fixes a crash in max_pool3d when the size argument is 0 or negative (CVE-2021-41196)
- Fixes a crash in tf.math.segment_* operations (CVE-2021-41195)
- Updates curl to 7.78.0 to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926
Install TensorFlow 2
Click here to install TensorFlow 2
Download TensorFlow 2.7.0 from the GitHub releases page:
https://github.com/tensorflow/tensorflow/releases/tag/v2.7.0