Updates in TensorFlow 2.0

TensorFlow is an open-source machine learning library developed by Google that is widely used for building and training deep learning models. TensorFlow 2.0 brings several important changes and improvements that make it even more user-friendly and efficient than the previous versions. In this tutorial, we will explore some of the key changes in TensorFlow 2.0 and how they can benefit machine learning practitioners.

  1. Eager Execution:
    One of the most significant changes in TensorFlow 2.0 is the adoption of eager execution by default. In previous versions of TensorFlow, computations were executed in a graph mode, where users had to define the entire computation graph before running it. With eager execution, computations are executed eagerly as they are defined, making it easier to debug and prototype models. This change brings TensorFlow closer to other deep learning libraries like PyTorch, which also use eager execution.

Eager execution is enabled by default in TensorFlow 2.0, so no extra setup is required: you can define and run TensorFlow operations just like you would with NumPy arrays. (The tf.config.experimental_run_functions_eagerly(True) setting sometimes mentioned in guides does something different: it forces functions wrapped in tf.function to run eagerly, which is useful for debugging.)

import tensorflow as tf

# Eager execution is on by default in TensorFlow 2.0:
# operations run immediately and return concrete values.
a = tf.constant(2)
b = tf.constant(3)
c = a + b

# Print the result
print(c.numpy())  # Output: 5
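
Because there is no separate graph-building step, ordinary Python control flow and print-style debugging work directly on tensors. Here is a minimal illustration (the tensor values are just an example):

import tensorflow as tf

# Eager tensors can be iterated and inspected like NumPy arrays
x = tf.constant([1.0, -2.0, 3.0])
for value in x:
    if value > 0:  # plain Python branching on a tensor value
        print("positive:", value.numpy())
    else:
        print("non-positive:", value.numpy())
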
  2. Keras Integration:
    Another major change in TensorFlow 2.0 is the tighter integration with the Keras API. Keras is a high-level neural networks API that was integrated into TensorFlow as tf.keras during the 1.x releases. In TensorFlow 2.0, Keras is the official high-level API for building and training deep learning models. This integration simplifies the process of building and training models, as Keras provides a more user-friendly and intuitive interface.

To create a simple neural network model using Keras in TensorFlow 2.0, you can use the tf.keras.Sequential class to define a sequence of layers. Here is an example that creates a simple two-layer neural network with Keras:

import tensorflow as tf

# Define a simple neural network model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model (x_train and y_train are assumed to be loaded already; see below)
model.fit(x_train, y_train, epochs=10, batch_size=32)
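
The snippet above assumes x_train and y_train already exist. As one hedged way to prepare them, you could load and flatten MNIST with tf.keras.datasets (the choice of dataset is an illustrative assumption; it simply matches the 784-dimensional input, i.e. flattened 28x28 images):

import tensorflow as tf

# Load MNIST and flatten 28x28 images into 784-dimensional float vectors
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
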
  3. Model Subclassing:
    In TensorFlow 2.0, you can also define custom models using the model subclassing API, which allows for more flexibility and control over the model architecture. This approach is useful when you need to define complex and custom architectures that cannot be easily constructed using the sequential or functional API in Keras.

To define a custom model using model subclassing in TensorFlow 2.0, you need to create a new class that inherits from tf.keras.Model and define the model architecture in the __init__ method and the forward pass in the call method. Here is an example that defines a custom model using model subclassing:

import tensorflow as tf

class CustomModel(tf.keras.Model):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation='relu')
        self.dense2 = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

# Create an instance of the custom model
model = CustomModel()

# Compile and train the model as usual
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, batch_size=32)
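
Because call is plain Python, a subclassed model can contain arbitrary control flow that the Sequential API cannot express. Here is a hedged sketch (the class name, layer sizes, and dropout are illustrative assumptions, not part of the original example):

import tensorflow as tf

class BranchingModel(tf.keras.Model):
    def __init__(self):
        super(BranchingModel, self).__init__()
        self.dense = tf.keras.layers.Dense(64, activation='relu')
        self.dropout = tf.keras.layers.Dropout(0.5)
        self.out = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs, training=False):
        x = self.dense(inputs)
        if training:  # plain Python branching in the forward pass
            x = self.dropout(x, training=True)
        return self.out(x)

# Run a forward pass directly on a batch of random data
branching_model = BranchingModel()
batch = tf.random.normal((32, 784))
predictions = branching_model(batch, training=True)
print(predictions.shape)  # (32, 10)
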
  4. Improved Performance:
    TensorFlow 2.0 also introduces several performance improvements that make training deep learning models faster and more efficient. For example, XLA (Accelerated Linear Algebra) can optimize the computation graph and make better use of hardware accelerators like GPUs and TPUs for faster execution. Note that XLA is opt-in rather than enabled by default.
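
As a hedged sketch of how to opt in to XLA (the exact flags have shifted across releases; tf.config.optimizer.set_jit is one option in TF 2.x, and newer releases also accept jit_compile=True on tf.function):

import tensorflow as tf

# Opt in to XLA JIT compilation globally
tf.config.optimizer.set_jit(True)

@tf.function
def matmul_twice(x, w):
    # With JIT enabled, this graph is eligible for XLA fusion
    return tf.matmul(tf.matmul(x, w), w)

x = tf.random.normal((8, 128))
w = tf.random.normal((128, 128))
print(matmul_twice(x, w).shape)  # (8, 128)
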

In addition, TensorFlow 2.0 introduces tf.function, a decorator that automatically converts Python functions into TensorFlow graphs. This can further improve performance by reducing Python overhead and optimizing the computation graph.

# Assumes `model` is the Keras model defined above; loss_fn and
# optimizer are standard Keras objects
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        loss_value = loss_fn(targets, predictions)
    gradients = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss_value
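
A minimal usage sketch for the step above (the batch size and epoch count are illustrative; x_train and y_train are assumed from earlier):

import tensorflow as tf

# Iterate over mini-batches and call the compiled training step
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
for epoch in range(3):
    for inputs, targets in dataset:
        loss_value = train_step(inputs, targets)
    print("epoch", epoch, "loss:", float(loss_value))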

Overall, TensorFlow 2.0 brings significant changes that make it easier and more efficient to build and train deep learning models. With eager execution by default, tighter Keras integration, model subclassing, and performance tools such as tf.function and XLA, TensorFlow 2.0 offers a more user-friendly and powerful platform for machine learning practitioners. By familiarizing yourself with these changes, you can take full advantage of TensorFlow 2.0 to build and deploy cutting-edge deep learning models.

Comments
@rrMaxwell
2 hours ago

bro come back

@laliborio
2 hours ago

Hi, dear Aurélien! Passing by just to say that your ML book is just amazing. Please keep writing these works of art!

@PritishMishra
2 hours ago

I am currently 16, and I got to know Python very well as soon as I started ML. I heard about your book and just bought it… and I searched your name on YouTube and landed on your channel. You don't know how much you have helped beginners like me!! It would be really appreciated if you made more such videos!! Thanks 🙂

@gesuchter
2 hours ago

I've got your book on TensorFlow 1. Would you recommend reading it and later migrating to TensorFlow 2? Thanks! :D

@tulliolevichivita5130
2 hours ago

Hi, Aurélien! I've got your beautiful book about ML and TF. I think it is the best book on the subject. Do you have plans to write a new version to cover TF 2.0?

@alexandrithsharron6097
2 hours ago

Thank you! Great video! Can't wait to pick up the second edition!

@denismerigold486
2 hours ago

What do I need to know to make a deep learning framework?

@eduardoarnold1344
2 hours ago

It definitely improves over TF 1.0. However, as a PyTorch user I am still not convinced to switch frameworks. So far my main concern is how autograd will work in TF 2.0. I find GradientTape a clumsy solution.

@vladp7664
2 hours ago

I've studied TF from your book. It would be great if you could add a chapter about the tf.data API in the next edition. Also, it would be very interesting to see a video about how you mastered TF.
Thank you!

@Gerald-iz7mv
2 hours ago

hi, when will it be released?

@danieldaza3412
2 hours ago

Great video, as usual! I'll be looking forward to the new edition of your book.

@deeplearningpartnership
2 hours ago

This looks good!

@youserega
2 hours ago

This is a great analysis, although I didn't like the balance scale, especially with TF 2.0 not even released yet. I also agree with others that the frequently changing API is off-putting.

@koustubhavachat
2 hours ago

Make it more pythonic

@sidim.aourid9958
2 hours ago

Thank you for the video. Looking forward to your new book.

@RaphaelRibas
2 hours ago

I agree that the way variables are re-used in TF is ugly, but outright deprecating it in favor of Keras is just sweeping the dirt under the rug, I think. A lot of people using TF will stick to graph mode, and it is not clear Keras would work well with that. Also, eager execution is nice and many PyTorch users would love it, but if you care enough about it you are probably already using PyTorch and may not have many reasons to switch. Meanwhile, someone who enjoys the extra power and performance of graphs may just get frustrated that it is not even the default anymore (and potentially won't be as well documented and supported).

@jacek_poplawski
2 hours ago

Thanks for the very informative video 🙂

@geoffreyanderson4719
2 hours ago

Forecast for year 2019: obsolete TF "answers" in code that does not work in TF 2.0 still presented on StackOverflow after any Google search. TF tutorials still don't show anyone how to save and load models effectively. Nobody on Earth (at least who publishes anything rather than keeping competitive secrets from corporate rivals) knows how to use the "new" TF according to Google. Regurgitations of essentially the TF 2.0 tutorial code, which is woefully incomplete in regard to real-world deep learning project management, are widely re-written without attribution to the original authors and without any evidence of working via reproducible research, except to buff the egos of fakers to their potential employers, and to get more hits from the world's most sub-par search engines.

@inflationova
2 hours ago

PyTorch 1.0 also brings optimizations in terms of JIT and a hybrid front-end. It will be very interesting to see how they compete with each other.

@govindnarasimman6819
2 hours ago

Hopefully there is some solution for memory as well. I have often noticed TensorFlow reporting a segmentation fault or aborted() without much explanation. A model as big as 800 MB can easily fail to run on a GPU with 12 GB of RAM, and multi-GPU usage is hard because of variable_scope and such.
