diff --git a/docs/r2/images/scalars_custom_lr.png b/docs/r2/images/scalars_custom_lr.png
new file mode 100644
index 0000000000..01cc1172d9
Binary files /dev/null and b/docs/r2/images/scalars_custom_lr.png differ
diff --git a/docs/r2/images/scalars_loss.png b/docs/r2/images/scalars_loss.png
new file mode 100644
index 0000000000..6be81fe3b2
Binary files /dev/null and b/docs/r2/images/scalars_loss.png differ
diff --git a/docs/r2/tensorboard_scalars_and_keras.ipynb b/docs/r2/tensorboard_scalars_and_keras.ipynb
new file mode 100644
index 0000000000..64c31bf106
--- /dev/null
+++ b/docs/r2/tensorboard_scalars_and_keras.ipynb
@@ -0,0 +1,472 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "name": "tensorboard_scalars_and_keras.ipynb",
+ "version": "0.3.2",
+ "provenance": [],
+ "collapsed_sections": []
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ }
+ },
+ "cells": [
+ {
+ "metadata": {
+ "id": "djUvWu41mtXa",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "##### Copyright 2018 The TensorFlow Authors."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "su2RaORHpReL",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "NztQK2uFpXT-",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "# TensorBoard Scalars: Logging basic training metrics in Keras\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "eDXRFe_qp5C3",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "\n",
+ "## Overview\n",
+ "\n",
+ "Machine learning invariably involves understanding key metrics such as loss and how they change as training progresses. These metrics can help you understand if you're [overfitting](https://en.wikipedia.org/wiki/Overfitting), for example, or if you're unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.\n",
+ "\n",
+ "TensorBoard's **Scalars Dashboard** allows you to visualize these metrics using a simple API with very little effort. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and TensorFlow Summary APIs to visualize default and custom scalars."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "dG-nnZK9qW9z",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Setup"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "3U5gdCw_nSG3",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "# Ensure TensorFlow 2.0 is installed.\n",
+ "!pip install -q tf-nightly-2.0-preview\n",
+ "# Load the TensorBoard notebook extension.\n",
+ "%load_ext tensorboard.notebook"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "1qIKtOBrqc9Y",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "from __future__ import absolute_import\n",
+ "from __future__ import division\n",
+ "from __future__ import print_function\n",
+ "\n",
+ "from datetime import datetime\n",
+ "from packaging import version\n",
+ "\n",
+ "import tensorflow as tf\n",
+ "from tensorflow import keras\n",
+ "\n",
+ "import numpy as np\n",
+ "\n",
+ "print(\"TensorFlow version: \", tf.__version__)\n",
+ "assert version.parse(tf.__version__).release[0] >= 2, \\\n",
+ " \"This notebook requires TensorFlow 2.0 or above.\""
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "6YDAoNCN3ZNS",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Set up data for a simple regression\n",
+ "\n",
+ "You're now going to use Keras to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is [overkill for this kind of problem](https://stats.stackexchange.com/questions/160179/do-we-need-gradient-descent-to-find-the-coefficients-of-a-linear-regression-mode), it does make for a very easy to understand example.)\n",
+ "\n",
+ "You're going to use TensorBoard to observe how training and test **loss** change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.\n",
+ "\n",
+ "First, generate 1000 data points roughly along the line *y = 0.5x + 2*. Split these data points into training and test sets. Your hope is that the neural net learns this relationship."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "j-ryO6OxnQH_",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "data_size = 1000\n",
+ "# 80% of the data is for training.\n",
+ "train_pct = 0.8\n",
+ "\n",
+ "train_size = int(data_size * train_pct)\n",
+ "\n",
+ "# Create some input data between -1 and 1 and randomize it.\n",
+ "x = np.linspace(-1, 1, data_size)\n",
+ "np.random.shuffle(x)\n",
+ "\n",
+ "# Generate the output data.\n",
+ "# y = 0.5x + 2 + noise\n",
+ "y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))\n",
+ "\n",
+ "# Split into test and train pairs.\n",
+ "x_train, y_train = x[:train_size], y[:train_size]\n",
+ "x_test, y_test = x[train_size:], y[train_size:]"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "Je59_8Ts3rq0",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Training the model and logging loss\n",
+ "\n",
+ "You're now ready to define, train and evaluate your model. \n",
+ "\n",
+ "To log the *loss* scalar as you train, you'll create the Keras TensorBoard callback, specifying a log directory, and pass it to [Model.fit()](https://https://www.tensorflow.org/api_docs/python/tf/keras/models/Model#fit).\n",
+ "\n",
+ "TensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is \"logs/scalars\", suffixed by a timestamped subdirectory. This enables easily identification and selection of training runs as you use TensorBoard and iterate on your model.\n",
+ " "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "VmEQwCon3i7m",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "logdir=\"logs/scalars/\" + datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
+ "tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)\n",
+ "\n",
+ "model = keras.models.Sequential([\n",
+ " keras.layers.Dense(16, input_dim=1),\n",
+ " keras.layers.Dense(1),\n",
+ "])\n",
+ "\n",
+ "model.compile(\n",
+ " loss='mse', # keras.losses.mean_squared_error\n",
+ " optimizer=keras.optimizers.SGD(lr=0.2),\n",
+ ")\n",
+ "\n",
+ "print(\"Training ... With default parameters, this takes less than 10 seconds.\")\n",
+ "training_history = model.fit(\n",
+ " x_train, # input\n",
+ " y_train, # output\n",
+ " batch_size=train_size,\n",
+ " verbose=0, # Suppress chatty output; use Tensorboard instead\n",
+ " epochs=100,\n",
+ " validation_data=(x_test, y_test),\n",
+ " callbacks=[tensorboard_callback],\n",
+ ")\n",
+ "\n",
+ "print(\"Average test loss: \", np.average(training_history.history['loss']))"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "042k7GMERVkx",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Examining loss using TensorBoard\n",
+ "\n",
+ "Now, start TensorBoard, specifying the root log directory.\n",
+ "\n",
+ "Wait a few seconds for TensorBoard's UI to spin up. "
+ ]
+ },
+ {
+ "metadata": {
+ "id": "6pck56gKReON",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "%tensorboard --logdir logs/scalars"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "QmQHlG10Kpu2",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ ""
+ ]
+ },
+ {
+ "metadata": {
+ "id": "ciSIRibhRi6N",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "You may see TensorBoard display the message \"No dashboards are active for the current data set\". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data. TensorBoard will periodicially refresh and show you your scalar metrics. If you're impatient, you can tap the Refresh arrow on the top right.\n",
+ "\n",
+ "As you watch the training progress, note how both training and validation loss rapidly decrease and then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.\n",
+ "\n",
+ "Hover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of them to view more detail.\n",
+ "\n",
+ "Notice the \"Runs\" selector on the left. A \"run\" represents a set of logs from a complete round of training, in this case the result of Model.fit(). Developers typically have many, many runs, as they experiment and develop their model over time. \n",
+ "\n",
+ "Use the \"Runs\" selector to choose specific runs, or choose from only training or validation. Comparing runs will help you evaluate which version of your code is solving your problem better.\n"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "finK0GfYyefe",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Ok, TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then stabilized. That means that the model's metrics are likely very good! Now see how the model actually behaves in real life. \n",
+ "\n",
+ "Given the input data (60, 25, 2), the line *y = 0.5x + 2* should yield (32, 14.5, 3). Does the model agree?"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "EuiLgxQstt32",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "print(model.predict([60, 25, 2]))"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "bom4MdeewRKS",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Not bad!"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "vvwGmJK9XWmh",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "## Logging custom scalars\n",
+ "\n",
+ "What if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.\n",
+ "\n",
+ "Retrain the regression model and log a custom learning rate. Here's how:\n",
+ "\n",
+ "1. Create a file writer, using ```tf.summary.create_file_writer()```.\n",
+ "2. Define a custom learning rate function. This will be passed to the Keras [LearningRateScheduler](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler) callback.\n",
+ "3. Inside the learning rate function, use ```tf.summary.scalar()``` to log the custom learning rate.\n",
+ "4. Pass the LearningRateScheduler callback to Model.fit().\n",
+ "\n",
+ "In general, to log a custom scalar, you need to use ```tf.summary.scalar()``` with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you use the ```tf.summary.scalar()```."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "XB95ltRiXVXk",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "logdir=\"logs/scalars/\" + datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n",
+ "file_writer = tf.summary.create_file_writer(logdir + \"/metrics\")\n",
+ "file_writer.set_as_default()\n",
+ "\n",
+ "def lr_schedule(epoch):\n",
+ " \"\"\"\n",
+ " Returns a custom learning rate that decreases as epochs progress.\n",
+ " \"\"\"\n",
+ " learning_rate = 0.2\n",
+ " if epoch > 10:\n",
+ " learning_rate = 0.02\n",
+ " if epoch > 20:\n",
+ " learning_rate = 0.01\n",
+ " if epoch > 50:\n",
+ " learning_rate = 0.005\n",
+ "\n",
+ " tf.summary.scalar('learning rate', data=learning_rate, step=epoch)\n",
+ " return learning_rate\n",
+ "\n",
+ "lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)\n",
+ "tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)\n",
+ "\n",
+ "model = keras.models.Sequential([\n",
+ " keras.layers.Dense(16, input_dim=1),\n",
+ " keras.layers.Dense(1),\n",
+ "])\n",
+ "\n",
+ "model.compile(\n",
+ " loss='mse', # keras.losses.mean_squared_error\n",
+ " optimizer=keras.optimizers.SGD(),\n",
+ ")\n",
+ "\n",
+ "training_history = model.fit(\n",
+ " x_train, # input\n",
+ " y_train, # output\n",
+ " batch_size=train_size,\n",
+ " verbose=0, # Suppress chatty output; use Tensorboard instead\n",
+ " epochs=100,\n",
+ " validation_data=(x_test, y_test),\n",
+ " callbacks=[tensorboard_callback, lr_callback],\n",
+ ")"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
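+    {
+      "metadata": {
+        "id": "explicit_writer_md",
+        "colab_type": "text"
+      },
+      "cell_type": "markdown",
+      "source": [
+        "Above, ```file_writer.set_as_default()``` made the writer the process-wide default, so the ```tf.summary.scalar()``` call inside ```lr_schedule``` writes to it implicitly. You can also scope writes to a particular writer explicitly by using it as a context manager. Below is a minimal sketch; the ```/demo``` subdirectory and the \"demo scalar\" tag are illustrative names only, not part of the tutorial's model:"
+      ]
+    },
+    {
+      "metadata": {
+        "id": "explicit_writer_code",
+        "colab_type": "code",
+        "colab": {}
+      },
+      "cell_type": "code",
+      "source": [
+        "# Create a writer for a scratch subdirectory so this demo data\n",
+        "# doesn't mix with the training run's metrics.\n",
+        "demo_writer = tf.summary.create_file_writer(logdir + \"/demo\")\n",
+        "\n",
+        "# Writes inside this block go to demo_writer rather than the default writer.\n",
+        "with demo_writer.as_default():\n",
+        "  for step in range(5):\n",
+        "    tf.summary.scalar(\"demo scalar\", data=step * 0.1, step=step)\n",
+        "\n",
+        "demo_writer.flush()  # Ensure the events are written to disk."
+      ],
+      "execution_count": 0,
+      "outputs": []
+    },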
+ {
+ "metadata": {
+ "id": "pck8OQEjayDM",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Let's look at TensorBoard again."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "0sjM2wXGa0mF",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "%tensorboard --logdir logs/scalars"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "metadata": {
+ "id": "GkIahGZKK9I7",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ ""
+ ]
+ },
+ {
+ "metadata": {
+ "id": "RRlUDnhlkN_q",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "Using the \"Runs\" selector on the left, notice that you have a ```/metrics``` run. Selecting this run displays a \"learning rate\" graph that allows you to verify the progression of the learning rate during this run. \n",
+ "\n",
+ "You can also compare this run's training and validation loss curves against your earlier runs."
+ ]
+ },
+ {
+ "metadata": {
+ "id": "l0TTI16Nl0nk",
+ "colab_type": "text"
+ },
+ "cell_type": "markdown",
+ "source": [
+ "How does this model do?"
+ ]
+ },
+ {
+ "metadata": {
+ "id": "97T4vT3QkQJH",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "cell_type": "code",
+ "source": [
+ "print(model.predict([60, 25, 2]))"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ }
+ ]
+}
\ No newline at end of file