From 27df8007dfcf2c0199800027041c2fc14838179c Mon Sep 17 00:00:00 2001 From: Sanskar Modi Date: Wed, 28 Aug 2024 17:42:34 +0530 Subject: [PATCH 1/4] Fix TF issue #74564 I looked into the issue and found that, after evaluation, we get a list of four values rather than just the loss and accuracy, so this change ignores the other outputs and unpacks only the last two as the required loss and accuracy of the export_model. --- site/en/tutorials/keras/text_classification.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/site/en/tutorials/keras/text_classification.ipynb b/site/en/tutorials/keras/text_classification.ipynb index c66d0fce0d..768d3182e4 100644 --- a/site/en/tutorials/keras/text_classification.ipynb +++ b/site/en/tutorials/keras/text_classification.ipynb @@ -861,7 +861,7 @@ ")\n", "\n", "# Test it with `raw_test_ds`, which yields raw strings\n", - "loss, accuracy = export_model.evaluate(raw_test_ds)\n", + "_, _, loss, accuracy = export_model.evaluate(raw_test_ds)\n", "print(accuracy)" ] }, From 93f3a79020d43f6d59887c2985671b29a98d626c Mon Sep 17 00:00:00 2001 From: Sanskar Modi Date: Thu, 29 Aug 2024 11:02:50 +0530 Subject: [PATCH 2/4] Delete site/en/tutorials/keras/text_classification.ipynb --- .../tutorials/keras/text_classification.ipynb | 982 ------------------ 1 file changed, 982 deletions(-) delete mode 100644 site/en/tutorials/keras/text_classification.ipynb diff --git a/site/en/tutorials/keras/text_classification.ipynb b/site/en/tutorials/keras/text_classification.ipynb deleted file mode 100644 index 768d3182e4..0000000000 --- a/site/en/tutorials/keras/text_classification.ipynb +++ /dev/null @@ -1,982 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "Ic4_occAAiAT" - }, - "source": [ - "##### Copyright 2019 The TensorFlow Authors." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "ioaprt5q5US7" - }, - "outputs": [], - "source": [ - "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "# you may not use this file except in compliance with the License.\n", - "# You may obtain a copy of the License at\n", - "#\n", - "# https://www.apache.org/licenses/LICENSE-2.0\n", - "#\n", - "# Unless required by applicable law or agreed to in writing, software\n", - "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "# See the License for the specific language governing permissions and\n", - "# limitations under the License."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "yCl0eTNH5RS3" - }, - "outputs": [], - "source": [ - "#@title MIT License\n", - "#\n", - "# Copyright (c) 2017 François Chollet\n", - "#\n", - "# Permission is hereby granted, free of charge, to any person obtaining a\n", - "# copy of this software and associated documentation files (the \"Software\"),\n", - "# to deal in the Software without restriction, including without limitation\n", - "# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n", - "# and/or sell copies of the Software, and to permit persons to whom the\n", - "# Software is furnished to do so, subject to the following conditions:\n", - "#\n", - "# The above copyright notice and this permission notice shall be included in\n", - "# all copies or substantial portions of the Software.\n", - "#\n", - "# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n", - "# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n", - "# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n", - "# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n", - "# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n", - "# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n", - "# DEALINGS IN THE SOFTWARE." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ItXfxkxvosLH" - }, - "source": [ - "# Basic text classification" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "hKY4XMc9o8iB" - }, - "source": [ - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n", - " View on TensorFlow.org\n", - " \n", - " Run in Google Colab\n", - " \n", - " View source on GitHub\n", - " \n", - " Download notebook\n", - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Eg62Pmz3o83v" - }, - "source": [ - "This tutorial demonstrates text classification starting from plain text files stored on disk. You'll train a binary classifier to perform sentiment analysis on an IMDB dataset. At the end of the notebook, there is an exercise for you to try, in which you'll train a multi-class classifier to predict the tag for a programming question on Stack Overflow.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "8RZOuS9LWQvv" - }, - "outputs": [], - "source": [ - "import matplotlib.pyplot as plt\n", - "import os\n", - "import re\n", - "import shutil\n", - "import string\n", - "import tensorflow as tf\n", - "\n", - "from tensorflow.keras import layers\n", - "from tensorflow.keras import losses\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "6-tTFS04dChr" - }, - "outputs": [], - "source": [ - "print(tf.__version__)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "NBTI1bi8qdFV" - }, - "source": [ - "## Sentiment analysis\n", - "\n", - "This notebook trains a sentiment analysis model to classify movie reviews as *positive* or *negative*, based on the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.\n", - "\n", - "You'll use the [Large Movie Review Dataset](https://ai.stanford.edu/~amaas/data/sentiment/) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "iAsKG535pHep" - }, - "source": [ - "### Download and explore the IMDB dataset\n", - "\n", - "Let's download and extract the dataset, then explore the directory structure." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "k7ZYnuajVlFN" - }, - "outputs": [], - "source": [ - "url = \"https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\"\n", - "\n", - "dataset = tf.keras.utils.get_file(\"aclImdb_v1\", url,\n", - " untar=True, cache_dir='.',\n", - " cache_subdir='')\n", - "\n", - "dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "355CfOvsV1pl" - }, - "outputs": [], - "source": [ - "os.listdir(dataset_dir)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "7ASND15oXpF1" - }, - "outputs": [], - "source": [ - "train_dir = os.path.join(dataset_dir, 'train')\n", - "os.listdir(train_dir)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ysMNMI1CWDFD" - }, - "source": [ - "The `aclImdb/train/pos` and `aclImdb/train/neg` directories contain many text files, each of which is a single movie review. Let's take a look at one of them." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "R7g8hFvzWLIZ" - }, - "outputs": [], - "source": [ - "sample_file = os.path.join(train_dir, 'pos/1181_9.txt')\n", - "with open(sample_file) as f:\n", - " print(f.read())" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Mk20TEm6ZRFP" - }, - "source": [ - "### Load the dataset\n", - "\n", - "Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the helpful [text_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory) utility, which expects a directory structure as follows.\n", - "\n", - "```\n", - "main_directory/\n", - "...class_a/\n", - "......a_text_1.txt\n", - "......a_text_2.txt\n", - "...class_b/\n", - "......b_text_1.txt\n", - "......b_text_2.txt\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "nQauv38Lnok3" - }, - "source": [ - "To prepare a dataset for binary classification, you will need two folders on disk, corresponding to `class_a` and `class_b`. These will be the positive and negative movie reviews, which can be found in `aclImdb/train/pos` and `aclImdb/train/neg`. As the IMDB dataset contains additional folders, you will remove them before using this utility." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "VhejsClzaWfl" - }, - "outputs": [], - "source": [ - "remove_dir = os.path.join(train_dir, 'unsup')\n", - "shutil.rmtree(remove_dir)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "95kkUdRoaeMw" - }, - "source": [ - "Next, you will use the `text_dataset_from_directory` utility to create a labeled `tf.data.Dataset`. [tf.data](https://www.tensorflow.org/guide/data) is a powerful collection of tools for working with data.\n", - "\n", - "When running a machine learning experiment, it is a best practice to divide your dataset into three splits: [train](https://developers.google.com/machine-learning/glossary#training_set), [validation](https://developers.google.com/machine-learning/glossary#validation_set), and [test](https://developers.google.com/machine-learning/glossary#test-set).\n", - "\n", - "The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the `validation_split` argument below." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "nOrK-MTYaw3C" - }, - "outputs": [], - "source": [ - "batch_size = 32\n", - "seed = 42\n", - "\n", - "raw_train_ds = tf.keras.utils.text_dataset_from_directory(\n", - " 'aclImdb/train',\n", - " batch_size=batch_size,\n", - " validation_split=0.2,\n", - " subset='training',\n", - " seed=seed)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "5Y33oxOUpYkh" - }, - "source": [ - "As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (or 20,000) for training. As you will see in a moment, you can train a model by passing a dataset directly to `model.fit`. If you're new to `tf.data`, you can also iterate over the dataset and print out a few examples as follows." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "51wNaPPApk1K" - }, - "outputs": [], - "source": [ - "for text_batch, label_batch in raw_train_ds.take(1):\n", - " for i in range(3):\n", - " print(\"Review\", text_batch.numpy()[i])\n", - " print(\"Label\", label_batch.numpy()[i])" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "JWq1SUIrp1a-" - }, - "source": [ - "Notice the reviews contain raw text (with punctuation and occasional HTML tags like `<br/>
`). You will show how to handle these in the following section.\n", - "\n", - "The labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the `class_names` property on the dataset.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "MlICTG8spyO2" - }, - "outputs": [], - "source": [ - "print(\"Label 0 corresponds to\", raw_train_ds.class_names[0])\n", - "print(\"Label 1 corresponds to\", raw_train_ds.class_names[1])" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "pbdO39vYqdJr" - }, - "source": [ - "Next, you will create a validation and test dataset. You will use the remaining 5,000 reviews from the training set for validation." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "SzxazN8Hq1pF" - }, - "source": [ - "Note: When using the `validation_split` and `subset` arguments, make sure to either specify a random seed, or to pass `shuffle=False`, so that the validation and training splits have no overlap." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "JsMwwhOoqjKF" - }, - "outputs": [], - "source": [ - "raw_val_ds = tf.keras.utils.text_dataset_from_directory(\n", - " 'aclImdb/train',\n", - " batch_size=batch_size,\n", - " validation_split=0.2,\n", - " subset='validation',\n", - " seed=seed)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "rdSr0Nt3q_ns" - }, - "outputs": [], - "source": [ - "raw_test_ds = tf.keras.utils.text_dataset_from_directory(\n", - " 'aclImdb/test',\n", - " batch_size=batch_size)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "qJmTiO0IYAjm" - }, - "source": [ - "### Prepare the dataset for training\n", - "\n", - "Next, you will standardize, tokenize, and vectorize the data using the helpful `tf.keras.layers.TextVectorization` layer.\n", - "\n", - "Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words, by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer.\n", - "\n", - "As you saw above, the reviews contain various HTML tags like `
<br/>`. These tags will not be removed by the default standardizer in the `TextVectorization` layer (which converts text to lowercase and strips punctuation by default, but doesn't strip HTML). You will write a custom standardization function to remove the HTML." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ZVcHl-SLrH-u" - }, - "source": [ - "Note: To prevent [training-testing skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew) (also known as training-serving skew), it is important to preprocess the data identically at train and test time. To facilitate this, the `TextVectorization` layer can be included directly inside your model, as shown later in this tutorial." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "SDRI_s_tX1Hk" - }, - "outputs": [], - "source": [ - "def custom_standardization(input_data):\n", - " lowercase = tf.strings.lower(input_data)\n", - " stripped_html = tf.strings.regex_replace(lowercase, '<br />
', ' ')\n", - " return tf.strings.regex_replace(stripped_html,\n", - " '[%s]' % re.escape(string.punctuation),\n", - " '')" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "d2d3Aw8dsUux" - }, - "source": [ - "Next, you will create a `TextVectorization` layer. You will use this layer to standardize, tokenize, and vectorize our data. You set the `output_mode` to `int` to create unique integer indices for each token.\n", - "\n", - "Note that you're using the default split function, and the custom standardization function you defined above. You'll also define some constants for the model, like an explicit maximum `sequence_length`, which will cause the layer to pad or truncate sequences to exactly `sequence_length` values." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "-c76RvSzsMnX" - }, - "outputs": [], - "source": [ - "max_features = 10000\n", - "sequence_length = 250\n", - "\n", - "vectorize_layer = layers.TextVectorization(\n", - " standardize=custom_standardization,\n", - " max_tokens=max_features,\n", - " output_mode='int',\n", - " output_sequence_length=sequence_length)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "vlFOpfF6scT6" - }, - "source": [ - "Next, you will call `adapt` to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "lAhdjK7AtroA" - }, - "source": [ - "Note: It's important to only use your training data when calling adapt (using the test set would leak information)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "GH4_2ZGJsa_X" - }, - "outputs": [], - "source": [ - "# Make a text-only dataset (without labels), then call adapt\n", - "train_text = raw_train_ds.map(lambda x, y: x)\n", - "vectorize_layer.adapt(train_text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "SHQVEFzNt-K_" - }, - "source": [ - "Let's create a function to see the result of using this layer to preprocess some data." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "SCIg_T50wOCU" - }, - "outputs": [], - "source": [ - "def vectorize_text(text, label):\n", - " text = tf.expand_dims(text, -1)\n", - " return vectorize_layer(text), label" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "XULcm6B3xQIO" - }, - "outputs": [], - "source": [ - "# retrieve a batch (of 32 reviews and labels) from the dataset\n", - "text_batch, label_batch = next(iter(raw_train_ds))\n", - "first_review, first_label = text_batch[0], label_batch[0]\n", - "print(\"Review\", first_review)\n", - "print(\"Label\", raw_train_ds.class_names[first_label])\n", - "print(\"Vectorized review\", vectorize_text(first_review, first_label))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6u5EX0hxyNZT" - }, - "source": [ - "As you can see above, each token has been replaced by an integer. You can lookup the token (string) that each integer corresponds to by calling `.get_vocabulary()` on the layer." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "kRq9hTQzhVhW" - }, - "outputs": [], - "source": [ - "print(\"1287 ---> \",vectorize_layer.get_vocabulary()[1287])\n", - "print(\" 313 ---> \",vectorize_layer.get_vocabulary()[313])\n", - "print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XD2H6utRydGv" - }, - "source": [ - "You are nearly ready to train your model. As a final preprocessing step, you will apply the TextVectorization layer you created earlier to the train, validation, and test dataset." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "2zhmpeViI1iG" - }, - "outputs": [], - "source": [ - "train_ds = raw_train_ds.map(vectorize_text)\n", - "val_ds = raw_val_ds.map(vectorize_text)\n", - "test_ds = raw_test_ds.map(vectorize_text)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "YsVQyPMizjuO" - }, - "source": [ - "### Configure the dataset for performance\n", - "\n", - "These are two important methods you should use when loading data to make sure that I/O does not become blocking.\n", - "\n", - "`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.\n", - "\n", - "`.prefetch()` overlaps data preprocessing and model execution while training.\n", - "\n", - "You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "wMcs_H7izm5m" - }, - "outputs": [], - "source": [ - "AUTOTUNE = tf.data.AUTOTUNE\n", - "\n", - "train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\n", - "val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)\n", - "test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "LLC02j2g-llC" - }, - "source": [ - "### Create the model\n", - "\n", - "It's time to create your neural network:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "dkQP6in8yUBR" - }, - "outputs": [], - "source": [ - "embedding_dim = 16" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "xpKOoWgu-llD" - }, - "outputs": [], - "source": [ - "model = tf.keras.Sequential([\n", - " layers.Embedding(max_features, embedding_dim),\n", - " layers.Dropout(0.2),\n", - " layers.GlobalAveragePooling1D(),\n", - " layers.Dropout(0.2),\n", - " layers.Dense(1, activation='sigmoid')])\n", - "\n", - "model.summary()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6PbKQ6mucuKL" - }, - "source": [ - "The layers are stacked sequentially to build the classifier:\n", - "\n", - "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`. To learn more about embeddings, check out the [Word embeddings](https://www.tensorflow.org/text/guide/word_embeddings) tutorial.\n", - "2. 
Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n", - "3. The last layer is densely connected with a single output node." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "L4EqVWg4-llM" - }, - "source": [ - "### Loss function and optimizer\n", - "\n", - "A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), you'll use `losses.BinaryCrossentropy` loss function.\n", - "\n", - "Now, configure the model to use an optimizer and a loss function:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Mr0GP-cQ-llN" - }, - "outputs": [], - "source": [ - "model.compile(loss=losses.BinaryCrossentropy(),\n", - " optimizer='adam',\n", - " metrics=[tf.metrics.BinaryAccuracy(threshold=0.5)])" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "35jv_fzP-llU" - }, - "source": [ - "### Train the model\n", - "\n", - "You will train the model by passing the `dataset` object to the fit method." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "tXSGrjWZ-llW" - }, - "outputs": [], - "source": [ - "epochs = 10\n", - "history = model.fit(\n", - " train_ds,\n", - " validation_data=val_ds,\n", - " epochs=epochs)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "9EEGuDVuzb5r" - }, - "source": [ - "### Evaluate the model\n", - "\n", - "Let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "zOMKywn4zReN" - }, - "outputs": [], - "source": [ - "loss, accuracy = model.evaluate(test_ds)\n", - "\n", - "print(\"Loss: \", loss)\n", - "print(\"Accuracy: \", accuracy)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "z1iEXVTR0Z2t" - }, - "source": [ - "This fairly naive approach achieves an accuracy of about 86%." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ldbQqCw2Xc1W" - }, - "source": [ - "### Create a plot of accuracy and loss over time\n", - "\n", - "`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "-YcvZsdvWfDf" - }, - "outputs": [], - "source": [ - "history_dict = history.history\n", - "history_dict.keys()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "1_CH32qJXruI" - }, - "source": [ - "There are four entries: one for each monitored metric during training and validation. 
You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "2SEMeQ5YXs8z" - }, - "outputs": [], - "source": [ - "acc = history_dict['binary_accuracy']\n", - "val_acc = history_dict['val_binary_accuracy']\n", - "loss = history_dict['loss']\n", - "val_loss = history_dict['val_loss']\n", - "\n", - "epochs = range(1, len(acc) + 1)\n", - "\n", - "# \"bo\" is for \"blue dot\"\n", - "plt.plot(epochs, loss, 'bo', label='Training loss')\n", - "# b is for \"solid blue line\"\n", - "plt.plot(epochs, val_loss, 'b', label='Validation loss')\n", - "plt.title('Training and validation loss')\n", - "plt.xlabel('Epochs')\n", - "plt.ylabel('Loss')\n", - "plt.legend()\n", - "\n", - "plt.show()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Z3PJemLPXwz_" - }, - "outputs": [], - "source": [ - "plt.plot(epochs, acc, 'bo', label='Training acc')\n", - "plt.plot(epochs, val_acc, 'b', label='Validation acc')\n", - "plt.title('Training and validation accuracy')\n", - "plt.xlabel('Epochs')\n", - "plt.ylabel('Accuracy')\n", - "plt.legend(loc='lower right')\n", - "\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "hFFyCuJoXy7r" - }, - "source": [ - "In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.\n", - "\n", - "Notice the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration.\n", - "\n", - "This isn't the case for the validation loss and accuracy—they seem to peak before the training accuracy. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.\n", - "\n", - "For this particular case, you could prevent overfitting by simply stopping the training when the validation accuracy is no longer increasing. One way to do so is to use the `tf.keras.callbacks.EarlyStopping` callback." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-to23J3Vy5d3" - }, - "source": [ - "## Export the model\n", - "\n", - "In the code above, you applied the `TextVectorization` layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the `TextVectorization` layer inside your model. To do so, you can create a new model using the weights you just trained." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "FWXsMvryuZuq" - }, - "outputs": [], - "source": [ - "export_model = tf.keras.Sequential([\n", - " vectorize_layer,\n", - " model,\n", - " layers.Activation('sigmoid')\n", - "])\n", - "\n", - "export_model.compile(\n", - " loss=losses.BinaryCrossentropy(from_logits=False), optimizer=\"adam\", metrics=['accuracy']\n", - ")\n", - "\n", - "# Test it with `raw_test_ds`, which yields raw strings\n", - "_, _, loss, accuracy = export_model.evaluate(raw_test_ds)\n", - "print(accuracy)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "TwQgoN88LoEF" - }, - "source": [ - "### Inference on new data\n", - "\n", - "To get predictions for new examples, you can simply call `model.predict()`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "QW355HH5L49K" - }, - "outputs": [], - "source": [ - "examples = tf.constant([\n", - " \"The movie was great!\",\n", - " \"The movie was okay.\",\n", - " \"The movie was terrible...\"\n", - "])\n", - "\n", - "export_model.predict(examples)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "MaxlpFWpzR6c" - }, - "source": [ - "Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment, and reduces the potential for [train/test skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew).\n", - "\n", - "There is a performance difference to keep in mind when choosing where to apply your TextVectorization layer. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.\n", - "\n", - "Visit this [tutorial](https://www.tensorflow.org/tutorials/keras/save_and_load) to learn more about saving models." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "eSSuci_6nCEG" - }, - "source": [ - "## Exercise: multi-class classification on Stack Overflow questions\n", - "\n", - "This tutorial showed how to train a binary classifier from scratch on the IMDB dataset. As an exercise, you can modify this notebook to train a multi-class classifier to predict the tag of a programming question on [Stack Overflow](http://stackoverflow.com/).\n", - "\n", - "A [dataset](https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz) has been prepared for you to use containing the body of several thousand programming questions (for example, \"How can I sort a dictionary by value in Python?\") posted to Stack Overflow. Each of these is labeled with exactly one tag (either Python, CSharp, JavaScript, or Java). 
Your task is to take a question as input, and predict the appropriate tag, in this case, Python.\n", - "\n", - "The dataset you will work with contains several thousand questions extracted from the much larger public Stack Overflow dataset on [BigQuery](https://console.cloud.google.com/marketplace/details/stack-exchange/stack-overflow), which contains more than 17 million posts.\n", - "\n", - "After downloading the dataset, you will find it has a similar directory structure to the IMDB dataset you worked with previously:\n", - "\n", - "```\n", - "train/\n", - "...python/\n", - "......0.txt\n", - "......1.txt\n", - "...javascript/\n", - "......0.txt\n", - "......1.txt\n", - "...csharp/\n", - "......0.txt\n", - "......1.txt\n", - "...java/\n", - "......0.txt\n", - "......1.txt\n", - "```\n", - "\n", - "Note: To increase the difficulty of the classification problem, occurrences of the words Python, CSharp, JavaScript, or Java in the programming questions have been replaced with the word *blank* (as many questions contain the language they're about).\n", - "\n", - "To complete this exercise, you should modify this notebook to work with the Stack Overflow dataset by making the following modifications:\n", - "\n", - "1. At the top of your notebook, update the code that downloads the IMDB dataset with code to download the [Stack Overflow dataset](https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz) that has already been prepared. As the Stack Overflow dataset has a similar directory structure, you will not need to make many modifications.\n", - "\n", - "1. Modify the last layer of your model to `Dense(4)`, as there are now four output classes.\n", - "\n", - "1. When compiling the model, change the loss to `tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)`. This is the correct loss function to use for a multi-class classification problem, when the labels for each class are integers (in this case, they can be 0, *1*, *2*, or *3*). In addition, change the metrics to `metrics=['accuracy']`, since this is a multi-class classification problem (`tf.metrics.BinaryAccuracy` is only used for binary classifiers).\n", - "\n", - "1. When plotting accuracy over time, change `binary_accuracy` and `val_binary_accuracy` to `accuracy` and `val_accuracy`, respectively.\n", - "\n", - "1. Once these changes are complete, you will be able to train a multi-class classifier." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "F0T5SIwSm7uc" - }, - "source": [ - "## Learning more\n", - "\n", - "This tutorial introduced text classification from scratch. 
To learn more about the text classification workflow in general, check out the [Text classification guide](https://developers.google.com/machine-learning/guides/text-classification/) from Google Developers.\n" - ] - } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "name": "text_classification.ipynb", - "provenance": [], - "toc_visible": true - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} From 70c03a9627f3f9a5ffad25d5dbc0ff65674b1cf9 Mon Sep 17 00:00:00 2001 From: Sanskar Modi Date: Thu, 29 Aug 2024 11:03:53 +0530 Subject: [PATCH 3/4] saved the formatted nb --- .../tutorials/keras/text_classification.ipynb | 981 ++++++++++++++++++ 1 file changed, 981 insertions(+) create mode 100644 site/en/tutorials/keras/text_classification.ipynb diff --git a/site/en/tutorials/keras/text_classification.ipynb b/site/en/tutorials/keras/text_classification.ipynb new file mode 100644 index 0000000000..7b44c8e521 --- /dev/null +++ b/site/en/tutorials/keras/text_classification.ipynb @@ -0,0 +1,981 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "Ic4_occAAiAT" + }, + "source": [ + "##### Copyright 2019 The TensorFlow Authors." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "ioaprt5q5US7" + }, + "outputs": [], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "yCl0eTNH5RS3" + }, + "outputs": [], + "source": [ + "#@title MIT License\n", + "#\n", + "# Copyright (c) 2017 François Chollet\n", + "#\n", + "# Permission is hereby granted, free of charge, to any person obtaining a\n", + "# copy of this software and associated documentation files (the \"Software\"),\n", + "# to deal in the Software without restriction, including without limitation\n", + "# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n", + "# and/or sell copies of the Software, and to permit persons to whom the\n", + "# Software is furnished to do so, subject to the following conditions:\n", + "#\n", + "# The above copyright notice and this permission notice shall be included in\n", + "# all copies or substantial portions of the Software.\n", + "#\n", + "# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n", + "# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n", + "# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n", + "# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n", + "# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n", + "# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n", + "# DEALINGS IN THE SOFTWARE." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ItXfxkxvosLH" + }, + "source": [ + "# Basic text classification" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hKY4XMc9o8iB" + }, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " View on TensorFlow.org\n", + " \n", + " Run in Google Colab\n", + " \n", + " View source on GitHub\n", + " \n", + " Download notebook\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Eg62Pmz3o83v" + }, + "source": [ + "This tutorial demonstrates text classification starting from plain text files stored on disk. You'll train a binary classifier to perform sentiment analysis on an IMDB dataset. At the end of the notebook, there is an exercise for you to try, in which you'll train a multi-class classifier to predict the tag for a programming question on Stack Overflow.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "8RZOuS9LWQvv" + }, + "outputs": [], + "source": [ + "import matplotlib.pyplot as plt\n", + "import os\n", + "import re\n", + "import shutil\n", + "import string\n", + "import tensorflow as tf\n", + "\n", + "from tensorflow.keras import layers\n", + "from tensorflow.keras import losses\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "6-tTFS04dChr" + }, + "outputs": [], + "source": [ + "print(tf.__version__)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "NBTI1bi8qdFV" + }, + "source": [ + "## Sentiment analysis\n", + "\n", + "This notebook trains a sentiment analysis model to classify movie reviews as *positive* or *negative*, based on the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.\n", + "\n", + "You'll use the [Large Movie Review Dataset](https://ai.stanford.edu/~amaas/data/sentiment/) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "iAsKG535pHep" + }, + "source": [ + "### Download and explore the IMDB dataset\n", + "\n", + "Let's download and extract the dataset, then explore the directory structure." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "k7ZYnuajVlFN" + }, + "outputs": [], + "source": [ + "url = \"https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\"\n", + "\n", + "dataset = tf.keras.utils.get_file(\"aclImdb_v1\", url,\n", + " untar=True, cache_dir='.',\n", + " cache_subdir='')\n", + "\n", + "dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "355CfOvsV1pl" + }, + "outputs": [], + "source": [ + "os.listdir(dataset_dir)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "7ASND15oXpF1" + }, + "outputs": [], + "source": [ + "train_dir = os.path.join(dataset_dir, 'train')\n", + "os.listdir(train_dir)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ysMNMI1CWDFD" + }, + "source": [ + "The `aclImdb/train/pos` and `aclImdb/train/neg` directories contain many text files, each of which is a single movie review. Let's take a look at one of them." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "R7g8hFvzWLIZ" + }, + "outputs": [], + "source": [ + "sample_file = os.path.join(train_dir, 'pos/1181_9.txt')\n", + "with open(sample_file) as f:\n", + " print(f.read())" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Mk20TEm6ZRFP" + }, + "source": [ + "### Load the dataset\n", + "\n", + "Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the helpful [text_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory) utility, which expects a directory structure as follows.\n", + "\n", + "```\n", + "main_directory/\n", + "...class_a/\n", + "......a_text_1.txt\n", + "......a_text_2.txt\n", + "...class_b/\n", + "......b_text_1.txt\n", + "......b_text_2.txt\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nQauv38Lnok3" + }, + "source": [ + "To prepare a dataset for binary classification, you will need two folders on disk, corresponding to `class_a` and `class_b`. These will be the positive and negative movie reviews, which can be found in `aclImdb/train/pos` and `aclImdb/train/neg`. As the IMDB dataset contains additional folders, you will remove them before using this utility." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "VhejsClzaWfl" + }, + "outputs": [], + "source": [ + "remove_dir = os.path.join(train_dir, 'unsup')\n", + "shutil.rmtree(remove_dir)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "95kkUdRoaeMw" + }, + "source": [ + "Next, you will use the `text_dataset_from_directory` utility to create a labeled `tf.data.Dataset`. [tf.data](https://www.tensorflow.org/guide/data) is a powerful collection of tools for working with data.\n", + "\n", + "When running a machine learning experiment, it is a best practice to divide your dataset into three splits: [train](https://developers.google.com/machine-learning/glossary#training_set), [validation](https://developers.google.com/machine-learning/glossary#validation_set), and [test](https://developers.google.com/machine-learning/glossary#test-set).\n", + "\n", + "The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the `validation_split` argument below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "nOrK-MTYaw3C" + }, + "outputs": [], + "source": [ + "batch_size = 32\n", + "seed = 42\n", + "\n", + "raw_train_ds = tf.keras.utils.text_dataset_from_directory(\n", + " 'aclImdb/train',\n", + " batch_size=batch_size,\n", + " validation_split=0.2,\n", + " subset='training',\n", + " seed=seed)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "5Y33oxOUpYkh" + }, + "source": [ + "As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (or 20,000) for training. As you will see in a moment, you can train a model by passing a dataset directly to `model.fit`. If you're new to `tf.data`, you can also iterate over the dataset and print out a few examples as follows." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "51wNaPPApk1K" + }, + "outputs": [], + "source": [ + "for text_batch, label_batch in raw_train_ds.take(1):\n", + " for i in range(3):\n", + " print(\"Review\", text_batch.numpy()[i])\n", + " print(\"Label\", label_batch.numpy()[i])" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JWq1SUIrp1a-" + }, + "source": [ + "Notice the reviews contain raw text (with punctuation and occasional HTML tags like `<br/>
`). You will show how to handle these in the following section.\n", + "\n", + "The labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the `class_names` property on the dataset.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "MlICTG8spyO2" + }, + "outputs": [], + "source": [ + "print(\"Label 0 corresponds to\", raw_train_ds.class_names[0])\n", + "print(\"Label 1 corresponds to\", raw_train_ds.class_names[1])" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pbdO39vYqdJr" + }, + "source": [ + "Next, you will create a validation and test dataset. You will use the remaining 5,000 reviews from the training set for validation." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "SzxazN8Hq1pF" + }, + "source": [ + "Note: When using the `validation_split` and `subset` arguments, make sure to either specify a random seed, or to pass `shuffle=False`, so that the validation and training splits have no overlap." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "JsMwwhOoqjKF" + }, + "outputs": [], + "source": [ + "raw_val_ds = tf.keras.utils.text_dataset_from_directory(\n", + " 'aclImdb/train',\n", + " batch_size=batch_size,\n", + " validation_split=0.2,\n", + " subset='validation',\n", + " seed=seed)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "rdSr0Nt3q_ns" + }, + "outputs": [], + "source": [ + "raw_test_ds = tf.keras.utils.text_dataset_from_directory(\n", + " 'aclImdb/test',\n", + " batch_size=batch_size)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qJmTiO0IYAjm" + }, + "source": [ + "### Prepare the dataset for training\n", + "\n", + "Next, you will standardize, tokenize, and vectorize the data using the helpful `tf.keras.layers.TextVectorization` layer.\n", + "\n", + "Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words, by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer.\n", + "\n", + "As you saw above, the reviews contain various HTML tags like `
<br/>`. These tags will not be removed by the default standardizer in the `TextVectorization` layer (which converts text to lowercase and strips punctuation by default, but doesn't strip HTML). You will write a custom standardization function to remove the HTML." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZVcHl-SLrH-u" + }, + "source": [ + "Note: To prevent [training-testing skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew) (also known as training-serving skew), it is important to preprocess the data identically at train and test time. To facilitate this, the `TextVectorization` layer can be included directly inside your model, as shown later in this tutorial." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "SDRI_s_tX1Hk" + }, + "outputs": [], + "source": [ + "def custom_standardization(input_data):\n", + " lowercase = tf.strings.lower(input_data)\n", + " stripped_html = tf.strings.regex_replace(lowercase, '<br />
', ' ')\n", + " return tf.strings.regex_replace(stripped_html,\n", + " '[%s]' % re.escape(string.punctuation),\n", + " '')" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "d2d3Aw8dsUux" + }, + "source": [ + "Next, you will create a `TextVectorization` layer. You will use this layer to standardize, tokenize, and vectorize our data. You set the `output_mode` to `int` to create unique integer indices for each token.\n", + "\n", + "Note that you're using the default split function, and the custom standardization function you defined above. You'll also define some constants for the model, like an explicit maximum `sequence_length`, which will cause the layer to pad or truncate sequences to exactly `sequence_length` values." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-c76RvSzsMnX" + }, + "outputs": [], + "source": [ + "max_features = 10000\n", + "sequence_length = 250\n", + "\n", + "vectorize_layer = layers.TextVectorization(\n", + " standardize=custom_standardization,\n", + " max_tokens=max_features,\n", + " output_mode='int',\n", + " output_sequence_length=sequence_length)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "vlFOpfF6scT6" + }, + "source": [ + "Next, you will call `adapt` to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lAhdjK7AtroA" + }, + "source": [ + "Note: It's important to only use your training data when calling adapt (using the test set would leak information)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "GH4_2ZGJsa_X" + }, + "outputs": [], + "source": [ + "# Make a text-only dataset (without labels), then call adapt\n", + "train_text = raw_train_ds.map(lambda x, y: x)\n", + "vectorize_layer.adapt(train_text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "SHQVEFzNt-K_" + }, + "source": [ + "Let's create a function to see the result of using this layer to preprocess some data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "SCIg_T50wOCU" + }, + "outputs": [], + "source": [ + "def vectorize_text(text, label):\n", + " text = tf.expand_dims(text, -1)\n", + " return vectorize_layer(text), label" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "XULcm6B3xQIO" + }, + "outputs": [], + "source": [ + "# retrieve a batch (of 32 reviews and labels) from the dataset\n", + "text_batch, label_batch = next(iter(raw_train_ds))\n", + "first_review, first_label = text_batch[0], label_batch[0]\n", + "print(\"Review\", first_review)\n", + "print(\"Label\", raw_train_ds.class_names[first_label])\n", + "print(\"Vectorized review\", vectorize_text(first_review, first_label))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6u5EX0hxyNZT" + }, + "source": [ + "As you can see above, each token has been replaced by an integer. You can lookup the token (string) that each integer corresponds to by calling `.get_vocabulary()` on the layer." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "kRq9hTQzhVhW" + }, + "outputs": [], + "source": [ + "print(\"1287 ---> \",vectorize_layer.get_vocabulary()[1287])\n", + "print(\" 313 ---> \",vectorize_layer.get_vocabulary()[313])\n", + "print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XD2H6utRydGv" + }, + "source": [ + "You are nearly ready to train your model. As a final preprocessing step, you will apply the TextVectorization layer you created earlier to the train, validation, and test dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2zhmpeViI1iG" + }, + "outputs": [], + "source": [ + "train_ds = raw_train_ds.map(vectorize_text)\n", + "val_ds = raw_val_ds.map(vectorize_text)\n", + "test_ds = raw_test_ds.map(vectorize_text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YsVQyPMizjuO" + }, + "source": [ + "### Configure the dataset for performance\n", + "\n", + "These are two important methods you should use when loading data to make sure that I/O does not become blocking.\n", + "\n", + "`.cache()` keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.\n", + "\n", + "`.prefetch()` overlaps data preprocessing and model execution while training.\n", + "\n", + "You can learn more about both methods, as well as how to cache data to disk in the [data performance guide](https://www.tensorflow.org/guide/data_performance)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "wMcs_H7izm5m" + }, + "outputs": [], + "source": [ + "AUTOTUNE = tf.data.AUTOTUNE\n", + "\n", + "train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\n", + "val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)\n", + "test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "LLC02j2g-llC" + }, + "source": [ + "### Create the model\n", + "\n", + "It's time to create your neural network:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "dkQP6in8yUBR" + }, + "outputs": [], + "source": [ + "embedding_dim = 16" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "xpKOoWgu-llD" + }, + "outputs": [], + "source": [ + "model = tf.keras.Sequential([\n", + " layers.Embedding(max_features, embedding_dim),\n", + " layers.Dropout(0.2),\n", + " layers.GlobalAveragePooling1D(),\n", + " layers.Dropout(0.2),\n", + " layers.Dense(1, activation='sigmoid')])\n", + "\n", + "model.summary()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6PbKQ6mucuKL" + }, + "source": [ + "The layers are stacked sequentially to build the classifier:\n", + "\n", + "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`. To learn more about embeddings, check out the [Word embeddings](https://www.tensorflow.org/text/guide/word_embeddings) tutorial.\n", + "2. 
Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n", + "3. The last layer is densely connected with a single output node." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "L4EqVWg4-llM" + }, + "source": [ + "### Loss function and optimizer\n", + "\n", + "A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), you'll use `losses.BinaryCrossentropy` loss function.\n", + "\n", + "Now, configure the model to use an optimizer and a loss function:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Mr0GP-cQ-llN" + }, + "outputs": [], + "source": [ + "model.compile(loss=losses.BinaryCrossentropy(),\n", + " optimizer='adam',\n", + " metrics=[tf.metrics.BinaryAccuracy(threshold=0.5)])" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "35jv_fzP-llU" + }, + "source": [ + "### Train the model\n", + "\n", + "You will train the model by passing the `dataset` object to the fit method." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "tXSGrjWZ-llW" + }, + "outputs": [], + "source": [ + "epochs = 10\n", + "history = model.fit(\n", + " train_ds,\n", + " validation_data=val_ds,\n", + " epochs=epochs)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9EEGuDVuzb5r" + }, + "source": [ + "### Evaluate the model\n", + "\n", + "Let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "zOMKywn4zReN" + }, + "outputs": [], + "source": [ + "loss, accuracy = model.evaluate(test_ds)\n", + "\n", + "print(\"Loss: \", loss)\n", + "print(\"Accuracy: \", accuracy)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "z1iEXVTR0Z2t" + }, + "source": [ + "This fairly naive approach achieves an accuracy of about 86%." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ldbQqCw2Xc1W" + }, + "source": [ + "### Create a plot of accuracy and loss over time\n", + "\n", + "`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-YcvZsdvWfDf" + }, + "outputs": [], + "source": [ + "history_dict = history.history\n", + "history_dict.keys()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1_CH32qJXruI" + }, + "source": [ + "There are four entries: one for each monitored metric during training and validation. 
You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2SEMeQ5YXs8z" + }, + "outputs": [], + "source": [ + "acc = history_dict['binary_accuracy']\n", + "val_acc = history_dict['val_binary_accuracy']\n", + "loss = history_dict['loss']\n", + "val_loss = history_dict['val_loss']\n", + "\n", + "epochs = range(1, len(acc) + 1)\n", + "\n", + "# \"bo\" is for \"blue dot\"\n", + "plt.plot(epochs, loss, 'bo', label='Training loss')\n", + "# b is for \"solid blue line\"\n", + "plt.plot(epochs, val_loss, 'b', label='Validation loss')\n", + "plt.title('Training and validation loss')\n", + "plt.xlabel('Epochs')\n", + "plt.ylabel('Loss')\n", + "plt.legend()\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Z3PJemLPXwz_" + }, + "outputs": [], + "source": [ + "plt.plot(epochs, acc, 'bo', label='Training acc')\n", + "plt.plot(epochs, val_acc, 'b', label='Validation acc')\n", + "plt.title('Training and validation accuracy')\n", + "plt.xlabel('Epochs')\n", + "plt.ylabel('Accuracy')\n", + "plt.legend(loc='lower right')\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hFFyCuJoXy7r" + }, + "source": [ + "In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.\n", + "\n", + "Notice the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration.\n", + "\n", + "This isn't the case for the validation loss and accuracy—they seem to peak before the training accuracy. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.\n", + "\n", + "For this particular case, you could prevent overfitting by simply stopping the training when the validation accuracy is no longer increasing. One way to do so is to use the `tf.keras.callbacks.EarlyStopping` callback." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-to23J3Vy5d3" + }, + "source": [ + "## Export the model\n", + "\n", + "In the code above, you applied the `TextVectorization` layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the `TextVectorization` layer inside your model. To do so, you can create a new model using the weights you just trained." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "FWXsMvryuZuq" + }, + "outputs": [], + "source": [ + "export_model = tf.keras.Sequential([\n", + " vectorize_layer,\n", + " model,\n", + " layers.Activation('sigmoid')\n", + "])\n", + "\n", + "export_model.compile(\n", + " loss=losses.BinaryCrossentropy(from_logits=False), optimizer=\"adam\", metrics=['accuracy']\n", + ")\n", + "\n", + "# Test it with `raw_test_ds`, which yields raw strings\n", + "_, _, loss, accuracy = export_model.evaluate(raw_test_ds)\n", + "print(accuracy)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "TwQgoN88LoEF" + }, + "source": [ + "### Inference on new data\n", + "\n", + "To get predictions for new examples, you can simply call `model.predict()`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "QW355HH5L49K" + }, + "outputs": [], + "source": [ + "examples = tf.constant([\n", + " \"The movie was great!\",\n", + " \"The movie was okay.\",\n", + " \"The movie was terrible...\"\n", + "])\n", + "\n", + "export_model.predict(examples)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MaxlpFWpzR6c" + }, + "source": [ + "Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment, and reduces the potential for [train/test skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew).\n", + "\n", + "There is a performance difference to keep in mind when choosing where to apply your TextVectorization layer. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.\n", + "\n", + "Visit this [tutorial](https://www.tensorflow.org/tutorials/keras/save_and_load) to learn more about saving models." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "eSSuci_6nCEG" + }, + "source": [ + "## Exercise: multi-class classification on Stack Overflow questions\n", + "\n", + "This tutorial showed how to train a binary classifier from scratch on the IMDB dataset. As an exercise, you can modify this notebook to train a multi-class classifier to predict the tag of a programming question on [Stack Overflow](http://stackoverflow.com/).\n", + "\n", + "A [dataset](https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz) has been prepared for you to use containing the body of several thousand programming questions (for example, \"How can I sort a dictionary by value in Python?\") posted to Stack Overflow. Each of these is labeled with exactly one tag (either Python, CSharp, JavaScript, or Java). 
Your task is to take a question as input, and predict the appropriate tag, in this case, Python.\n", + "\n", + "The dataset you will work with contains several thousand questions extracted from the much larger public Stack Overflow dataset on [BigQuery](https://console.cloud.google.com/marketplace/details/stack-exchange/stack-overflow), which contains more than 17 million posts.\n", + "\n", + "After downloading the dataset, you will find it has a similar directory structure to the IMDB dataset you worked with previously:\n", + "\n", + "```\n", + "train/\n", + "...python/\n", + "......0.txt\n", + "......1.txt\n", + "...javascript/\n", + "......0.txt\n", + "......1.txt\n", + "...csharp/\n", + "......0.txt\n", + "......1.txt\n", + "...java/\n", + "......0.txt\n", + "......1.txt\n", + "```\n", + "\n", + "Note: To increase the difficulty of the classification problem, occurrences of the words Python, CSharp, JavaScript, or Java in the programming questions have been replaced with the word *blank* (as many questions contain the language they're about).\n", + "\n", + "To complete this exercise, you should modify this notebook to work with the Stack Overflow dataset by making the following modifications:\n", + "\n", + "1. At the top of your notebook, update the code that downloads the IMDB dataset with code to download the [Stack Overflow dataset](https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz) that has already been prepared. As the Stack Overflow dataset has a similar directory structure, you will not need to make many modifications.\n", + "\n", + "1. Modify the last layer of your model to `Dense(4)`, as there are now four output classes.\n", + "\n", + "1. When compiling the model, change the loss to `tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)`. This is the correct loss function to use for a multi-class classification problem, when the labels for each class are integers (in this case, they can be 0, *1*, *2*, or *3*). In addition, change the metrics to `metrics=['accuracy']`, since this is a multi-class classification problem (`tf.metrics.BinaryAccuracy` is only used for binary classifiers).\n", + "\n", + "1. When plotting accuracy over time, change `binary_accuracy` and `val_binary_accuracy` to `accuracy` and `val_accuracy`, respectively.\n", + "\n", + "1. Once these changes are complete, you will be able to train a multi-class classifier." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "F0T5SIwSm7uc" + }, + "source": [ + "## Learning more\n", + "\n", + "This tutorial introduced text classification from scratch. To learn more about the text classification workflow in general, check out the [Text classification guide](https://developers.google.com/machine-learning/guides/text-classification/) from Google Developers.\n" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "name": "text_classification.ipynb", + "toc_visible": true + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} From bd582f3330cdec34adab3867e4005014a90bbade Mon Sep 17 00:00:00 2001 From: Mark Daoust Date: Thu, 29 Aug 2024 13:53:21 -0700 Subject: [PATCH 4/4] Apply suggestions from code review Use `return_dict=True` for evaluate. 
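With `return_dict=True`, `Model.evaluate` returns a dictionary keyed by metric name instead of a positional list, so the cell no longer has to unpack a fixed number of outputs. A minimal sketch of the updated usage, assuming the `export_model` and `raw_test_ds` objects defined earlier in the tutorial (the printed values are illustrative):

```python
# Minimal sketch of the suggested change (assumes `export_model` and
# `raw_test_ds` from the tutorial). evaluate() now returns a dict keyed by
# metric name, e.g. {'loss': 0.31, 'accuracy': 0.87} (values illustrative),
# so the result no longer depends on the number or order of outputs.
metrics = export_model.evaluate(raw_test_ds, return_dict=True)
print(metrics['accuracy'])
```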
--- site/en/tutorials/keras/text_classification.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/site/en/tutorials/keras/text_classification.ipynb b/site/en/tutorials/keras/text_classification.ipynb index 7b44c8e521..02e768d741 100644 --- a/site/en/tutorials/keras/text_classification.ipynb +++ b/site/en/tutorials/keras/text_classification.ipynb @@ -861,8 +861,8 @@ ")\n", "\n", "# Test it with `raw_test_ds`, which yields raw strings\n", - "_, _, loss, accuracy = export_model.evaluate(raw_test_ds)\n", - "print(accuracy)" + "metrics = export_model.evaluate(raw_test_ds, return_dict=True)\n", + "print(metrics)" ] }, {