{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to Machine Learning with TensorFlow" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#conda install -c conda-forge tensorflow=1.0\n", "# API guide is at https://www.tensorflow.org/api_guides/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## You might think of TensorFlow Core programs as consisting of two discrete sections:\n", "\n", "
1. Building the computational graph.\n", "2. 
Running the computational graph." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tensor(\"A:0\", shape=(2,), dtype=float32)\n", "Tensor(\"mul:0\", shape=(2,), dtype=float32)\n", "(2,)\n", "[ 2. 6.]\n", "[ 2. 6.]\n", "[ 1. 9.]\n", "[ 9. 81.]\n", "b'Hello, TensorFlow!'\n", "[ 2. 6.]\n" ] } ], "source": [ "import tensorflow as tf\n", "sess = tf.InteractiveSession()\n", "\n", "# Some tensor we want to print the value of\n", "a = tf.constant([1.0, 3.0],name = \"A\")\n", "b = a*3\n", "hello = tf.constant('Hello, TensorFlow!')\n", "\n", "print(a)\n", "print(b)\n", "#Explicitly call the shape function\n", "print(a.get_shape())\n", "\n", "#Execute the statements above\n", "print(sess.run(a+a))\n", "print(sess.run(a*2))\n", "print(sess.run(a**2))\n", "print(sess.run(b**2))\n", "print(sess.run(hello))\n", "# Another way to do the same as above\n", "print((a*2).eval())\n", "\n", "sess.close() # Because it is an interactive session we have to close it" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "3 # a rank 0 tensor; this is a scalar with shape []\n", "[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]\n", "[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assign type" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tensor(\"Const_1:0\", shape=(), dtype=float64) Tensor(\"Const_2:0\", shape=(), dtype=float32)\n" ] } ], "source": [ "node1 = tf.constant(3.0, tf.float64) # constant values, once assigned, cannot be changed\n", "node2 = tf.constant(4.0) # also tf.float32 implicitly\n", "print(node1, node2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect. Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[3.0, 4.0]\n", "[3.0, 4.0]\n" ] } ], "source": [ "# Evaluate multiple values with one sess.run call\n", "sess = tf.InteractiveSession()\n", "print(sess.run([node1, node2]))\n", "sess.close()\n", "\n", "# Another way to create sessions\n", "\n", "with tf.Session() as sess:\n", " print(sess.run([node1,node2]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Let us create our first real program, a linear model" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Add scalar a and b\n", "7.5\n", "Add vector a and b\n", "[ 3. 7.]\n", "Add and multiply by 3\n", "22.5\n", "Linear model results with vector inputs\n", "[ 0. 
0.30000001 0.60000002 0.90000004]\n", "Print the op for linear_model name: \"add_1\"\n", "op: \"Add\"\n", "input: \"mul_1\"\n", "input: \"Variable_1/read\"\n", "attr {\n", " key: \"T\"\n", " value {\n", " type: DT_FLOAT\n", " }\n", "}\n", "\n" ] } ], "source": [ "# Reset the graph, used mostly when working in Jupyter notebook environments\n", "tf.reset_default_graph()\n", "\n", "#TensorFlow can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to \n", "#provide a value later.\n", "\n", "sess = tf.InteractiveSession()\n", "a = tf.placeholder(tf.float32)\n", "b = tf.placeholder(tf.float32)\n", "adder_node = a + b # + provides a shortcut for tf.add(a, b)\n", "\n", "#We can evaluate this graph with multiple inputs by using the feed_dict parameter to specify Tensors that provide \n", "#concrete values to these placeholders:\n", "print(\"Add scalar a and b\")\n", "print(sess.run(adder_node, feed_dict={a: 3, b:4.5}))\n", "print(\"Add vector a and b\")\n", "print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))\n", "\n", "# Let us make this a little more involved and add another operation\n", "add_and_triple = adder_node * 3.\n", "print(\"Add and multiply by 3\")\n", "print(sess.run(add_and_triple, {a: 3, b:4.5}))\n", "\n", "#In machine learning we will typically want a model that can take arbitrary inputs, such as the one above. To make the \n", "#model trainable, we need to be able to modify the graph to get new outputs with the same input. Variables allow us to \n", "#add trainable parameters to a graph. They are constructed with a type and initial value:\n", "\n", "W = tf.Variable([.3], tf.float32)\n", "b = tf.Variable([-.3], tf.float32)\n", "x = tf.placeholder(tf.float32)\n", "linear_model = W * x + b\n", "\n", "#Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are \n", "#not initialized when you call tf.Variable. 
To initialize all the variables in a TensorFlow program, you must explicitly \n", "#call a special operation as follows:\n", "\n", "init = tf.global_variables_initializer()\n", "sess.run(init)\n", "\n", "#Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously as follows:\n", "print(\"Linear model results with vector inputs\")\n", "print(sess.run(linear_model, {x:[1,2,3,4]}))\n", "print(\"Print the op for linear_model\",linear_model.op)\n", "#show_graph(add_and_triple)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Determine the error - L2 norm squared" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "23.66\n" ] } ], "source": [ "# Let us determine the error now, given the correct values are in a variable or 'placeholder' y\n", "y = tf.placeholder(tf.float32)\n", "squared_deltas = tf.square(linear_model - y)\n", "loss = tf.reduce_sum(squared_deltas)\n", "print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reassign values to variables W and b" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[array([-1.], dtype=float32), array([ 1.], dtype=float32)]\n", "0.0\n" ] } ], "source": [ "# Now that is pretty high, so let us reassign the values for our parameters to W = -1 and b = 1\n", "fixW = tf.assign(W, [-1.]) # used to reassign values to variables\n", "fixb = tf.assign(b, [1.])\n", "print(sess.run([fixW, fixb]))\n", "print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Given inputs and outputs, let us train our model to obtain weights and biases" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Weights and biases in iteration 0 -0.22 -0.456\n", "Weights and biases in iteration 50 -0.712702 0.155309\n", "Weights and biases in iteration 100 -0.842705 0.537533\n", "Weights and biases in iteration 150 -0.913881 0.7468\n", "Weights and biases in iteration 200 -0.95285 0.861373\n", "Weights and biases in iteration 250 -0.974185 0.924102\n", "Weights and biases in iteration 300 -0.985867 0.958446\n", "Weights and biases in iteration 350 -0.992262 0.977249\n", "Weights and biases in iteration 400 -0.995763 0.987544\n", "Weights and biases in iteration 450 -0.99768 0.99318\n", "Final Weights and biases\n", "[array([-0.99871475], dtype=float32), array([ 0.99622124], dtype=float32)]\n" ] } ], "source": [ "# In the real world we don't have the answers, so now let us train this to find the right weights and biases\n", "optimizer = tf.train.GradientDescentOptimizer(0.01)\n", "train = optimizer.minimize(loss)\n", "sess.run(init) # reset values to incorrect defaults.\n", "# Iterate to find the minimum\n", "for i in range(500):\n", " return_val = sess.run([train,W,b], {x:[1,2,3,4], y:[0,-1,-2,-3]})\n", " if(not(i%50)):\n", " print(\"Weights and biases in iteration \",i,return_val[1][0],return_val[2][0]) # Use this to see how the optimizer progresses\n", "\n", "print(\"Final Weights and biases\")\n", "print(sess.run([W, b]))\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Let us summarize the code for a linear regression model here" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": false }, 
"outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Weights and biases in iteration 0 -0.22 -0.456\n", "Weights and biases in iteration 50 -0.712702 0.155309\n", "Weights and biases in iteration 100 -0.842705 0.537533\n", "Weights and biases in iteration 150 -0.913881 0.7468\n", "Weights and biases in iteration 200 -0.95285 0.861373\n", "Weights and biases in iteration 250 -0.974185 0.924102\n", "Weights and biases in iteration 300 -0.985867 0.958446\n", "Weights and biases in iteration 350 -0.992262 0.977249\n", "Weights and biases in iteration 400 -0.995763 0.987544\n", "Weights and biases in iteration 450 -0.99768 0.99318\n", "Weights and biases in iteration 500 -0.99873 0.996266\n", "Weights and biases in iteration 550 -0.999305 0.997956\n", "Weights and biases in iteration 600 -0.999619 0.998881\n", "Weights and biases in iteration 650 -0.999792 0.999387\n", "Weights and biases in iteration 700 -0.999886 0.999665\n", "Weights and biases in iteration 750 -0.999938 0.999816\n", "Weights and biases in iteration 800 -0.999966 0.999899\n", "Weights and biases in iteration 850 -0.999981 0.999945\n", "Weights and biases in iteration 900 -0.99999 0.99997\n", "Weights and biases in iteration 950 -0.999994 0.999983\n", "W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11\n" ] } ], "source": [ "#To summarize and evaluate accuracy\n", "import numpy as np\n", "import tensorflow as tf\n", "\n", "# Model parameters\n", "W = tf.Variable([.3], tf.float32)\n", "b = tf.Variable([-.3], tf.float32)\n", "# Model input and output\n", "x = tf.placeholder(tf.float32)\n", "linear_model = W * x + b\n", "y = tf.placeholder(tf.float32)\n", "# loss\n", "loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares\n", "# optimizer\n", "optimizer = tf.train.GradientDescentOptimizer(0.01)\n", "train = optimizer.minimize(loss)\n", "# training data\n", "x_train = [1,2,3,4]\n", "y_train = [0,-1,-2,-3]\n", "# training loop\n", "init = tf.global_variables_initializer()\n", "sess = tf.InteractiveSession()\n", "sess.run(init) # reset values to wrong\n", "for i in range(1000):\n", " return_val = sess.run([train,W,b], {x:x_train, y:y_train})\n", " if(not(i%50)):\n", " print(\"Weights and biases in iteration \",i,return_val[1][0],return_val[2][0]) # Use this to see how the optimzer progresses\n", "\n", "# evaluate training accuracy\n", "curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train})\n", "print(\"W: %s b: %s loss: %s\"%(curr_W, curr_b, curr_loss))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Let us see tf.contrib.learn makes all of this a lot easier (Older way). Estimators allow you to work at a higher level, without having to deal with sessions." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using default config.\n", "WARNING:tensorflow:Using temporary folder as model directory: /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp9dvbw1p5\n", "INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': , '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_tf_config': gpu_options {\n", " per_process_gpu_memory_fraction: 1.0\n", "}\n", ", '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 600, '_log_step_count_steps': 100, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': '/var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp9dvbw1p5'}\n", "WARNING:tensorflow:From /Users/srijithraj/anaconda3/envs/tensorflow_class/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py:642: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.\n", "Instructions for updating:\n", "Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.\n", "INFO:tensorflow:Create CheckpointSaverHook.\n", "INFO:tensorflow:Saving checkpoints for 1 into /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp9dvbw1p5/model.ckpt.\n", "INFO:tensorflow:loss = 3.5, step = 1\n", "INFO:tensorflow:global_step/sec: 1221.54\n", "INFO:tensorflow:loss = 0.0681773, step = 101 (0.085 sec)\n", "INFO:tensorflow:global_step/sec: 1203.12\n", "INFO:tensorflow:loss = 0.010784, step = 201 (0.084 sec)\n", "INFO:tensorflow:global_step/sec: 1297.31\n", "INFO:tensorflow:loss = 0.00190325, step = 301 (0.076 sec)\n", "INFO:tensorflow:global_step/sec: 1367.9\n", "INFO:tensorflow:loss = 0.000394827, step = 401 (0.072 sec)\n", "INFO:tensorflow:global_step/sec: 1348.16\n", "INFO:tensorflow:loss = 0.000139627, step = 501 (0.074 sec)\n", "INFO:tensorflow:global_step/sec: 1487.63\n", "INFO:tensorflow:loss = 1.31018e-05, step = 601 (0.066 sec)\n", "INFO:tensorflow:global_step/sec: 1531.94\n", "INFO:tensorflow:loss = 4.48527e-06, step = 701 (0.066 sec)\n", "INFO:tensorflow:global_step/sec: 1617.21\n", "INFO:tensorflow:loss = 5.85451e-07, step = 801 (0.061 sec)\n", "INFO:tensorflow:global_step/sec: 1770.41\n", "INFO:tensorflow:loss = 1.85369e-07, step = 901 (0.056 sec)\n", "INFO:tensorflow:Saving checkpoints for 1000 into /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp9dvbw1p5/model.ckpt.\n", "INFO:tensorflow:Loss for final step: 2.97624e-08.\n", "WARNING:tensorflow:From /Users/srijithraj/anaconda3/envs/tensorflow_class/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py:642: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.\n", "Instructions for updating:\n", "Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. 
Also, passing a tensor or list of tags to a scalar summary op is no longer supported.\n", "INFO:tensorflow:Starting evaluation at 2018-07-21-00:43:40\n", "INFO:tensorflow:Restoring parameters from /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp9dvbw1p5/model.ckpt-1000\n", "INFO:tensorflow:Finished evaluation at 2018-07-21-00:43:41\n", "INFO:tensorflow:Saving dict for global step 1000: global_step = 1000, loss = 2.89383e-08\n" ] }, { "data": { "text/plain": [ "{'global_step': 1000, 'loss': 2.8938343e-08}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import tensorflow as tf\n", "# NumPy is often used to load, manipulate and preprocess data.\n", "import numpy as np\n", "\n", "# Declare list of features. We only have one real-valued feature. There are many\n", "# other types of columns that are more complicated and useful.\n", "features = [tf.contrib.layers.real_valued_column(\"x\", dimension=1)]\n", "\n", "# An estimator is the front end to invoke training (fitting) and evaluation\n", "# (inference). There are many predefined types like linear regression,\n", "# logistic regression, linear classification, logistic classification, and\n", "# many neural network classifiers and regressors. The following code\n", "# provides an estimator that does linear regression.\n", "estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)\n", "\n", "# TensorFlow provides many helper methods to read and set up data sets.\n", "# Here we use `numpy_input_fn`. We have to tell the function how many batches\n", "# of data (num_epochs) we want and how big each batch should be.\n", "x = np.array([1., 2., 3., 4.])\n", "y = np.array([0., -1., -2., -3.])\n", "input_fn = tf.contrib.learn.io.numpy_input_fn({\"x\":x}, y, batch_size=4,\n", " num_epochs=1000)\n", "\n", "# We can invoke 1000 training steps by invoking the `fit` method and passing the\n", "# training data set.\n", "estimator.fit(input_fn=input_fn, steps=1000)\n", "\n", "# Here we evaluate how well our model did. 
In a real example, we would want\n", "# to use a separate validation and testing data set to avoid overfitting.\n", "estimator.evaluate(input_fn=input_fn)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['global_step', 'linear/bias_weight', 'linear/bias_weight/ear/bias_weight/part_0/Ftrl', 'linear/bias_weight/ear/bias_weight/part_0/Ftrl_1', 'linear/x/weight', 'linear/x/weight/linear/x/weight/part_0/Ftrl', 'linear/x/weight/linear/x/weight/part_0/Ftrl_1']\n", "[[-0.99987441]]\n", "[ 0.99959004]\n" ] } ], "source": [ "#print(help(estimator))\n", "print(estimator.get_variable_names())\n", "print(estimator.get_variable_value('linear/x/weight'))\n", "print(estimator.get_variable_value('linear/bias_weight'))\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### New way using tf.estimator" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using default config.\n", "WARNING:tensorflow:Using temporary folder as model directory: /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp1gh91_oc\n", "INFO:tensorflow:Using config: {'_model_dir': '/var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp1gh91_oc', '_tf_random_seed': 1, '_save_summary_steps': 100, '_save_checkpoints_secs': 600, '_save_checkpoints_steps': None, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100}\n", "INFO:tensorflow:Create CheckpointSaverHook.\n", "INFO:tensorflow:Saving checkpoints for 1 into /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp1gh91_oc/model.ckpt.\n", "INFO:tensorflow:loss = 14.0, step = 1\n", "INFO:tensorflow:global_step/sec: 1238.97\n", "INFO:tensorflow:loss = 0.13438, step = 101 (0.082 sec)\n", "INFO:tensorflow:global_step/sec: 1578.44\n", "INFO:tensorflow:loss = 0.0124377, step = 201 (0.063 sec)\n", "INFO:tensorflow:global_step/sec: 1572.64\n", "INFO:tensorflow:loss = 0.00116163, step = 301 (0.064 sec)\n", "INFO:tensorflow:global_step/sec: 1515.62\n", "INFO:tensorflow:loss = 0.000108581, step = 401 (0.066 sec)\n", "INFO:tensorflow:global_step/sec: 1655.3\n", "INFO:tensorflow:loss = 1.01511e-05, step = 501 (0.060 sec)\n", "INFO:tensorflow:global_step/sec: 1663.45\n", "INFO:tensorflow:loss = 9.49033e-07, step = 601 (0.061 sec)\n", "INFO:tensorflow:global_step/sec: 1551.11\n", "INFO:tensorflow:loss = 8.90863e-08, step = 701 (0.065 sec)\n", "INFO:tensorflow:global_step/sec: 1672.83\n", "INFO:tensorflow:loss = 8.3906e-09, step = 801 (0.060 sec)\n", "INFO:tensorflow:global_step/sec: 1595.56\n", "INFO:tensorflow:loss = 8.06452e-10, step = 901 (0.062 sec)\n", "INFO:tensorflow:Saving checkpoints for 1000 into /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp1gh91_oc/model.ckpt.\n", "INFO:tensorflow:Loss for final step: 7.92717e-11.\n", "INFO:tensorflow:Starting evaluation at 2018-07-21-00:43:42\n", "INFO:tensorflow:Restoring parameters from /var/folders/qh/fr09lp897f7360g0yvw8dsl40000gn/T/tmp1gh91_oc/model.ckpt-1000\n", "INFO:tensorflow:Evaluation [1/1]\n", "INFO:tensorflow:Finished evaluation at 2018-07-21-00:43:42\n", "INFO:tensorflow:Saving dict for global step 1000: average_loss = 1.94058e-11, global_step = 1000, loss = 7.76232e-11\n", "Result is {'average_loss': 1.940581e-11, 'loss': 7.7623241e-11, 'global_step': 1000}\n" ] } ], "source": [ "import tensorflow as tf\n", "# NumPy is often used to 
load, manipulate and preprocess data.\n", "import numpy as np\n", "\n", "# Declare list of features. We only have one real-valued feature. There are many\n", "# other types of columns that are more complicated and useful.\n", "\n", "# Info on feature columns at https://www.tensorflow.org/guide/feature_columns\n", "features = [tf.feature_column.numeric_column(key=\"x\")]\n", "\n", "# An estimator is the front end to invoke training (fitting) and evaluation\n", "# (inference). There are many predefined types like linear regression,\n", "# logistic regression, linear classification, logistic classification, and\n", "# many neural network classifiers and regressors. The following code\n", "# provides an estimator that does linear regression.\n", "model = tf.estimator.LinearRegressor(feature_columns=features)\n", "\n", "\n", "# Define an input data feeder function, the same can be done for test data as well\n", "# More info here https://www.tensorflow.org/guide/premade_estimators#create_input_functions\n", "def input_fn():\n", " features = {'x': np.array([1., 2., 3., 4.])}\n", " labels = np.array([0., -1., -2., -3.])\n", " return features, labels\n", "\n", "#input_fn = tf.contrib.learn.io.numpy_input_fn({\"x\":x}, y, batch_size=4,\n", "# num_epochs=1000)\n", "\n", "# We can invoke 1000 training steps by invoking the `fit` method and passing the\n", "# training data set.\n", "model.train(input_fn=input_fn, steps=1000)\n", "\n", "# Here we evaluate how well our model did. In a real example, we would want\n", "# to use a separate validation and testing data set to avoid overfitting.\n", "eval_result = model.evaluate(input_fn=input_fn,steps=1) # Evaluate 'steps' batches of data, in this case that is 1\n", "print(\"Result is \",eval_result)\n", "\n", "\n", "# Newer versions of tensorflow have access to variables using get_variable_names and get_variable_value" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let us look at the Iris dataset and train a Neural Network on it. This has 4 features per observation and 3 potential classes for each observation." 
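, "\n", "The four features are sepal length, sepal width, petal length and petal width (in cm), and the three classes are Iris setosa, Iris versicolor and Iris virginica. The cells below assume the two CSV files are already on disk; if they are not, a download step along the following lines fetches them first (a minimal sketch -- the helper name maybe_download is ours, not a TensorFlow API):\n", "\n", "```python\n", "import os\n", "from urllib.request import urlopen\n", "\n", "def maybe_download(filename, url):\n", "    # Fetch the CSV once; later runs reuse the local copy.\n", "    if not os.path.exists(filename):\n", "        with open(filename, 'wb') as f:\n", "            f.write(urlopen(url).read())\n", "\n", "maybe_download('iris_training.csv', 'http://download.tensorflow.org/data/iris_training.csv')\n", "maybe_download('iris_test.csv', 'http://download.tensorflow.org/data/iris_test.csv')\n", "```"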
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", "from __future__ import print_function\n", "\n", "import os\n", "from urllib.request import urlopen\n", "\n", "import tensorflow as tf\n", "import numpy as np\n", "\n", "IRIS_TRAINING = \"iris_training.csv\"\n", "IRIS_TRAINING_URL = \"http://download.tensorflow.org/data/iris_training.csv\"\n", "\n", "IRIS_TEST = \"iris_test.csv\"\n", "IRIS_TEST_URL = \"http://download.tensorflow.org/data/iris_test.csv\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "!wget http://download.tensorflow.org/data/iris_training.csv\n", "!wget http://download.tensorflow.org/data/iris_test.csv\n", "\n", "![title](https://www.tensorflow.org/images/iris_three_species.jpg)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Skip the download part since I already have it available locally\n", "training_set = tf.contrib.learn.datasets.base.load_csv_with_header(\n", " filename='iris_training.csv',\n", " target_dtype=np.int,\n", " features_dtype=np.float32)\n", "test_set = tf.contrib.learn.datasets.base.load_csv_with_header(\n", " filename='iris_test.csv',\n", " target_dtype=np.int,\n", " features_dtype=np.float32)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using config: {'_model_dir': '/tmp/iris_model', '_tf_random_seed': 1, '_save_summary_steps': 50, '_save_checkpoints_secs': None, '_save_checkpoints_steps': 300, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 50}\n", "INFO:tensorflow:Create CheckpointSaverHook.\n", "INFO:tensorflow:Saving checkpoints for 1 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:loss = 303.975, step = 1\n", "INFO:tensorflow:global_step/sec: 583.343\n", "INFO:tensorflow:global_step/sec: 825.338\n", "INFO:tensorflow:loss = 9.06935, step = 101 (0.147 sec)\n", "INFO:tensorflow:global_step/sec: 782.817\n", "INFO:tensorflow:global_step/sec: 775.64\n", "INFO:tensorflow:loss = 13.2485, step = 201 (0.128 sec)\n", "INFO:tensorflow:global_step/sec: 750.919\n", "INFO:tensorflow:Saving checkpoints for 301 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:global_step/sec: 461.59\n", "INFO:tensorflow:loss = 9.514, step = 301 (0.175 sec)\n", "INFO:tensorflow:global_step/sec: 813.077\n", "INFO:tensorflow:global_step/sec: 704.295\n", "INFO:tensorflow:loss = 7.63298, step = 401 (0.133 sec)\n", "INFO:tensorflow:global_step/sec: 758.886\n", "INFO:tensorflow:global_step/sec: 810.033\n", "INFO:tensorflow:loss = 8.8085, step = 501 (0.127 sec)\n", "INFO:tensorflow:global_step/sec: 762.437\n", "INFO:tensorflow:Saving checkpoints for 601 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:global_step/sec: 488.677\n", "INFO:tensorflow:loss = 9.00421, step = 601 (0.168 sec)\n", "INFO:tensorflow:global_step/sec: 849.589\n", "INFO:tensorflow:global_step/sec: 805.983\n", "INFO:tensorflow:loss = 1.98124, step = 701 (0.121 sec)\n", "INFO:tensorflow:global_step/sec: 765.124\n", "INFO:tensorflow:global_step/sec: 736.432\n", "INFO:tensorflow:loss = 5.85242, step = 801 (0.134 sec)\n", "INFO:tensorflow:global_step/sec: 753.769\n", "INFO:tensorflow:Saving checkpoints for 901 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:global_step/sec: 
440.436\n", "INFO:tensorflow:loss = 6.7792, step = 901 (0.180 sec)\n", "INFO:tensorflow:global_step/sec: 701.687\n", "INFO:tensorflow:global_step/sec: 664.204\n", "INFO:tensorflow:loss = 13.1764, step = 1001 (0.148 sec)\n", "INFO:tensorflow:global_step/sec: 547.255\n", "INFO:tensorflow:global_step/sec: 655.626\n", "INFO:tensorflow:loss = 7.70901, step = 1101 (0.167 sec)\n", "INFO:tensorflow:global_step/sec: 677.894\n", "INFO:tensorflow:Saving checkpoints for 1201 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:global_step/sec: 426.105\n", "INFO:tensorflow:loss = 5.41436, step = 1201 (0.191 sec)\n", "INFO:tensorflow:global_step/sec: 740.873\n", "INFO:tensorflow:global_step/sec: 802.117\n", "INFO:tensorflow:loss = 3.38481, step = 1301 (0.130 sec)\n", "INFO:tensorflow:global_step/sec: 839.643\n", "INFO:tensorflow:global_step/sec: 919.727\n", "INFO:tensorflow:loss = 8.35951, step = 1401 (0.114 sec)\n", "INFO:tensorflow:global_step/sec: 863.47\n", "INFO:tensorflow:Saving checkpoints for 1501 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:global_step/sec: 471.707\n", "INFO:tensorflow:loss = 11.136, step = 1501 (0.164 sec)\n", "INFO:tensorflow:global_step/sec: 856.12\n", "INFO:tensorflow:global_step/sec: 899.718\n", "INFO:tensorflow:loss = 2.85253, step = 1601 (0.114 sec)\n", "INFO:tensorflow:global_step/sec: 617.46\n", "INFO:tensorflow:global_step/sec: 852.151\n", "INFO:tensorflow:loss = 9.34013, step = 1701 (0.139 sec)\n", "INFO:tensorflow:global_step/sec: 877.682\n", "INFO:tensorflow:Saving checkpoints for 1801 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:global_step/sec: 495.797\n", "INFO:tensorflow:loss = 6.59153, step = 1801 (0.158 sec)\n", "INFO:tensorflow:global_step/sec: 936.032\n", "INFO:tensorflow:global_step/sec: 936.643\n", "INFO:tensorflow:loss = 5.76084, step = 1901 (0.107 sec)\n", "INFO:tensorflow:global_step/sec: 904.307\n", "INFO:tensorflow:Saving checkpoints for 2000 into /tmp/iris_model/model.ckpt.\n", "INFO:tensorflow:Loss for final step: 4.61897.\n", "INFO:tensorflow:Starting evaluation at 2018-07-21-00:43:46\n", "INFO:tensorflow:Restoring parameters from /tmp/iris_model/model.ckpt-2000\n", "INFO:tensorflow:Finished evaluation at 2018-07-21-00:43:46\n", "INFO:tensorflow:Saving dict for global step 2000: accuracy = 0.991667, average_loss = 0.0420791, global_step = 2000, loss = 5.04949\n", "\n", "Test Accuracy: 0.991667\n", "\n", "INFO:tensorflow:Restoring parameters from /tmp/iris_model/model.ckpt-2000\n", "New Samples, Class Predictions: [array([b'1'], dtype=object), array([b'2'], dtype=object)]\n", "\n" ] } ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", "from __future__ import print_function\n", "\n", "import os\n", "import urllib\n", "\n", "import numpy as np\n", "import tensorflow as tf\n", "\n", "# Data sets\n", "IRIS_TRAINING = \"iris_training.csv\"\n", "IRIS_TRAINING_URL = \"http://download.tensorflow.org/data/iris_training.csv\"\n", "\n", "IRIS_TEST = \"iris_test.csv\"\n", "IRIS_TEST_URL = \"http://download.tensorflow.org/data/iris_test.csv\"\n", "\n", "def main():\n", " \n", " \n", " # Load datasets.\n", " training_set = tf.contrib.learn.datasets.base.load_csv_with_header(\n", " filename=IRIS_TRAINING,\n", " target_dtype=np.int,\n", " features_dtype=np.float32)\n", " test_set = tf.contrib.learn.datasets.base.load_csv_with_header(\n", " filename=IRIS_TEST,\n", " target_dtype=np.int,\n", " features_dtype=np.float32)\n", "\n", " # Specify that all features have real-value data\n", " 
feature_columns = [tf.feature_column.numeric_column(\"x\", shape=[4])]\n", "\n", " # Configure behavior for logging - see here https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/estimator/RunConfig\n", " config = tf.estimator.RunConfig()\n", " config = config.replace(save_checkpoints_steps = 300, log_step_count_steps = 50, save_summary_steps = 50)\n", " \n", " # Build 3 layer DNN with 10, 20, 10 units respectively.\n", " classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,\n", " hidden_units=[10, 20, 10],\n", " n_classes=3,\n", " config = config, # configuration\n", " optimizer=tf.train.ProximalAdagradOptimizer( #optimizer\n", " learning_rate=0.1,\n", " l1_regularization_strength=0.001\n", " ),\n", " activation_fn=tf.nn.relu, #activation function\n", " model_dir=\"/tmp/iris_model\")\n", "\n", " # You should delete this folder before you start if you don't want the model to restart from the saved state.\n", " \n", " os.system('rm -rf /tmp/iris_model')\n", "\n", " # Define the training inputs, another way to feed the data\n", " train_input_fn = tf.estimator.inputs.numpy_input_fn(\n", " x={\"x\": np.array(training_set.data)},\n", " y=np.array(training_set.target),\n", " num_epochs=None,\n", " shuffle=True)\n", "\n", "\n", " # Train model.\n", " classifier.train(input_fn=train_input_fn, steps=2000)\n", "\n", " # Define the test inputs\n", " test_input_fn = tf.estimator.inputs.numpy_input_fn(\n", " x={\"x\": np.array(test_set.data)},\n", " y=np.array(test_set.target),\n", " num_epochs=1,\n", " shuffle=False)\n", "\n", " # Evaluate accuracy.\n", " accuracy_score = classifier.evaluate(input_fn=test_input_fn)[\"accuracy\"]\n", "\n", " print(\"\\nTest Accuracy: {0:f}\\n\".format(accuracy_score))\n", "\n", " # What about predictions? Classify two new flower samples.\n", " new_samples = np.array(\n", " [[6.4, 3.2, 4.5, 1.5],\n", " [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)\n", " \n", " predict_input_fn = tf.estimator.inputs.numpy_input_fn(\n", " x={\"x\": new_samples},\n", " num_epochs=1,\n", " shuffle=False)\n", "\n", " predictions = list(classifier.predict(input_fn=predict_input_fn))\n", " predicted_classes = [p[\"classes\"] for p in predictions]\n", "\n", " print(\n", " \"New Samples, Class Predictions: {}\\n\"\n", " .format(predicted_classes))\n", "\n", "if __name__ == \"__main__\":\n", " main()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MNIST digits classification\n", "\n", "MNIST is a simple computer vision dataset. It consists of images of handwritten digits like these:\n", "\n", "![title](https://www.tensorflow.org/images/MNIST.png)\n", "\n", "It also includes labels for each image, telling us which digit it is.\n", "\n", "
1. Learn about the MNIST data and softmax regressions\n", "2. Create a function that is a model for recognizing digits, based on looking at every pixel in the image\n", "3. Use TensorFlow to train the model to recognize digits by having it \"look\" at thousands of examples (and run our first TensorFlow session to do so)\n", "4. 
Check the model's accuracy with our test data\n", "\n", "The MNIST Data\n", "\n", "The MNIST data is split into three parts: 55,000 data points of training data (mnist.train), 10,000 points of test data (mnist.test), and 5,000 points of validation data (mnist.validation). This split is very important: it's essential in machine learning that we have separate data which we don't learn from so that we can make sure that what we've learned actually generalizes!\n", "\n", "![title](https://www.tensorflow.org/images/mnist-train-xs.png)\n", "\n", "As mentioned earlier, every MNIST data point has two parts: an image of a handwritten digit and a corresponding label. We'll call the images \"x\" and the labels \"y\". Both the training set and test set contain images and their corresponding labels; for example the training images are mnist.train.images and the training labels are mnist.train.labels.\n", "\n", "Each image is 28 pixels by 28 pixels. We can interpret this as a big array of numbers:\n", "\n", "We can flatten this array into a vector of 28x28 = 784 numbers. It doesn't matter how we flatten the array, as long as we're consistent between images. From this perspective, the MNIST images are just a bunch of points in a 784-dimensional vector space, with a very rich structure.\n", "\n", "Flattening the data throws away information about the 2D structure of the image. Isn't that bad? Well, the best computer vision methods do exploit this structure, and we will in later tutorials. But the simple method we will be using here, a softmax regression (defined below), won't.\n", "\n", "The result is that mnist.train.images is a tensor (an n-dimensional array) with a shape of [55000, 784]. The first dimension is an index into the list of images and the second dimension is the index for each pixel in each image. Each entry in the tensor is a pixel intensity between 0 and 1, for a particular pixel in a particular image.\n", "\n", "\n", "Each image in MNIST has a corresponding label, a number between 0 and 9 representing the digit drawn in the image.\n", "\n", "For the purposes of this tutorial, we're going to want our labels as \"one-hot vectors\". A one-hot vector is a vector which is 0 in most dimensions, and 1 in a single dimension. In this case, the nth digit will be represented as a vector which is 1 in the nth dimension. For example, 3 would be [0,0,0,1,0,0,0,0,0,0]. Consequently, mnist.train.labels is a [55000, 10] array of floats.\n", "\n", "![title](https://www.tensorflow.org/images/softmax-regression-vectorequation.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cross entropy cost function\n", "\n", "Because of the shape of the sigmoid function, extreme values as inputs to the sigmoid function tend to result in smaller partial derivatives for that cost function. 
This results in slower learning whenever the input to the sigmoid is far from zero, even though the objective is to drive the cost toward zero.\n", "\n", "It turns out that we can solve the problem by replacing the quadratic cost with a different cost function, known as the cross-entropy.\n", "\n", "For a single sigmoid neuron with output $a$,\n", "\n", "$a = \\sigma(z)$\n", "\n", "where\n", "\n", "$z = w x + b$\n", "\n", "Let the desired output be $y$; the cross-entropy cost is then\n", "\n", "$C = -\\left[ y \\ln(a) + (1-y) \\ln(1-a) \\right]$\n", "\n", "This satisfies the properties of a cost function: it is non-negative, and when $y = 1$ and $a$ is close to 1 (or $y = 0$ and $a$ is close to 0) the cost approaches 0. For example, with $y = 1$ and $a = 0.99$ the cost is $-\\ln(0.99) \\approx 0.01$.\n", "\n", "The partial derivative of this cost with respect to the weight is $x (\\sigma(z) - y)$.\n", "This is great because the gradient is now proportional to $(a - y)$: the farther $a$ is from $y$, the larger the gradient and the faster the neuron learns.\n", "\n", "See here for details: http://neuralnetworksanddeeplearning.com/chap3.html" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\n", "Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz\n", "Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\n", "Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz\n", "Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\n", "Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz\n", "Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\n", "Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz\n", "Cross entropy at step 0 = 2.30259\n", "Cross entropy at step 100 = 0.633378\n", "Cross entropy at step 200 = 0.639661\n", "Cross entropy at step 300 = 0.326474\n", "Cross entropy at step 400 = 0.372144\n", "Cross entropy at step 500 = 0.375368\n", "Cross entropy at step 600 = 0.489319\n", "Cross entropy at step 700 = 0.262653\n", "Cross entropy at step 800 = 0.441976\n", "Cross entropy at step 900 = 0.244338\n", "Cross entropy at step 999 = 0.186102\n", "Accuracy is\n", "0.9187\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXQAAAD8CAYAAABn919SAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJzt3XmYFNXVBvD3DAMICAI6IoII7hpw\nYzSoiRvuGjWKRtxQMcQYP1wTt3xR82kSd01cibviCor7iuIuOgiyI5viIMuw78sw5/vjVFHV1dXL\ndPcwc3ve3/P0U13V1VW3urpPnbr3VrWoKoiIyH0l9V0AIiIqDAZ0IqIiwYBORFQkGNCJiIoEAzoR\nUZFgQCciKhIM6ERERYIBnYioSDCgExEVidJNubKtttpKu3btuilXSUTkvFGjRi1Q1bJM823SgN61\na1dUVFRsylUSETlPRH7MZj5WuRARFQkGdCKiIsGATkRUJBjQiYiKBAM6EVGRYEAnIioSDOhEREXC\nnYA+fxLw4xf1XQoiogZrk15YlJcHetnwxqX1Ww4iogbKnQydiIjSYkAnIioSDOhEREWCAZ2IqEi4\nF9BV67sEREQNknsBvXptfZeAiKhBcjCgr6nvEhARNUgOBnRm6EREcdwL6KsW1ncJiIgaJHcCeutt\nbfjDp/VbDiKiBipjQBeRx0RkvoiMD027XUQmi8hYEXlFRNrWbTEBtNzShhvW1/mqiIhclE2G/gSA\nYyLT3gfQXVX3BPA9gGsLXK402G2RiChOxoCuqp8AWBSZ9p6qVnujXwHoXAdli5bEG9TU/aqIiBxU\niDr0CwC8nepFERkgIhUiUlFVVZX7WvwLinhhERFRrLwCuohcD6AawOBU86jqIFUtV9XysrKyPNam\nkSEREYXlfD90EekH4AQAvVU3YdrMDJ2IKFZOAV1EjgFwNYBDVHVVYYuUgjJDJyJKJ5tui88B+BLA\nriJSKSL9AdwHoDWA90VkjIg8VMflDDBDJyKKlTFDV9W+MZMfrYOyZMBGUSKidNy5UpRVLkREabkT\n0JmhExGl5U5AZ4ZORJSWOwGdGToRUVruBHRm6EREaTkU0GsSh0RElMDBgM4MnYgojjsBnfdyISJK\ny52AzrstEhGl5VBA9+vOGdCJiOK4F9DZKEpEFMvBgM4MnYgojnsBnVUuRESx3AvozNCJiGK5F9CJ\niCiWewGdGToRUSyHArrfD52ZOhFRHPcCOhtFiYhiORTQWeVCRJSOewGdGToRUSz3AjozdCKiWO4F\ndGboRESxMgZ0EXlMROaLyPjQtPYi8r6ITPWG7eq2mEDwF3Ts5UJEFCebDP0JAMdEpl0DYLiq7gxg\nuDdet3j7XCKitDIGdFX9BMCiyOSTADzpPX8SwMkFLldcSSJDIiIKy7UOvYOqzgEAb7h1qhlFZICI\nVIhIRVVVVY6rAzN0IqIM6rxRVFUHqWq5qpaXlZXls6TIkIiIwnIN6PNEpCMAeMP5hStSjHBWzkZR\nIqJYuQb01wD08573A/BqYYqTQkJAr9M1ERE5K5tui88B+BLAriJSKSL9AfwLwJEiMhXAkd54HdIU\nz4mIyFeaaQZV7Zvipd4FLku6QsQ/JyKijRy5UpQZOhFRJm4EdGboREQZuRHQwV4uRESZuBHQlVUu\nRESZuBHQwSoXIqJM3AjozNCJiDJyI6AzQyciysiNgM4MnYgoIzcCOnu5EBFl5EZAD2foE19ltQsR\nUQw3Anq0mmVF3d7ckYjIRW4E9GhGXtKkfspBRNSAuRHQoxm6OFJsIqJNyI3IyDpzIqKM3AjoUezp\nQkSUxI2AHs3QGdCJiJI4EtBr0o8TEZEjAT3aKMqATkSUxI2AzioXIqKM3AjozNCJiDJyI6AzQyci\nyiivgC4il4vIBBEZLyLPichmhSpYIgZ0IqJMcg7oItIJwEAA5araHUATAGcUqmAJkjJ0XmhERBSV\nb5VLKYAWIlIKoCWAn/MvUhxm6EREmeQc0FV1NoA7AMwCMAfAUlV9r1AFi6wsMs6ATkQUlU+VSzsA\nJwHoBmBbAK1E5OyY+QaISIWIVFRVVeW4NgZ0IqJM8qlyOQLATFWtUtX1AF4GcGB0JlUdpKrlqlpe\nVlaW25r8DL3llt44AzoRUVQ+AX0WgF4i0lJEBEBvAJMKU6woL6B33t8bZaMoEVFUPnXoIwEMAfAt\ngHHesgYVqFzRldnQ/2MLZuhERElK83mzqt4A4IYClSXdmmzg/7EFAzoRURK3rhRlhk5ElJIbAd0n\nfkBnHToRUZQbAZ0ZOhFRRm4E9I116AzoRESpuBHQN2bobBQlIkrFjYDODJ2IKCM3Ajrr0ImIMnIk\noHsBvKQ0cZyIiDZyI6CzyoWIKCM3AjqrXIiIMnIjoCdd+s8Li4iIotwI6MzQiYgyciOgRzP06B9e\nEBGRIwFd2ShKRJSJGwEdrHIhIsrEjYDODJ2IKCM3Ajp4LxciokzcCOh+GygzdCKilNwI6KxDJyLK\nyI2AnlSHzm6LRERRbgR0ZuhERBm5EdA1euk/AzoRUVReAV1E2orIEBGZLCKTROSAQhUsETN0IqJM\nSvN8/70A3lHVPiLSDEDLApQpGfuhExFllHNAF5E2AA4GcB4AqOo6AOsKU6woZuhERJnkU+WyA4Aq\nAI+LyGgReUREWhWoXImYoRMRZZRPQC8FsC+AB1V1HwArAVwTnUlEBohIhYhUVFVV5bgqZuhERJnk\nE9ArAVSq6khvfAgswCdQ1UGqWq6q5WVlZbmtib1ciIgyyjmgq+pcAD+JyK7epN4AJhakVMlrs0EJ\nLywiIkol314u/wNgsNfDZQaA8/MvUgw/I+eVokREKeUV0FV1DIDyApUl3YpsyDp0IqKU3LhSFOzl\nQkSUiRsBnRk6EVFGbgT06J9EM6ATESVxI6D7baDM0ImIUnIjoG/stui14TKgExElcSOgJ3VbZEAn\nIopyJKBHM3T2QyciinIkoHsZeQkbRYmIUnEjoLMfOhFRRm4E9I0ZehMAwoBORBTDrYAOsb7oDOhE\nREkcCeihC4uEGToRURxHArrfbbGEGToRUQqOBXSvygXstkhEFOVGQPcxQyciSsmNgJ7UKMoMnYgo\nypGA7jeKspcLEVEqjgT0cKMoe7kQEcVxLKAzQyciSsWNgB7+gwsGdCKiWG4EdF4pSkSUkSMBnRk6\nEVEmeQd0EWkiIqNF5I1CFCgWrxQlIsqoEBn6pQAmFWA5qbFRlIgoo7wCuoh0BnA8gEcKU5xUolUu\nvLCIiCgq3wz9HgB/AVC3KTPvtkhElFHOAV1ETgAwX1VHZZhvgIhUiEhFVVVVbitjLxcioozyydAP\nAnCiiPwA4HkAh4vIM9GZVHWQqparanlZWVluawpf+s9/LCIiipVzQFfVa1W1s6p2BXAGgA9V9eyC\nlSxhZdFGUdahExFFudEPPalRdEP9FoeIqAEqLcRCVHUE
gBGFWFb8CkL90EuasMqFiCiGGxl6tFG0\nhhk6EVGUIwE9XOXShHXoREQxHAno4UZRYR06EVEMNwJ6uFGUdehERLHcCOjRm3OxDp2IKIkjAd2v\nM+eVokREqbgV0Dc2ijJDJyKKciSg80pRIqJM3AvobBQlIorlRkCHWmYOWFBnoygRURI3ArrWABB7\nLszQiYjiOBLQwxk6b85FRBTHkYBeEwR01qETEcVyKKD7VS68sIiIKI4bAT2hUZQ35yIiiuNGQFdF\n0CjKm3MREcVxJ6CzDp2IKC1HAnpNYi8X1qETESVxKKB7z9kPnYgolhsBHeyHTkSUiRsBnf3QiYgy\nKq3vAmSl8/5ASVN7LiVADQM6EVFUzhm6iGwnIh+JyCQRmSAilxayYAn2+h1w7L+8FfMPLoiI4uST\noVcDuFJVvxWR1gBGicj7qjqxQGWLxzp0IqJYOWfoqjpHVb/1ni8HMAlAp0IVLCXWoRMRxSpIo6iI\ndAWwD4CRhVhe+pWxHzoRUZy8A7qIbA5gKIDLVHVZzOsDRKRCRCqqqqryXR37oRMRpZBXQBeRprBg\nPlhVX46bR1UHqWq5qpaXlZXlszpvpWwUJSKKk08vFwHwKIBJqnpX4YqUAevQiYhi5ZOhHwTgHACH\ni8gY73FcgcqV2qb8T9HqdbxVLxE5I59eLp+pqqjqnqq6t/d4q5CFi9WkObAhEmgnvwksmVXY9VSv\nA24uAz64obDLJSKqI25c+h/WtIX1Q9+wPpj2/JnAwwcXdj3Va2w4clBhl0tEVEccDOgtbbh+pQ39\n6pfViwu7nppqG25YV9jlEhHVEQcDegsbrl9tw+q1dbMeP6AX81Wp1WuBqR/UdymIqEAcDOh+hu4F\n9A11FNDDVTrF6p1rgMGnAnPG1ndJiKgAHAzofoa+yoapMvS1y4F1K3NfT00jCOizv7VhYzh4UWEt\nmQWMeqK+S0ERDgb0SIbuN15G/bMzcOfuidMWTMu+G+KG6tzK55K13oW9TZrWbznIPU+cALx+aX5J\nExWcgwHdy9D9L1I0Q3/3euDf+9jztUuD6dM/Au7rCYx9Ibv1NIYMfY0X0BvDtlJhLZ9rQ95XqUFx\nL6C3aGfDF86x4f37J77+5X3AohnJ75s3wYY/j8luPY2hGmJjO0Qj2NZCWloJVE2p71LUL7+zQE0j\nOJN1iBv/WBTWaisbrltuFxRlzatqEUk/m68xBDn/FgqNYVsL6e5f2PDGpennK2Z+Zs5uvQ2Kgxl6\n++D582cmvnbjFqnf5wcvyXKTM1VDqALrVmW3rFzN+iqoFqkL6siPcuWC+i4BJfESJCYDDYp7Ab1J\njicVWuAM/ZM7gH90BFYvya08maxZCjx2NDDkgrpZPhAc5OritHnyW8CPX+a/nB8+A27f0ZZHDQ/b\nXxoU9wI6APT+W/bzzh1nw41fvCwC+oJpwPC/p5/ni3/bcN2K7MtSG2u80/nZo/JfVvVaYOH05On+\nQa4uMvTn+wKPH5P/cuZ8Z8OZn+S/LCo8F3qDNaIb7LkZ0Ftvm/28D/3KhrNH2zCbVvnHjgYqv04/\nj9/lrzbBcNSTwIRXgsbIdAqZ+b95BfCffZOXubHKJU2WVVMDjHkOGP+yVWkVqlyzvrLlVX2ffr4m\nzbwy1tEFZPlgdUPDr65bPhe4qS0w+pn6Lskm4WhA3yZxvOWW6effUA3M8zL1r+63Hi/vXAusCP2D\n0qpFFlzevhpYVYs622x/1BuqgdcHAi+dB7xykU2rmpK6WmKNHzgVGDckv+Dxw2c2DNdFhzOrj28F\n7u8FrJif/N7RTwHDLgKGnG/jcZl+NqJZkt99dObHoTKtTw7wJV4VW3WWgWPCMOCZU+uufaMmdC/+\nbPpgr1sFrK2js7iGoKFXufg93r59qm7Xs2I+8OBBwOIf6nY9GbgZ0LsckDhe2iL9/GNfAFaFbt71\n4IHAVw8Ab1wWmnYQcP9+wMiHkt8/chBwU3vgh8+t+iIc+Px+8CNuBb74T+L7Vi4IfvSLZwbTp39o\nw/v3T10t4a9j9WJgaP/EL+T37wGLf0y9vVHNWnvLWmTDVYuAz+8JXq+aDFRNAia9lvzeBVMjE9S6\nfn77VOJBobLCpm+oBu7ukfiWhdMtS5r0RjDNP0CVhNpE3vur7YOls5Pni8vQ169JPNCpAi/1A6Z9\nAIx7KXn+VH4eDYx8OLt51y0PPc8ioN/TA/hnJ/scV0T+gnHVoqA/d5xFMxr+WcCyn9OfcdbUBNWH\n9cKrYl38Q+2qXqrXAW9eFd8FOs74ocC88cCX99e6hIXkZkBvuhmw5c72fMAI4OQH0s9fvTrxh+hb\nOM1+MBNeAZb/nPr9b//ZqieeOA64eWvgjp2D15bPBWaMAEb8wwLSxunzrDFv0GE2PuurUPlbAtOG\npy/z9+8kjr95hX0pv3oQePY0YNChwL+6AC/2syAavh/LupXAzE+D8eZeQPcPErd1Az78v+R1zh0f\nPK+pAYb+3oJj2PrVwKBDgNf+B/jw78DowcArfwQe6W3T1ywFlobuTT9uSNAbaUqoYdNviA0H9Ble\nth6+c+bGNoqYto9bOgCPHBGM+/XtgJ0NLZ8HzJ8c3w5ROcqCEWCf5dt/SfzBr14CPHCgVQuFzw7C\nwWlllf3o0wUs/2zvvnLggV8mvnZbN+DOXePft3yuXSD3wY2pl+2b+gGwcmHm+aJmfZX4WY99Cbhl\nW2DS69mfiT1/JvDcGYnTwp/j5/fY9zR6MNsUJgwL2tBWzAu+f5WjbL9WVgTzzpuY+Bud9gHwzX+B\nD25KXu64IUHS8fV/LSGI3pKknrjXD9139hDgm0eBbfYCSjIcl1LVn61aCLx6CTD2+eTXOpUDm3cA\npmTo6/7sacnTvv4v8NZV9nzBFODhQxJvUdCspZXdN2OEdafs5t3TXRWYPyl5uffuFTz3s+2Jw4DW\nHYGRD9r4vucG2fyfvgHKdgFaeVVSL5wF/CEU6KMmvwmUX2BVUh32AMa9mDzP4ND2TnodWHRv4uvR\nA+fQ/sFz/6IwIAjoYwbbrQf2CgWF+ROBZ04Bzn0NmOVVSY17Eej9v0DbLonLnzPGPssfvwB2i/xh\n1op5wMO/tudXTALabGtnWdv0AB453A6sF4YOrOtXAc1aWQPsk78Jpq9eDLTuYM/DB7iRDwHfPWc/\n/GtmAZul6TYL2PetsgIYeiFQ2jyYvrTSDgwnPwC09Lrl+gEjXCXlq6kBPvgbsO95QPPN7QZrnfcH\nLnw/cb7lc63sW+8OfP8usEVna0PquKcdpB47GtiuF9D/XTvTfPlCe98LZ9swrp/9lHeSbxUxY4R9\nZ6vXWqb66sXAVdOAzcvsOwIAi6bbeHQ7Vi2wg7q/3TU11vtq9WIAavujSTNb57gh9t3uF3MmuXA6\nUPkN8ItTgNJmwdlamF8dMs37nKa+B3Qut+cPHpC4zf4f5jSNnP2vWxl8p6+bE/zOT/Z+f35V38e3\nAR/dAvzm30D
PSDnqkLsBvV1X4KhQltmiXep7ov/sNYh2ORCY9UUwfWVVfDAHbEf6QbM24vrCz4lc\nnbpoRuKp3FMn2fDkh+wAMuWd2tVN+j8aILFqZtaXFtD9+98A6W99sHJ+EABTqQ6dXnfcO/mUdFWa\nz2zMYOCom63rqF+VMOtLe3T9ddDA9vLvbRjNaIf/Heh9g51xhDPIN6+woX8mEt4e3127A3+eYWdZ\nO/a2aetXBT9kwA5kzVsnBnMAuHMXoHsfoM+jwBuXB9PDVQ0/fQ3sdAQwuI8Fth59gJ7nJX8GL52f\neAYDBBcqjXoc+PWVlm0vq7Rpc8cBn94F7HehnRUdeo0dDL74j2XmR3oZZOXXwLI5QJuOdlby/t+A\nqe/aazcsAZ49PVjftvsAW3lnBrO9LDXbKrznfhc/fWh/q1LzD3xLZlkA36yNjb9+qfVOWzTTDjCd\n9rVqrhH/DJax89GW+PgHsc7722e8Zilw0adBIP34dutl9tuHgV2Otqq+Rw6311q0s2RhauTgBthZ\n7LTh2Hi253fb/fGL5Hn9Nqxpwy3Ref5MYKcjsbH/PWDdln1+9dvMT6zN6qNbvO0eyICekysm2Y/q\nqRODaSfeB7x2iT0/+2X7Et3aNbvlNW1hBw0/Q2zRPrcAXxvDLsrtff6PP+rn0ZYxhbO8qsnx827R\nJTnQZDLh5eRp76fpUrp6MfD0ycAJ9yTXu969R+b1jXspfd14tFG34vHE8fkTbTg9RXXXo0cCnfeL\nf238EDtrCicO/u0kAAvkh14XZPA/fArsfmLyctKdTW6oBia+Brx4TuL04TfZA7Azsl4X2/O1yxMb\nuv3yRA+ESyLB+ufRQZIDWPXcsD8ml2fcEAtOo5+2IFneP3ke3/ihNvSrseZPBCoeDfbJ0tl2TYV/\nptpx7+T95R+AfOGeZrduHzz/6GYbPt8X+MVvEw/k4QNX1Njn7XHY9Ta+ZpmdXfgJFWBnLlIS7OOV\n84Mqw2kxBwnf0p9suGoB8MTxia+tXpx4dlqHRDdhH83y8nKtqKjIPGOuqtdaHXfP84GDLgXadwNu\n29GOnld9b9lCuqtJw7brZad2N29t4wf/Gfjk9sKWN5cgCljQqfwm83xtOgHLQg2MbTrbl2t9TGNe\ndN58pTtjAqxhe1YBLjyqDyWlDeMeJs1aA4f/FXjnahv/xSnxB9ljb7M2gsaspGlw1rtj79QH9bOG\nWhVWoR12PXBI7vtAREapanmm+dxsFE2ltDlw3c/A8XdaMAeAgd8C180OTv22immE6vdG8rRWW9ny\nSjez8Z7nB6fQe54B9E9ztA47NVRXftk4oHnogNK8tS03LJzVXTEZOP6uxNdbd7Rqh3Q67wccODA5\nQG++dXIw//2H9tj+wPTLBKzKytc1VDVzwbvA2UMT591ql/TLKkgwF+CwvyZPa1UWO3fBFCqYh6vC\ncrFueRDMgfhgDmTfg6eh2vvs/Jfh/46B1MEcSAzmW+4MbLFd/usGrAomm+tP8lRcAR2wRpSSJsH4\nZlskjp/1ErDn76y+DgA6dAe6/ip4/dRHgZPuDxo5OnS3YasyoP0OwTrCX5Dz3gQueC8YF299u52Q\nGCjbdklc1+lPJu/k3z4EXDoW+MtMqw/drz9w+tPAmS/aWccF79gy+jxu6w3r0B3o/4HNe+DAyAcj\nwAKvj3f4oNappz1OeiCoW/a16ZQ4b3noNgStyqxxEbC6zjad7XnXX1tjkV8VcNYQpHTAJbYfdg2d\noh5za+r5AWDLnYLn+55jdc7nvhpMu3IKcPmE5PdlcnqKfsrH3m5nUhePDKaV7R4/b22c9gRwbSVw\n2fjk+wv1eQzYqy/Qw2uAbp7lWWUqi6bbd/eiz+w7mc6Oh2de3jmvxCdB2dh8G8tWE7oap7l6u/wC\n4OAr7fmWOyXuh7C+L1iDZt9Qm9hv7gWOugUY8DFwYZq/WvzTN8AlMT2huvQCLv0OOO6O+PftdKTF\ni/28xuR9+wFXhq6jaNctcf4lOZyN11JedegicgyAewE0AfCIqv6rIKWqS+22B04ZZPWV414Eepxu\njXRnPAu03xHYerfE+c96yS4AKm0GdD/VupHt+bugt8VpTwZB+sLhVtWwWVvL4vwGopMewMbGlJPu\nA+7/xurm2m4PHHadNRIdODCoX23WKrEMe3hZ+y5HB9O6nxJ0D+t2sFUR7d03OOgA9uNZMdcOaue+\nCkx8FfjsbvthLIjc/rW0GbD3mZa97HMOcMSNwZ0t79jFeozsdrydISyfY3WXux5rDcslJfa5XfSZ\nHVREbJsWTQe2P8ga5W5qG6xrwAigQ4/gvjyrl1id82ZtgW6/DrLOVmUW2PY4OWj08rsQHnWL/ZBK\nSoAdDgUGjrb+/f5nfvlEawdZv9rq53uel/gPO1dMtgasYX8EBnwEdAz1IDrpAXtv9Rr7TH45wKZf\nN8fqqbfpYfWpL//BznhuXGo/1nt6AHufZXXS3Q4GOve06pl9z7VrB2Z8ZNsDDdbXdjvg2tnAj5/b\ntRGHXG2BpPuptn97nBa0HRx/px1gt+hsPXbu7p7Y8AtYVrlwavC98G+ZsPuJVu4efYDJb9hy2+8I\nHHat1fs/42Wmh1wdXCdR3t/aXxZOA/q97jXEvgds90v7ju59NjDG60G29R62Td89l9zeFK7iOPoW\nK0OvP1qPm8Uz7Tvzf953bYsuwB8+tiuxP7/X2gva7wD870I78JWU2EFpcuSA4vca2vXYoIppq12B\n7UON3tfPs7aDjnvaAeXvXr122S7JfdTb72iBvKQJsP/vgeZtgFcGJM5z/B3Wztajjx2kWrSz7/7B\nf7b5u/7K2lY2VNt/Myz+EShL0U21UFQ1pwcsiE8HsAOAZgC+A7BHuvf07NlTSVVralSr1xVmWcvn\nq65dGf/a3AmqDx9i8/hmfqq6apHqm1epjh6cOH/1etVP71JdOjtx+uolqotnBeOLZmYu15plqvMm\nBeOT31L94n7V0c9mfm/V9zb/hg3BtDljVWd9rfrhLao3tFFd8lPm5fgWz7Jtq6ywz2Llwvj5XjhX\n9YkTsl/usrmqMz8LxpdUqq5Znv37szX5bdvmBdMSpw8+3abf0Eb19l1s+M1jqh/fpvrhP2yeVYtU\nRw5K3OaamuR1zBmnOn2EPa9enzjPigU2XL3E5vNtqFad9qHqt08nLmvNMtXvXlSd+Jrtr7UrVP+z\nn+qL59l74iybqzr728yfhe/bZ+w7UDlK9f0brMxh61ZlXsakN1UnDAvGh11sn+FrA+P34+S3VYf9\nSXXi66qPHKm6fk12ZV2z3H53q5dkN38MABWaRVzOuVFURA4AcKOqHu2NX+sdIP6Z6j113ihKxa96\nnWXOfptIY6GafKfQpZXWUN/rYjvrmvAKsPtvEvu4U/bWr7ZulR2y6HG1iWXbKJpPlUsnAD+FxisB\n/DI6k4gMADAAALp06RJ9mah2SpvZo7GJu+3zFp2tntjXo8+mK08xatqi
QQbz2sinUTSuJSMp3VfV\nQaparqrlZWV13PuAiKgRyyegVwII9+npDCDNDVGIiKgu5RPQvwGws4h0E5FmAM4AEHOTBSIi2hRy\nrkNX1WoRuQTAu7AeL4+pag4dgImIqBDy6oeuqm8B4J89EhE1AMV3pSgRUSPFgE5EVCQY0ImIisQm\nvX2uiFQBqMWfYSbYCkAt/r25KHCbGwduc+OQzzZvr6oZL+TZpAE9HyJSkc2lr8WE29w4cJsbh02x\nzaxyISIqEgzoRERFwqWAPqi+C1APuM2NA7e5cajzbXamDp2IiNJzKUMnIqI0nAjoInKMiEwRkWki\nck19l6cQRGQ7EflIRCaJyAQRudSb3l5E3heRqd6wnTddROTf3mcwVkT2rd8tyJ2INBGR0SLyhjfe\nTURGetv8gnezN4hIc298mvd61/osd65EpK2IDBGRyd7+PqDY97OIXO59r8eLyHMislmx7WcReUxE\n5ovI+NC0Wu9XEennzT9VRPrlU6YGH9BFpAmA+wEcC2APAH1FxO270JtqAFeq6u4AegH4k7dd1wAY\nrqo7AxjujQO2/Tt7jwEAHtz0RS6YSwFMCo3fCuBub5sXA+jvTe8PYLGq7gTgbm8+F90L4B1V3Q3A\nXrBtL9r9LCKdAAwEUK6q3WE37zsDxbefnwBwTGRarfariLQHcAPsz4H2B3CDfxDISTb/U1efDwAH\nAHg3NH4tgGvru1x1sJ2vAjjPz3ThAAACwklEQVQSwBQAHb1pHQFM8Z4/DKBvaP6N87n0gN03fziA\nwwG8AfujlAUASqP7G3YnzwO856XefFLf21DL7W0DYGa03MW8nxH8m1l7b7+9AeDoYtzPALoCGJ/r\nfgXQF8DDoekJ89X20eAzdMT/1V2neipLnfBOMfcBMBJAB1WdAwDecGtvtmL5HO4B8BcANd74lgCW\nqGq1Nx7ero3b7L2+1JvfJTsAqALwuFfN9IiItEIR72dVnQ3gDgCzAMyB7bdRKO797Kvtfi3o/nYh\noGf1V3euEpHNAQwFcJmqLks3a8w0pz4HETkBwHxVHRWeHDOrZvGaK0oB7AvgQVXdB8BKBKfhcZzf\nZq/K4CQA3QBsC6AVrMohqpj2cyaptrGg2+5CQC/av7oTkaawYD5YVV/2Js8TkY7e6x0BzPemF8Pn\ncBCAE0XkBwDPw6pd7gHQVkT8e/OHt2vjNnuvbwFg0aYscAFUAqhU1ZHe+BBYgC/m/XwEgJmqWqWq\n6wG8DOBAFPd+9tV2vxZ0f7sQ0Ivyr+5ERAA8CmCSqt4Veuk1AH5Ldz9Y3bo//VyvtbwXgKX+qZ0r\nVPVaVe2sql1h+/FDVT0LwEcA/L+sj26z/1n08eZ3KnNT1bkAfhKRXb1JvQFMRBHvZ1hVSy8Rael9\nz/1tLtr9HFLb/fougKNEpJ13ZnOUNy039d2okGXDw3EAvgcwHcD19V2eAm3Tr2CnVmMBjPEex8Hq\nDocDmOoN23vzC6y3z3QA42A9COp9O/LY/kMBvOE93wHA1wCmAXgJQHNv+mbe+DTv9R3qu9w5buve\nACq8fT0MQLti388AbgIwGcB4AE8DaF5s+xnAc7A2gvWwTLt/LvsVwAXetk8DcH4+ZeKVokRERcKF\nKhciIsoCAzoRUZFgQCciKhIM6ERERYIBnYioSDCgExEVCQZ0IqIiwYBORFQk/h90hukgHXxhmQAA\nAABJRU5ErkJggg==\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%matplotlib inline\n", "\"\"\"A very simple MNIST classifier.\n", "\n", "See extensive documentation at\n", "https://www.tensorflow.org/get_started/mnist/beginners\n", "\"\"\"\n", "from __future__ import absolute_import\n", "from __future__ import division\n", "from __future__ import print_function\n", "\n", "import argparse\n", "import sys\n", "\n", "from tensorflow.examples.tutorials.mnist import input_data\n", "from bokeh.plotting import figure, output_notebook, show\n", "import matplotlib.pyplot as plt\n", "\n", "\n", "import tensorflow as tf\n", "\n", "FLAGS = None\n", "\n", "\n", "\n", "# Import data\n", "mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)\n", "#mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)\n", "\n", "# Create the model\n", "x = tf.placeholder(tf.float32, [None, 784])\n", "W = tf.Variable(tf.zeros([784, 10]))\n", "b = tf.Variable(tf.zeros([10]))\n", "y = tf.matmul(x, W) + b\n", "\n", "# Define loss and optimizer\n", "y_ = tf.placeholder(tf.float32, [None, 10])\n", "\n", "# The raw formulation of cross-entropy,\n", "#\n", "# tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),\n", "# reduction_indices=[1]))\n", "#\n", "# can be numerically unstable.\n", "#\n", "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n", "# outputs of 'y', and then average across the batch.\n", "cross_entropy = tf.reduce_mean(\n", " tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n", "#train_step = tf.train.GradientDescentOptimizer(0.8).minimize(cross_entropy)\n", "train_step = tf.train.AdagradOptimizer(0.8).minimize(cross_entropy)\n", "\n", "sess = tf.InteractiveSession()\n", 
"tf.global_variables_initializer().run()\n", "steps = 1000\n", "cross_entropy_array = []\n", "accuracy_array = []\n", "ACCURACY_DURING_TRAINING = 0\n", "\n", "# Train\n", "for i in range(steps):\n", " batch_xs, batch_ys = mnist.train.next_batch(100) # batch_xs is input and batch_ys is the corresponding classifier \n", " #result\n", " cross_ent = sess.run([train_step,cross_entropy], feed_dict={x: batch_xs, y_: batch_ys})\n", " cross_entropy_array.append(cross_ent)\n", " #print(cross_ent)\n", " # Print out accuracy as we train\n", " if(ACCURACY_DURING_TRAINING):\n", " correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n", " accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", " accuracy_array.append(sess.run(accuracy, feed_dict={x: mnist.test.images,\n", " y_: mnist.test.labels}))\n", " if(i%100 == 0):\n", " if(ACCURACY_DURING_TRAINING):\n", " print(\"Accuracy at step \",i,\" = \",accuracy_array[i])\n", " print(\"Cross entropy at step \",i,\" = \",cross_entropy_array[i][1])\n", " \n", "print(\"Cross entropy at step \",i,\" = \",cross_entropy_array[i][1]) \n", "\n", "# Test trained model\n", "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", "print(\"Accuracy is\")\n", "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n", " y_: mnist.test.labels}))\n", "\n", "#Plot the training accuracy and cross_entropy\n", "plt.plot(np.arange(0,steps),cross_entropy_array)\n", "if(ACCURACY_DURING_TRAINING):\n", " plt.plot(np.arange(0,steps),accuracy_array)\n", "\n", "sess.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Same example with Gradient Descent Optimizer " ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz\n", "Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz\n", "Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz\n", "Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz\n", "Cross entropy at step 0 = 2.30259\n", "Cross entropy at step 100 = 0.534261\n", "Cross entropy at step 200 = 0.364402\n", "Cross entropy at step 300 = 0.270909\n", "Cross entropy at step 400 = 0.294025\n", "Cross entropy at step 500 = 0.321305\n", "Cross entropy at step 600 = 0.320606\n", "Cross entropy at step 700 = 0.449078\n", "Cross entropy at step 800 = 0.354322\n", "Cross entropy at step 900 = 0.313897\n", "Cross entropy at step 999 = 0.398348\n", "Accuracy is\n", "0.9141\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXcAAAD8CAYAAACMwORRAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJztnXmYFNXVxt8zCyCLIDIisoqgRnEN\nKqhJjCtRozFRgzFRUeNn4ppoFo1RNBqTuEaNGBONmkRjRKOIRFyjgooOiIgsMq4g26DIsMMw5/vj\n1p2urr619Ez3NNW8v+fpp6urbnXd6up677nnnntKVBWEEELKi4pSV4AQQkjhobgTQkgZQnEnhJAy\nhOJOCCFlCMWdEELKEIo7IYSUIRR3QggpQyjuhBBShlDcCSGkDKkq1YF79OihAwYMKNXhCSEklUyd\nOnWZqtbElSuZuA8YMAC1tbWlOjwhhKQSEfk4STm6ZQghpAyhuBNCSBlCcSeEkDIkVtxFpIOIvCEi\nb4vIuyJytaNMexF5WETqRGSKiAwoRmUJIYQkI4nlvh7Aoaq6F4C9AYwQkWGBMmcBWK6qgwDcAuD3\nha0mIYSQfIgVdzWs8j5We6/gEz6OB3C/tzwWwGEiIgWrJSGEkLxI5HMXkUoRmQ5gKYBnVXVKoEhv\nAPMBQFUbAawAsK3je84RkVoRqa2vr29dzQkhhISSSNxVdZOq7g2gD4D9RWRIoIjLSs95fp+q3q2q\nQ1V1aE1NbAx+NDMeAdY1tO47CCGkTMkrWkZVvwDwPwAjApsWAOgLACJSBaArgM8LUD83i2YAj50N\njLugaIcghJA0kyRapkZEunnLWwE4HMCcQLFxAE73lk8E8IIW88nbG9eY94aFRTsEIYSkmSTpB3oB\nuF9EKmEag3+r6ngRuQZAraqOA3APgL+LSB2MxT6yaDUGALFtUvHaD0IISTOx4q6qMwDs41h/pW95\nHYCTClu1KDwXvza13SEJISRFcIYqIYSUIekUdxtCX0S3PiGEpJl0intz5CXFnRBCXKRT3Ju1neJO\nCCEu0inutNwJISSSdIo7fe6EEBJJOsWdEEJIJCkXd1ruhBDiIp3ibt0x1HZCCHGSTnFvVnWqOyGE\nuEinuDdb7hR3QghxkVJxtzllKO6EEOIi5eJOCCHERbrFnW4ZQghxklJxtz53WvCEEOIipeJOUSeE\nkChSLu50yxBCiIt0izt97oQQ4iTd4k7LnRBCnKRT3MFJTIQQEkU6xV2ZfoAQQqJIqbgzWoYQQqJI\nt7jTLUMIIU5SKu6cxEQIIVGkVNwp6oQQEkWsuItIXxF5UURmi8i7InKRo8whIrJCRKZ7ryuLU10P\nhkISQkgkVQnKNAK4RFWniUgXAFNF5FlVnRUo94qqHlv4KjpoFndpk8MRQkjaiLXcVXWRqk7zllcC\nmA2gd7ErFl0pT9yF4k4IIS7y8rmLyAAA+wCY4tg8XETeFpH/isjuBahbBHTHEEJIFEncMgAAEekM\n4FEAF6tqQ2DzNAD9VXWViBwN4HEAgx3fcQ6AcwCgX79+La50c7SMpHM8mBBCik0idRSRahhh/6eq\nPhbcrqoNqrrKW54AoFpEejjK3a2qQ1V1aE1NTctrTZ87IYREkiRaRgDcA2C2qt4cUmZ7rxxEZH/v\nez8rZEWzoM+dEEIiSeKWOQjADwC8IyLTvXWXA+gHAKp6F4ATAfxIRBoBrAUwUrWI00dpuRNCSCSx\n4q6qkxCjoqp6B4A7ClWpWDasbrNDEUJIGknniOSES8073TKEEOIkneLeDMWdEEJcpFvcGQpJCCFO\n0q2OdMsQQoiTdIt706ZS14AQQjZLUi7ujaWuASGEbJakU9y3HWTet96htPUghJDNlPSJ+/KPgM/q\nSl0LQgjZrEmfuC98K7PMZ6gSQoiT9Im7VPo+UNwJIcRF+sS9ojK+DCGEbOGkT9z9ljvdMoQQ4iR9\n4l5BtwwhhMSRPnH3pxyg5U4IIU7SJ+603AkhJJb0iTt97oQQEkv6xJ2WOyGExJI+cReGQhJCSBzp\nE/cKumUIISSO9Il71gM6KO6EEOIifeJOy50QQmJJn7gztwwhhMSSPnGn5U4IIbGkT9xpuRNCSCzp\nE3da7oQQEku6xZ0QQoiTWHEXkb4i8qKIzBaRd0XkIkcZEZHbRKRORGaIyL7FqS7oliGEkARUJSjT\nCOASVZ0mIl0ATBWRZ1V1lq/MNwAM9l4HABjjvReeLLdMUY5ACCGpJ9ZyV9VFqjrNW14JYDaA3oFi\nxwN4QA2vA+gmIr0KXluAljshhCQgL5+7iAwAsA+AKYFNvQHM931egNwGACJyjojUikhtfX19fjW1\ncECVEEJiSSzuItIZwKMALlbVhuBmxy45yquqd6vqUFUdWlNTk19Nm4/E9AOEEBJHInEXkWoYYf+n\nqj7mKLIAQF/f5z4AFra+es7KZJZpuRNCiJMk0TIC4B4As1X15pBi4wCc5kXNDAOwQlUXFbCe/hr5\nlinuhBDiIkm0zEEAfgDgHRGZ7q27HEA/AFDVuwBMAHA0gDoAawCMKnxVPRjnTgghscSKu6pOgtun\n7i+jAM4rVKUiad8FOOYm4K1/ABvWtMkhCSEkbaRvhioA7Hc20K0/6JYhhBA36RR3wAysckCVEEKc\npFfcIaDlTgghbtIr7rTcCSEklPSKOy13QggJJb3iLpEBPIQQskWTXnEH6JYhhJAQUizudMsQQkgY\n6RV3DqgSQkgo6RV3Wu6EEBJKesVdhNpOCCEhpFfcabkTQkgo6RV3+twJISSU9Ip7dKJKQgjZokmx\nuAN0yxBCiJv0iruAbhlCCAkhveLOAVVCCAklveLOAVVCCAklveJOy50QQkJJr7jTcieEkFDSK+4M\nhSSEkFBSLO4A3TKEEOImveJOtwwhhISSXnHngCohhISSXnGn5U4IIaHEiruI3CsiS0VkZsj2Q0Rk\nhYhM915XFr6aziODljshhLipSlDmPgB3AHggoswrqnpsQWqUFFruhBASSqzlrqovA/i8DeqSJ7Tc\nCSEkjEL53IeLyNsi8l8R2b1A3xmNMM6dEELCSOKWiWMagP6qukpEjgbwOIDBroIicg6AcwCgX79+\nrT8yDXdCCHHSastdVRtUdZW3PAFAtYj0CCl7t6oOVdWhNTU1rTwy3TKEEBJGq8VdRLYXMT4SEdnf\n+87PWvu9CQ7MAVVCCAkh1i0jIg8BOARADxFZAOAqANUAoKp3ATgRwI9EpBHAWgAjVdtCdWm5E0JI\nGLHirqqnxGy/AyZUsm2h5U4IIaGkd4YqAFruhBDiJr3izlBIQggJJb3iDtAtQwghIaRY3DmgSggh\nYaRX3DmgSgghoaRX3Gm5E0JIKOkVd1ruhBASSnrFnZY7IYSEkl5xp+VOCCGhpFfcwTh3QggJI8Xi\nDtAtQwghbtIr7nTLEEJIKCkW9woASoEnhBAH6RX3Ci+h
ZdOm0taDEEI2Q8pA3BtLWw9CCNkMKQNx\n31jaehBCyGZIesW9stq8b6K4E0JIkPSKO33uhBASShmIOy13QggJkl5xt24ZDqgSQkgO6RV3a7nT\n504IITmkX9xpuRNCSA4Ud0IIKUPSK+4MhSSEkFDSK+4VdkCVoZCEEBIkxeJead4ZCkkIITnEiruI\n3CsiS0VkZsh2EZHbRKRORGaIyL6Fr6YDumUIISSUJJb7fQBGRGz/BoDB3uscAGNaX60EcECVEEJC\niRV3VX0ZwOcRRY4H8IAaXgfQTUR6FaqCoVRwEhMhhIRRCJ97bwDzfZ8XeOtyEJFzRKRWRGrr6+tb\nd9RmnzvFnRBCghRC3F1PqnY+HklV71bVoao6tKampnVHbRZ3RssQQkiQQoj7AgB9fZ/7AFhYgO+N\nRjxxV4o7IYQEKYS4jwNwmhc1MwzAClVdVIDvjYaWOyGEhFIVV0BEHgJwCIAeIrIAwFUAqgFAVe8C\nMAHA0QDqAKwBMKpYlc2C+dwJISSUWHFX1VNitiuA8wpWo6SI1+mgW4YQQnIogxmqFHdCCAmSXnHn\ngCohhISSXnH3W+4rFgANxQ/QIYSQtBDrc99s8Vvut+xulkevKF19CCFkM6IMLPem0taDEEI2Q9Ir\n7oyWIYSQUNIr7q5omdlPAi9eX5r6EELIZkR6xd0VLfPw94GXflea+hBCyGZEesU9Kisk/fCEkC2c\n9Iq7tdzfvCd323pGzRBCtmzSK+7Wcm/4NHfbmqhnixBCSPmTXnGXiKpvWBW//0PfA27c2SyvcDQQ\nhBCSYlIs7q5nhHg0bojff+5TwKolwIcvA7fsBsx8rHB1I4SQEpNecY9i0/rsz02bgDf+AmzamFt2\nySzz/slrxa8XIYS0EeUp7o0BcZ92PzDhUuDV23PLVnoP2t6UwNonhJCUUJ7iHhTqdV70zNrluWUr\n23n7OKz6QrKuofjHIIQQj/IU96Dl3vwMb8dzu9tK3H/XF3hoZHGPQQghHuUj7jvsm1kOWu528FVd\n4l7l3qcY1D1X/GOkian3A1d3BzY5JqKRLZdNjcAT5wOfvV/qmqSa8hH3vXxPA8yx3CNoK8ud5DLx\nVyZ9xMbVpa5JNqqcK1FKFrwJvPV34PEflbomqaY8xP2iGUBVu8znYLQMIsImbbx8vpb7uhXA4nfy\n24dkE9WjKiWTbgb+sCPnP5SciPuWxJLeh3UAwIjfAz13A7bpD1S2z6wPxrlHiYjNKpmvuD/wLWDh\ntC3jASEb1wLaBLTrVOAv9q6LKz9QKZn9pHlfuRjo2ru0dUkztgdd1T66XA6bWWOfUtJtuQ87F9jx\nq2bZb7mvWhwQ8ogBVXWIe1MT8K9TgY9f9coo8NIN5nF+loXTWlv7zZ9NjcD6lcBNuwC/3aHw328v\ny+bmErONTUW6b4+Sc8Mg4Pq+Ld8/aqIiiaV8/r0V1ZnlyX8EJt+abD9ruW9YbSx+VRMyOWc88K/v\nmW2fvQ+8eK1JKZyzf0wGyvr3gGXzktUljI8ml8YHPPYM4Po+mVDSQtNSl1ixsde0It0d25KzvsHh\nIiVtRfmIezCfzHOjM9Z7lFvm0bPM+xefANfWAK/dkSlvu5X288K3cl0+GhD3cRcCY8/KfP7TfsAd\nQ/M6lSyaNgH3HQ08cHzLv6OlWPdE0dhM3TK2PjbzKGlb2noMZt2KsozYKh9xd01QWv6RF5HhCfDn\nEaFV674w79MfzNzcG9cA//tddpKyZ3+dvZ//YSENi8xs2JljvTp9kdcp5LB4pmlsAGDxjPByjeuB\n8T8FVtW37nhtjW0029Its2gGMLorMP/N8DLNjU2EyCydA7z9r2THXPwO8M7YxFVsU9Y15BddBgCz\nxgF3H9IGIlwkt8zs8UYbAHMOv+sHjDu/OMcqIYnEXURGiMhcEakTkV86tp8hIvUiMt17nV34qsaw\n53eB/gdlrxt7phHHT143n+c9E+9GUc22JP93PbBqaeaz/S6L/zF///1Z9jaXuyFMEJ75da7b566D\ngGevjK4vYG622ntyG562YuUSYPVnLdjRWu5tKO6znjDv778QXsY22FE9ijsPAP7zf8mOedfBmR7i\n5sbv+gL3HZPfPmNHmV5s0RrlIjcaD58KjPG0whp+bz9U3GOWgFhxF5FKAH8C8A0AuwE4RUR2cxR9\nWFX39l5/LXA94+nUAxg1IXuda9Dzs3nAxnXh36NNuX9av9W8uj63fBhNjod3/+f/zGDt6K7Z61+9\nzbhBWtI9bPTORyqA9QnSHReam3YGbhiY/37Fttyfv8a4yfysWWbeO20bvp81AFzXrxxZENGLiaIl\nD6dfuxx49GwTnLCuIXvbhjXZn4s5oGrduP5rfMse+V3z0V2BV24qbL0KSBLLfX8Adar6gapuAPAv\nACVwACdk8JG56+aMzyyvawD+/q3w/bUp12LzDygGxfzjyZnuaVCkwiw/Wx9XL2K110uY/0Z4HYPY\nHsKMh4HrewPL6tzlJv9x85r11zygWgRxX/yOufGm3Z+93j8wPbor8PINufva6xbVcFuCrokowyEN\nrF5m/iMfvAQsmBpSqBWN8utjgHceAW7Z3fQaLMvmAb/tBUwvsgXtv15rl2c3UCs+MYEVSbCNwPPX\nFK5uBSaJuPcGMN/3eYG3Lsh3RGSGiIwVkVbEP7WSk+4HjrsjfHvjuuj0vtqU23pvXBte/sGTgYmX\nm7DJ957O3hY3ULjO4ZO3x8onOsbeZPZ4nznEfe0XxsVz/zfd3/HeM8CUu4Hrehm/dJtQQLdM43rg\ntTszPZ/nRrvL2d/KNoAvXJs7SG5/x/Ur44/r/6/MfhK4riew5N3E1S4pruce3LYPcPu+wAPHAX89\nNHr/lgyEh0Ug2d9s7lP5+fJV87O2/Q327wcAawLuxKTnlO84RQlIIu6uvlHw138SwABV3RPAcwDu\nz90FEJFzRKRWRGrr64s0+NeuI9ClV/j2xjjLSnPFZqO/uyjAc1dnb3/9TuD9Fx1fFWP5rV4WXr8k\nVqMl6NuvqATqns/uGdjvC7NMHjzJjBlsXAO88efwYzVtMk+xyqdnEUZr3DKjuwJ/8/mKX74RmHgZ\nMMMb02hYFLKj99fd4BPuYLieteZcPTzVbPF5424TaQUAC6eb93cecR96js9tqGp6FisXh9SzDXCl\nfVgfcJUs/zi3jL1uLXFbVYRFINnfVLKX4/4bL/0BuKZ7rktH1Rgz9XOz1wfFO3gPJg3LjdWR0pNE\n3BcA8FvifQAs9BdQ1c9U1d4hfwHwZdcXqerdqjpUVYfW1NS0pL7JqO4Qvi3OGtMmYMyB2ev84r5m\nmZmenoXk+h+XzIq3AmyEj/8GjxJ3V2MA5P4h33sa+Me3gSl3ZdYFw0KjaNpkom9cNHxqrKt7jgDu\nPy537MDPkxcB/zzZvW3jOmClJ8Bxv1PDIuCL+bnrP56UWbbfZcUg7Drb38E/NhFmubu4uhvw9GWZ\nzxMvA/5xoln
u0DW7DkH+5ct/tHiG6dI/9sPwY4WxcgnwmxpgQS3w3kTgliHRluT6Ve5GJCiILv64\nZ/i2Qlru9v8ukmk0Pp4E/KaHGbxd+4W7vm/+xbwHr/fKRcYNaa+NJcd1GrhvVy01+8UFXth7zh9J\nt/id5G6dNiCJuL8JYLCI7Cgi7QCMBDDOX0BE/KbycQBmF66KLWDAwcBpT7i3jR0Vva9LVKfel1l2\ntewVlbn7jRlush5GYXsI86dk1m2MEPcbdgKmPZC7PviHbfDa3s8/MO+v3gHcOSy6LsH9a+9xb/PX\n68OXor9n6n3AvInubf/9eWZ5+UfAf84NF6ibdwVuHRJ9LNsoVm9l3sPaMFcP5vZ9TCNlc8mEWaR2\n/ZQx2evXLgf+fXomWimJWyGuEYriw5fM//D1MeZ3XDE/90Hx9XMzobj3HGFmGQfxGy0Lp0e7H7PI\nc37C9X2Bpy4xfvXYiWEOQ+mTKcDv+7v/w2G/tb3OwTrmfA7cOxMuNRZ/WDTV6s/MuIA/iAEwBsJd\nB5tgiSAf/M88Ca6NiRV3VW0EcD6AiTCi/W9VfVdErhGR47xiF4rIuyLyNoALAZxRrAonZuAhwEEX\nRZfp6RCMlnQ1mxqBTx2DT0tiEovNexa479hsq7TRu8HC3DJv/TN3XbDBaRZ774//zK8yA7UuggnQ\nokS7NREkG9dmXBdLZmbWT7jUhKJdu13Lv9vebDaPid+iyopA8n6TVT5L1g6YL3rbCGLYOYZZ5E0b\ngVmP+w4RY/WtqveJUgsiQpqt3Ijb90/7A/eOMMtLZ7nL+FNQ3/01k2Y3H5KK+/oG4M2/mv962MQw\nf88yzGr+wuEistfTL9JzJmREVsQYDTYyJ2c8LdAbWLXEvIe5XR77IfD4uWasBjCN1dT7gT97aVA+\ncLhnHzje/MfbmERx7qo6QVV3VtWdVPU6b92VqjrOW75MVXdX1b1U9euqOqeYlU6MPw2wi52+nrsu\naAEl5cOXc9dVtstd52fyrcBHrwBLfR2diVcYK3K6Q8QBYP7ruYOtQesj0m8YEJOVS4zFkZTWDCQ9\ncb4RkdXLwi2u9auAF683QrpiAXC7z8NnB0HnPJW7n+3xVHkuOb/wNfosUntcV0bPSbcYC7HRYcGu\nXxXe68uZQBdhuX/wEnDjoNzB9yhWLja9oOkPmcbRivvHkzNl6p7P3a8+pgNtcyV16Gbeg3M44ggK\n5RefREcLrVqc29AEf7t3/5OZuBfFugbgv7/MHG+lJ8oL3zLur0WeESEVxn1oI3OCDVKwt9LsonNc\nw/cmZsaa7LiKVAJPXhj/WwPGyGjDNCLlM0PVRZy4Vhc6y2Hw+Amz4fmtSPsnmfdMeHm/a2bSrcbi\n9GPF3SWgQZ/7TTsnq6PFJXxJ+egV7zsiBOCl35nXQyNN99gf+fOiZy3ZnD9Z9Qq6s3znaW/gxvVm\nnkMYCyIGiWc8DMydEL7dj63DnAnADYMDx/Biyu1vETUGsnGdsfLHnmXGLx4/1zSO9vsbPs3MtPRb\nhqGuCgWu3sZkNAUyPZbm9BwJe2XNA6o+o6KpCbh1j/jJWsHQ1Em35NY5zt0HAK/caNxjdlDYRvbk\nRHqJMYjsMeLE3Q6uL5uXm1PpwZOzB+KB6N7T5Nuyx6T++R2TSjrOn18gylzcq6O3W/9ssZjrsDBd\nrF2esTiTYP2WmzYCz12V22soViIu1dZZ7laAG9cj1Lq1Pui654CZj2Zv6z4wfCasPWfrOvHfdLbr\n/eTFmciWpDx+nhHpuP9SVl28Ojx5Ya47zIpY8+8YIe5/G2Gs/PUBkYlz+/gt6gZf7MOmDWZf6zqw\n4mUHKsPcUWs+zwiSXwz9QmlFMWkDaGmObMtzVmqoiywg3v7Gc8PqeHG3oZHPX23SErxwHfBpRAbY\nqMyhr96e/fmD/5n32/c1wl9kylzcA5b7zz8Ejrw283m/ts+S4GTdCqBzHv5mG04W9Bda7EDa0lm5\nCcfWfNbyiTZNm6KtblUjAmGzZK2gNa6Lti7DWDond27Ae89k6gaYm7dxffZNbc/3fYfrIo7p/zDd\n/LheoB/b0ARnMwMZYQ6K09+OAf5ymFmefBvw7FXGxQAA7bpkl33tTvdxP/R6A36L+nZf0roXfpNd\n3oq7FeYwy/0POwLPXGFCN6/bPvMf8Atl8/9CTB6dpEaAbYQjB6Fd20IaxaB4+/30q+tzt8fllHn5\nD8BfDg23tl1jCPXvmfewwePlH4bfuwWkzHOaBv4AHbtn/It7nAy075zf13XtZ2axFZp1K4BtBiS3\nKqUSuOcooGPIFHqbIC1sstZn84DuO7kHf6JoaoxuGCbfanyhk24GznYIqb3hF89Ei/KHzH0K2DaQ\n5uDBk4DTxmWE6flrcl0DG2Ms0yTkk/43SthsXax769NaMzjnD+sM5ggK/k/D/Lv3Hwt8/1Ggz36+\n4/migoKWZNDtEPX7vP6n3HX+8tZXrZvM779nwofBBwf/k/DSDWZ+QeT3OVj8DlDjiBqKRYFrtnFv\n2rq3N27gq/+jZwLnToqI6UcmbLaIlLe4d6oBvnxGdihjsHs99KzwsL8gpzxo/sRxM/fyZXV9ftZ0\nRUXGj9gS1q8yU73zRWMsd/+s0L8ellm+80DgpL+h+Qb4zznhf+6PJrnXW4ICBQBrP88IjSuiYtIt\n5gZsTWrhfCZabVqfnWzOj33OgN+95E+B4LJg83kC1uTbgG794sutXpYZdLTkG6PtcstYZoQkyAsy\ne5xxXdXsGl4mmM77xWvd5V66IXwbYCK0tt0pWb2S0rG76dX5z9/+jlH++Hx6gi2kvMW9ogL45h+B\nXY81icUA5Fjzx9wE7LA3MO6C+O+r7gh0akW4XhQbVgL9DgQ+eTW+7FOXtO5Yax0j9t36h4Sa+fdb\nbmKq82XpuyY0z0/YA0Ci0jKH8cgZQM89wrfP9qZlBN0b+ZBP1sBNG93RU36CfnRLcDo8kJ8QJBmM\nBNyRP/mmgchyy7RwLGb+FPOKShnygk+wX40oFyXsgKljoZ8d0NSYK+7WLRkl7m3wDIPy9rlbBh8B\n7LCP9yFgGYlkP8XJ0mk74PJFJl7eUlGZn288X/aOCd0sFMHZnsffCex8VPx+t+yeLAVxKXANMm4b\niFQJRjrkQ1LRBEyY4+xx8eVc3OSwYIsx1b1hYXyZOAoh7pakvYZnftXyY2zaUCRxD+jH6qVmoDaq\nt9e+FYZGQrYMcXcRNw3/wAtMnprvP5ZZV1Fl9nNlniwEQ06ML1MInv5F9ud2HdP/1KFg1x1owYOZ\nAfTZP76Mn92/7V4/KzBD+rCEjaLLeg5+VyFwDfbmy2t/MgO2k241zx5oDWETrQrJ63dmJnYVik0b\n3T2r/12f66oadl5mec/vFrYeDrZccffT4Hvw9WlPmAGpvbwBIf+gSDtvYMtvSe17unumq5/dEmZI\nbtcx+/OJ9ybbz8Ve3wNOdqQqcB63c/Z52oHa1hy/rXG5lJIOgo74fWZ5
20GB76iOdsXt+4Nkx4j7\nj7Q1hXgu7rxnzOD8c1flv+/RN2Z/Dsa/F4tChwkvnJY9T8Wyckl2b+YrlwJfOtYsV1RFD7YWCIo7\nAGzlGwnvOQQ4+zm3+6X91ubdf9FEgFH/jf7+ljxouf/BwM4jgPPeAA5LcPMMOdH0Mjp0NWJ9wpg8\nGpVO2d3Vrt5sPhtZFGSbAcm+18WhVyQvm3QSWBhhg7bb7QYcdHHm87BzMxPaKgPXqmljdIx7Un94\n1z6Z5WJPnismO+xbmO/ZvwUJ0zY3tnM9s8ijqRHo58uFU93B15Ms4kNIfGx54u6KRvjyKOCMCcAV\nS30Drw7shIUs/6IAHTzR336PjIW23e6ZIrs5UscGrcEzAwm2Rj1lRLdmF2DXBI9BO/hiYNBhwKXz\ngJ8lGJTs7gspbNcpk1MDAL41Bhh+fvZ4g5+wEMwkDE7g2z/yOpOX/9cR+XCSEBba16EbcMTVppdm\nf/efzDTzICx7+WbBRjXOUYNmfrbeIbPcfcdk+2yOnPZ4fJm25qjrS3PcKCNn3RfZ8z2qtsoYTW30\nWMktT9yz8kZ7VFQCAw5K7qMNWu4AcNmnJrbbWsAn+sIre+2ZG/cb9MdZd8CAr+Qez1qH7Rxx+SMf\nAn7xkWlYAHMO/pTH37zNRAsF8Vv17TpnD/507Q0cdZ35Xc54CrjwLWAfn/sh6LrIB38v6QjfU2z8\n3fSKSmD3iKdlWfrGZLrcqpv6+0zvAAAN+0lEQVTpAQWxgjzwkIx11bG7eVl6+qyyfLrQwd7OT2YB\nl9Zlrw8aEN/9R7Lv/lrO44vzp6W+3u8/Box6Oll89qg88uYUgqA7s63YJqKRrnsuO/Ktqr3xBgz4\nSq5LqkhsgeLeAgZ+3UxWsOz4Vd9GT9zbdzYX0A62dvLlq+/YI3f231d/nm1xWAH//qPALwL+Yxvn\n7EqENviIbMEM8uXTgb0daUj9lnt1x2y3jN9SHXCwKXvsraZuh48G9g88GHp0hP82eAP4J+T4s3Z+\n6bjMcliMuJ9hPwaOi5nC3Xtf0wPyu2CAZDnt7W/ecwiwfURO82BPcNiPshvTrr2BzjXZx/Snmtj7\nVFP+m45z8Teip40DDvklcME04MrPTaNxfiATaZJQz2E/ji/jomsfoP9ws+wfo3CxdQvmULSG6oC4\nH+eYCxHHCREPqAGAY242PcorlmaCD/LpgdlUJ2eMbzOX1BYo7t5NlrQ7DZiu6E99o/lHXWeiaYBc\noTh8NHDJ3GzrrH3nXBdB5+2A4b4bzfYaqtobizNY9vypwDcCN9VR1yfLeeIqs+bzjP+0eqvsgSaX\nG6KyChh0OHDwT3L90n6+HXg2+nlTsj+HCVCXnsD5tWb5S46eRpAR12c3uBZ/3a2oB8Mkkz54+dI6\n47Y53jE7M4z2WwMjQzJ6DjwE2OmwTEPefSDwrTtNfbo6zsUfLteusym37U6mJ9G1d24PIEl4nav3\nlwS/gEaJ0x4nmzkTQY74DdBlh9z1APDDPGZKXzDNvCw7fyMjnF12MIZG5+2Tf58l7j6q2QU48Hxz\nf1pDzXWeYeSTO6pAbHnivvsJJsLF7xLIl8pq34UNCEVFJdDF+3N9eVRmKvihVwD9DwIO+JH57Pdx\nA/EDcz0Gme8+y5eDe3hCKyzoVhh+vrHov/ew8W1v1S3bLRM3ANxrL+DUR4FdjkHz+dfsalw3nQL+\n+OAcgmDDMOrpjAXaY7C5OXv70vyeO9kcq4cve2UPbwp5dUfTU9r3NPO5/8HALz8x1vDF72TOe+DX\nso+ZqGEXY3G375w7/f80L4a97wG54w9R333aE8APHsvc6HE9FH/j5Zql2r6Lse7twHPUdavuZF6d\nesQ3Vr32MuMufrr4BDPKTXXkbzKNp3+saeiZwCWzTfBBsMfR2zdI2z7G7dN9oGngrP/65Ad8DY/X\ni0pi8AR7MK7cMWf6MrO67s8ueTQixU5S6GDLE/fqDqY737mVj/nrtbd539HhI7d881YTeQOYP+So\nCcBhvzbxrkFXSVJrsu9+wOnjMwKThE6Bcz3qOuPK6bxdxrftF/ck4jf4cJOOYbSXyOu8KcDxd2S7\nKXrtlZ01z1/nPbzH7/UfbhquMLYfYo51/puZdbY3UFEBnPqIaTQBc7O162SsYf8U/EGHA5cvBE7w\n8pG0Zur3CX82jcUV9UaottvVDMZb7HU8+wXTcLqw4uiPzbc/m3/MxZ8HxZVSoKISuGBqJgFe8Pmn\nlu/cA1z+qXlt1c09n8Lvehp8lHEBjnwwsy5MMHc/ITtazP62l31qjttcV6/h6X9g5np/+6/G1QcY\n913PIcAvPjQuyx+HpNewv++5rxgLvqpdJorNZngM9k788eWWET6X6ME/MfdB8J7sd0DmPvc3nNb1\n2nNItsvvJ7OM+9IfRTPocHed2oDyTj9QTPruZ6Ir/INwSWjXCRjx28zns5/Pb/YjEN2guNh+D+Bi\n++SjkARNdgT/1LHJGxon3vf32CVb9ICMBR3lo4/ie4+YfOjB+nXuad57RfjG23UChnzHxCXHPaEr\nCjv/ocrXQAzwT+Dx6tbny+blwjaArqykldXA1n3M3Iu+B5gBzI49ogcNrRsvmDHT0muv7N+suoNx\nk/gTlA35jmlsPv8AgJryux5jGmRXD6NdFzOr+cRAXibbKwn2dlzW/p4nZZYv8uW5OTRkFmo/37ON\nt9omM9Zk88XY8/f3cirbmfvtK5cY1+PNu+YGLRw+2ry7ol9chs7JD5hJYBUVJurK5gvq2hsYOgoY\nf3Fm3xP/BswZb8au2hiKe2vIV9hd9BlqXsWmW9/o7dZyD1r5eSOZ4+WbdTOOnY80ryA7fd00JP2G\nR+9fWZU7blEoDrwAmPFvYB/H4HUOnrj73U/+KK7TngBeux3Y6VAzDhHXpR/4deDF68z5uzKButw1\ngw7LFveKShP++eK12WMUQZeW5fIF7vVhEWeFmAG9w97u9R27m9DiA7yBfivuFdWmxwZk3IVX1Gc3\nNP45BwddbFw8/hQHrpTE1VtFJ2c77EqTnbSyvQmT3tvxcJk2gOJODF16AYtntD7nxY5fMxbpwT8t\nTL2SMqCV09/jOO+NaJE98trsZwUkwmdNW3dGh67GbfHNP5rPSXIZ9d3PiJhUmqdyLXvP7Peg5/py\niXvP3Y1VWfe8yVkvFZnqJHnAdxhh/vioh1rE0WUH4JgbMy4OFz/zPWHL/oddeV/8Pa5L52W76Kra\nmUHTzj0zPdlmcY94QMqJ92aPjwz7sRH31jy1rABQ3InhhLvME99bmxK1sspk2kwr+/3QWODBRGot\nygMegv9h0JYdv2oG+e3gcL5Ya7XfAeaVdbyQyVxDvm0sy+n/MFZ/Sx5mEscRvzGPxGsJ+51tXFN7\nnpzffs1umZhGKqzh9LuL7ATFKFflkO9kf67eytQ96QzxIkFxJ4aO3YE9ipi4rCXhaaWg157AFUvi\ny7WGYT8CZj+ZPQNYpHVjAS4
"[base64 PNG data omitted: line plot of the training cross-entropy over 1000 steps]\n",
"text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
"%matplotlib inline\n",
"\"\"\"A very simple MNIST classifier.\n",
"\n",
"See extensive documentation at\n",
"https://www.tensorflow.org/get_started/mnist/beginners\n",
"\"\"\"\n",
"from __future__ import absolute_import\n",
"from __future__ import division\n",
"from __future__ import print_function\n",
"\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import tensorflow as tf\n",
"\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"\n",
"# Import data\n",
"mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)\n",
"\n",
"# Create the model: a single linear layer (softmax regression)\n",
"x = tf.placeholder(tf.float32, [None, 784])\n",
"W = tf.Variable(tf.zeros([784, 10]))\n",
"b = tf.Variable(tf.zeros([10]))\n",
"y = tf.matmul(x, W) + b\n",
"\n",
"# Define loss and optimizer\n",
"y_ = tf.placeholder(tf.float32, [None, 10])\n",
"\n",
"# The raw formulation of cross-entropy,\n",
"#\n",
"#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),\n",
"#                                 reduction_indices=[1]))\n",
"#\n",
"# can be numerically unstable, so we use\n",
"# tf.nn.softmax_cross_entropy_with_logits on the raw outputs of 'y'\n",
"# and then average across the batch.\n",
"cross_entropy = tf.reduce_mean(\n",
"    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
"train_step = tf.train.GradientDescentOptimizer(0.8).minimize(cross_entropy)\n",
"#train_step = tf.train.AdagradOptimizer(0.8).minimize(cross_entropy)\n",
"\n",
"# Build the accuracy ops once, outside the training loop, so new nodes\n",
"# are not added to the graph on every iteration\n",
"correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
"\n",
"sess = tf.InteractiveSession()\n",
"tf.global_variables_initializer().run()\n",
"steps = 1000\n",
"cross_entropy_array = []\n",
"accuracy_array = []\n",
"ACCURACY_DURING_TRAINING = 0\n",
"\n",
"# Train\n",
"for i in range(steps):\n",
"    batch_xs, batch_ys = mnist.train.next_batch(100)\n",
"    # sess.run returns one value per fetch; discard train_step's None\n",
"    _, cross_ent = sess.run([train_step, cross_entropy],\n",
"                            feed_dict={x: batch_xs, y_: batch_ys})\n",
"    cross_entropy_array.append(cross_ent)\n",
"    # Optionally track test accuracy while training (slow, so off by default)\n",
"    if ACCURACY_DURING_TRAINING:\n",
"        accuracy_array.append(sess.run(accuracy,\n",
"                                       feed_dict={x: mnist.test.images,\n",
"                                                  y_: mnist.test.labels}))\n",
"    if i % 100 == 0:\n",
"        if ACCURACY_DURING_TRAINING:\n",
"            print(\"Accuracy at step \", i, \" = \", accuracy_array[i])\n",
"        print(\"Cross entropy at step \", i, \" = \", cross_entropy_array[i])\n",
"\n",
"print(\"Cross entropy at step \", i, \" = \", cross_entropy_array[i])\n",
"\n",
"# Test trained model\n",
"print(\"Accuracy is\")\n",
"print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
"                                    y_: mnist.test.labels}))\n",
"\n",
"# Plot the training cross-entropy (and accuracy, if recorded)\n",
"plt.plot(np.arange(0, steps), cross_entropy_array)\n",
"if ACCURACY_DURING_TRAINING:\n",
"    plt.plot(np.arange(0, steps), accuracy_array)\n",
"\n",
"sess.close() # Close the interactive session" ] },
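{ "cell_type": "markdown", "metadata": {}, "source": [ "### Aside: why tf.nn.softmax_cross_entropy_with_logits? Once one logit gets large, tf.nn.softmax saturates to exactly 0 and 1, tf.log(0) is -inf, and 0 * -inf is nan, so the raw formulation can blow up. The fused op applies log-sum-exp to the logits directly and stays finite. Below is a minimal sketch of the difference (not part of the original tutorial; the extreme example logits are made up):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"# A minimal sketch (not from the original tutorial): compare the raw\n",
"# cross-entropy formulation with the numerically stable fused op.\n",
"logits = tf.constant([[1000.0, 0.0]])  # one very large logit\n",
"labels = tf.constant([[1.0, 0.0]])     # the true class is the first one\n",
"\n",
"# Raw formulation: softmax saturates to [1., 0.], log(0.) = -inf, 0 * -inf = nan\n",
"raw = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)),\n",
"                     reduction_indices=[1])\n",
"\n",
"# Fused op computes log-sum-exp over the logits and stays finite\n",
"stable = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)\n",
"\n",
"with tf.Session() as sess:\n",
"    print(sess.run(raw))     # [ nan] -- unstable\n",
"    print(sess.run(stable))  # [ 0.] -- the correct loss\n"
] },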
{ "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'1.3.0'" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tf.__version__" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 1 }