
TensorFlow ML Cookbook, Chapter 2, Section 8: Evaluating Models

Guiding questions:
1. What does it mean to evaluate a model?
2. How do we measure how well a model performs?
3. How do we evaluate a simple regression model?
4. How do we declare a loss function and an optimization algorithm in TensorFlow?



Previous: TensorFlow ML Cookbook, Chapter 2, Sections 6-7: Working with Batch and Stochastic Training, and Combining Everything Together

Evaluating Models

We have learned how to train regression and classification algorithms in TensorFlow. Once that is accomplished, we must be able to evaluate the model's predictions to determine how well it did.

Getting ready
Evaluating models is very important, and every subsequent model will include some form of model evaluation. With TensorFlow, we must build this functionality into the computational graph and call it during and/or after model training.

Evaluating a model during training gives us insight into the algorithm and may offer hints for debugging it, improving it, or changing the model entirely. While evaluation during training is not always necessary, we will show how to do it for both regression and classification.

After training, we need to quantify how the model performs on the data. Ideally, we have separate training and test sets (and even a validation set) on which to evaluate the model.

When we want to evaluate a model, we will want to do so on a large batch of data points. If we have implemented batch training, we can reuse our model to make predictions on such a batch. If we have implemented stochastic training, we may have to create a separate evaluator that can process data in batches.
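As a concrete illustration of such a separate evaluator, here is a minimal NumPy sketch (not from the recipe; the function name, the predict_fn argument, and the batch size are all illustrative) that scores a model over fixed-size batches:
[mw_shl_code=python,true]import numpy as np

def evaluate_in_batches(x, y, predict_fn, batch_size=25):
    # Accumulate the per-batch MSE and average at the end, so a model
    # trained stochastically can still be scored on many points at once.
    losses = []
    for start in range(0, len(x), batch_size):
        x_batch = x[start:start + batch_size]
        y_batch = y[start:start + batch_size]
        preds = predict_fn(x_batch)
        losses.append(np.mean((preds - y_batch) ** 2))
    return float(np.mean(losses))

# Toy usage with a constant-multiplier model like the one trained below:
x = np.random.normal(1, 0.1, 100)
y = np.repeat(10., 100)
print(evaluate_in_batches(x, y, lambda xb: 10. * xb))[/mw_shl_code]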

If we included a transformation of the model output inside the loss function, for example sigmoid_cross_entropy_with_logits(), we must take that into account when computing predictions for the accuracy calculation. Don't forget to include this in our evaluation of the model.
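As a minimal sketch of what that means in practice (the logit values here are made up): since the loss consumed raw logits, the accuracy computation must apply the sigmoid itself before thresholding:
[mw_shl_code=python,true]import tensorflow as tf

logits = tf.constant([2.0, -1.0])             # illustrative raw model outputs
prediction = tf.round(tf.nn.sigmoid(logits))  # apply sigmoid, then threshold at 0.5 -> [1., 0.][/mw_shl_code]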

Classification models predict a category from numerical inputs. The actual targets are sequences of 1s and 0s, and we must measure how close our predictions come to that truth. The loss function of a classification model is usually not very helpful for interpreting how well the model is doing. Instead, we usually want some measure of classification accuracy, commonly the percentage of correctly predicted categories. For this example, we will reuse the classification example from the Implementing Back Propagation recipe earlier in this chapter.
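As a tiny worked example of that accuracy measure (the numbers are made up), accuracy is just the mean of the correct-prediction indicators after thresholding at 0.5:
[mw_shl_code=python,true]import numpy as np

probs = np.array([0.9, 0.2, 0.7, 0.4])  # hypothetical predicted probabilities
targets = np.array([1., 0., 0., 0.])    # true 0/1 labels
preds = np.round(probs)                 # threshold at 0.5
print(np.mean(preds == targets))        # 0.75, i.e. 75% of categories predicted correctly[/mw_shl_code]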

How it works…
First we will show how to evaluate the simple regression model, which simply fits a constant multiplication to the target of 10, as follows:

1. First, we load the libraries and create the graph, data, variables, and placeholders. There is an additional, very important part to this section: after we create the data, we split it randomly into training and testing datasets. This is important because we always want to test whether our models are predicting well or not. Evaluating the model on both the training data and the test data also lets us see whether the model is overfitting (an equivalent way to build the split is sketched after the block):
[mw_shl_code=python,true]import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
sess = tf.Session()
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
batch_size = 25
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
A = tf.Variable(tf.random_normal(shape=[1,1]))[/mw_shl_code]
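As an aside on the split above, the set-difference trick is equivalent to NumPy's built-in np.setdiff1d, which some readers may find clearer; both yield the same held-out indices:
[mw_shl_code=python,true]# Equivalent way to compute the held-out indices:
test_indices = np.setdiff1d(np.arange(len(x_vals)), train_indices)[/mw_shl_code]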

2. Now we declare our model, loss function, and optimization algorithm, and initialize the model variable A. Use the following code (a note on newer TensorFlow versions follows the block):
[mw_shl_code=python,true]my_output = tf.matmul(x_data, A)
loss = tf.reduce_mean(tf.square(my_output - y_target))
init = tf.initialize_all_variables()
sess.run(init)
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)
[/mw_shl_code]
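A caveat for anyone running this on a newer TensorFlow: tf.initialize_all_variables() was deprecated around TensorFlow 0.12 in favor of tf.global_variables_initializer(), so on 1.x the initialization above becomes:
[mw_shl_code=python,true]# TensorFlow 0.12+ replacement for the deprecated initializer:
init = tf.global_variables_initializer()
sess.run(init)[/mw_shl_code]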
3. We run the training loop just as before, as follows:
[mw_shl_code=python,true]for i in range(100):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = np.transpose([x_vals_train[rand_index]])
    rand_y = np.transpose([y_vals_train[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1)%25==0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))
Step #25 A = [[ 6.39879179]]
Loss = 13.7903
Step #50 A = [[ 8.64770794]]
Loss = 2.53685
Step #75 A = [[ 9.40029907]]
Loss = 0.818259
Step #100 A = [[ 9.6809473]]
Loss = 1.10908 [/mw_shl_code]
4. Now, to evaluate the model, we output the MSE (the loss function) on the training and test sets, as follows (an optional NumPy cross-check is sketched after the block):
[mw_shl_code=python,true]mse_test = sess.run(loss, feed_dict={x_data: np.transpose([x_vals_ test]), y_target: np.transpose([y_vals_test])})
mse_train = sess.run(loss, feed_dict={x_data: np.transpose([x_ vals_train]), y_target: np.transpose([y_vals_train])})
print('MSE' on test:' + str(np.round(mse_test, 2)))
print('MSE' on train:' + str(np.round(mse_train, 2)))
MSE on test:1.35
MSE on train:0.88
[/mw_shl_code]
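As an optional sanity check (not part of the recipe), the test MSE can be recomputed by hand with NumPy from the trained multiplier; the name A_val is illustrative:
[mw_shl_code=python,true]# Recompute the test MSE directly from the trained value of A:
A_val = sess.run(A)[0][0]
manual_mse = np.mean((x_vals_test * A_val - y_vals_test) ** 2)
print(np.round(manual_mse, 2))  # should agree with mse_test above[/mw_shl_code]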
5. For the classification example, we will do something very similar. This time, we need to create our own accuracy function that we can call at the end. One reason for this is that our loss function has the sigmoid built in, so we need to apply the sigmoid separately and test the result to see whether our classes are correct.

6. In the same script, we can simply reset the graph and create our data, variables, and placeholders. Remember that we also need to split the data and targets into training and testing sets. Use the following code:
[mw_shl_code=python,true]from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.Session()
batch_size = 25
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(2, 1, 50)))
y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))
x_data = tf.placeholder(shape=[1, None], dtype=tf.float32)
y_target = tf.placeholder(shape=[1, None], dtype=tf.float32)
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
A = tf.Variable(tf.random_normal(mean=10, shape=[1]))[/mw_shl_code]

7. We now add the model and the loss function to the graph, initialize the variables, and create the optimization procedure, as follows (a note on the newer call signature follows the block):
[mw_shl_code=python,true]my_output = tf.add(x_data, A)
init = tf.initialize_all_variables()
sess.run(init)
xentropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(my_output, y_target))
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(xentropy) [/mw_shl_code]
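Note that the positional call to sigmoid_cross_entropy_with_logits() above only works on very old TensorFlow builds; from TensorFlow 1.0 onward the arguments must be passed by keyword, roughly like this:
[mw_shl_code=python,true]# TensorFlow 1.0+ requires named arguments for this loss:
xentropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output, labels=y_target))[/mw_shl_code]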

8. Now we run our training loop, as follows:
[mw_shl_code=python,true]for i in range(1800):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = [x_vals_train[rand_index]]
    rand_y = [y_vals_train[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1)%200==0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))
Step #200 A = [ 6.64970636]
Loss = 3.39434
Step #400 A = [ 2.2884655]
Loss = 0.456173
Step #600 A = [ 0.29109824]
Loss = 0.312162
Step #800 A = [-0.20045301]
Loss = 0.241349
Step #1000 A = [-0.33634067]
Loss = 0.376786
Step #1200 A = [-0.36866501]
Loss = 0.271654
Step #1400 A = [-0.3727718]
Loss = 0.294866
Step #1600 A = [-0.39153299]
Loss = 0.202275
Step #1800 A = [-0.36630616]
Loss = 0.358463 [/mw_shl_code]

9. To evaluate the model, we create our own prediction operation. We wrap the prediction operation in a squeeze function because we want the predictions and targets to have the same shape. Then we test for equality with the equal function, which leaves us with a tensor of true and false values; we cast these to float32 and take their mean, which yields an accuracy value. We will evaluate this for both the training and testing sets, as follows:
[mw_shl_code=python,true]y_prediction = tf.squeeze(tf.round(tf.nn.sigmoid(tf.add(x_data, A))))
correct_prediction = tf.equal(y_prediction, y_target)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
acc_value_test = sess.run(accuracy, feed_dict={x_data: [x_vals_test], y_target: [y_vals_test]})
acc_value_train = sess.run(accuracy, feed_dict={x_data: [x_vals_train], y_target: [y_vals_train]})
print('Accuracy on train set: ' + str(acc_value_train))
print('Accuracy on test set: ' + str(acc_value_test))
Accuracy on train set: 0.925
Accuracy on test set: 0.95[/mw_shl_code]

10. Often, seeing the model's results (accuracy, MSE, and so on) helps us evaluate the model. We can also easily graph the model and the data here because the data is one-dimensional. This is how to visualize the model and the data with two separate histograms using matplotlib:
[mw_shl_code=python,true]A_result = sess.run(A)
bins = np.linspace(-5, 5, 50)
plt.hist(x_vals[0:50], bins, alpha=0.5, label='N(-1,1)', color='white')
plt.hist(x_vals[50:100], bins[0:50], alpha=0.5, label='N(2,1)', color='red')
plt.plot((A_result, A_result), (0, 8), 'k--', linewidth=3, label='A = '+ str(np.round(A_result, 2)))
plt.legend(loc='upper right')
plt.title('Binary Classifier, Accuracy=' + str(np.round(acc_value_test, 2)))  # test accuracy from step 9
plt.show()
[/mw_shl_code]


Figure 8: Visualization of the data and the final model, A. The two normal distributions are centered at -1 and 2, making the theoretical best split 0.5. Here the model found a best split very close to that number.
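A short note on reading the figure: with the model sigmoid(x + A), the predicted class flips from 0 to 1 exactly where x + A = 0, so the learned decision boundary sits at -A, which we can check directly (a sketch using the session from the run above):
[mw_shl_code=python,true]# sigmoid(x + A) >= 0.5 exactly when x >= -A, so the boundary is at -A:
boundary = -sess.run(A)[0]
print(boundary)  # roughly 0.37 for the run above, near the theoretical 0.5[/mw_shl_code]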


