Why am I getting this error?
Source: 4-22 Hands-On CNN (Convolutional Neural Network) Implementation, Part 4
用代码把梦想照进现实
2018-06-20
D:\ProgramData\Anaconda3\envs\python3.6\python.exe C:\Users\Administrator\.IntelliJIdea2018.1\config\plugins\python\helpers\pydev\pydev_run_in_console.py 61990 61991 D:/IdeaProjects/TensorFlow_Exercises/HelloWorld/cnn_mnist.py
Running D:/IdeaProjects/TensorFlow_Exercises/HelloWorld/cnn_mnist.py
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['D:\\IdeaProjects\\TensorFlow_Exercises', 'D:/IdeaProjects/TensorFlow_Exercises/HelloWorld'])
WARNING:tensorflow:From D:/IdeaProjects/TensorFlow_Exercises/HelloWorld/cnn_mnist.py:6: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting mnist_data\train-images-idx3-ubyte.gz
Extracting mnist_data\train-labels-idx1-ubyte.gz
WARNING:tensorflow:From D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting mnist_data\t10k-images-idx3-ubyte.gz
WARNING:tensorflow:From D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting mnist_data\t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
2018-06-20 17:49:02.072827: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Traceback (most recent call last):
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 1322, in _do_call
    return fn(*args)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be broadcastable: logits_size=[100,10] labels_size=[50,10]
	 [[Node: softmax_cross_entropy_loss/xentropy = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_loss/xentropy/Reshape, softmax_cross_entropy_loss/xentropy/Reshape_1)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Administrator\.IntelliJIdea2018.1\config\plugins\python\helpers\pydev\pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Users\Administrator\.IntelliJIdea2018.1\config\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/IdeaProjects/TensorFlow_Exercises/HelloWorld/cnn_mnist.py", line 159, in <module>
    {input_x: batch[0], output_y: batch[1]}
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 900, in run
    run_metadata_ptr)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run
    run_metadata)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be broadcastable: logits_size=[100,10] labels_size=[50,10]
	 [[Node: softmax_cross_entropy_loss/xentropy = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_loss/xentropy/Reshape, softmax_cross_entropy_loss/xentropy/Reshape_1)]]

Caused by op 'softmax_cross_entropy_loss/xentropy', defined at:
  File "C:\Users\Administrator\.IntelliJIdea2018.1\config\plugins\python\helpers\pydev\pydev_run_in_console.py", line 150, in <module>
    globals = run_file(file, None, None, is_module)
  File "C:\Users\Administrator\.IntelliJIdea2018.1\config\plugins\python\helpers\pydev\pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Users\Administrator\.IntelliJIdea2018.1\config\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/IdeaProjects/TensorFlow_Exercises/HelloWorld/cnn_mnist.py", line 124, in <module>
    onehot_labels=output_y, logits=logits
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py", line 749, in softmax_cross_entropy
    labels=onehot_labels, logits=logits, name="xentropy")
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1873, in softmax_cross_entropy_with_logits_v2
    precise_logits, labels, name=name)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 7168, in softmax_cross_entropy_with_logits
    name=name)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\framework\ops.py", line 3392, in create_op
    op_def=op_def)
  File "D:\ProgramData\Anaconda3\envs\python3.6\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): logits and labels must be broadcastable: logits_size=[100,10] labels_size=[50,10]
	 [[Node: softmax_cross_entropy_loss/xentropy = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_loss/xentropy/Reshape, softmax_cross_entropy_loss/xentropy/Reshape_1)]]

PyDev console: starting.
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 12:30:02) [MSC v.1900 64 bit (AMD64)] on win32
3 Answers
You changed my code yourself, and there are errors in several places:
Your code is missing # -*- coding: UTF-8 -*- at the top.
pool2's inputs should be conv2, but you wrote conv1 (this mis-wiring is what produces the [100,10] vs [50,10] shape error; see the sketch after this list).
conv2's inputs should be pool1, but you wrote input_x_images.
In the line test_output = sess.run(logits, {input_x: test_x[:20]}), input_x must be followed by a colon (:), but you wrote a comma (,) (see the dict-vs-set example after the corrected code).
The final test block (from "# Test: print 20 predictions and the corresponding real values" to the end) should not be indented inside the training loop.
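Why the mis-wired pool2 produces exactly "logits_size=[100,10] labels_size=[50,10]": with pool2 reading conv1 (28*28*32 pooled down to 14*14*32), each sample reaching the flatten step carries 14*14*32 = 6272 values, which is exactly twice the 7*7*64 = 3136 that tf.reshape(pool2, [-1, 7 * 7 * 64]) expects, so the -1 dimension silently doubles the batch from 50 rows to 100. A minimal NumPy sketch of just that reshape (shapes taken from the code in this thread, nothing else assumed):

import numpy as np

# pool2 as it comes out of the mis-wired graph: [batch, 14, 14, 32]
pool2_wrong = np.zeros((50, 14, 14, 32), dtype=np.float32)

# the flatten step assumes 7*7*64 = 3136 features per sample, but each
# sample really holds 14*14*32 = 6272 = 2 * 3136 values, so the -1
# dimension doubles the batch: 50 -> 100
flat = pool2_wrong.reshape(-1, 7 * 7 * 64)
print(flat.shape)  # (100, 3136) -> logits become [100, 10] vs labels [50, 10]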
The correct code should be:
# -*- coding: UTF-8 -*-

import tensorflow as tf
import numpy as np

# Download and load the MNIST handwritten digit dataset (55000 * 28 * 28): 55000 training images
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('mnist_data', one_hot=True)

# one_hot: one-hot encoding
# the ten digits 0,1,2,3,4,5,6,7,8,9
# one-hot encoding represents these 10 digits in a distinctive form:
# 0 : 1000000000
# 1 : 0100000000
# 2 : 0010000000
# 3 : 0001000000
# 4 : 0000100000
# 5 : 0000010000
# and so on...

# Build the input data
# None means the first dimension of the Tensor can have any length
# Divide by 255 because black-and-white (grayscale) images have gray values in the range 0-255
input_x = tf.placeholder(tf.float32, [None, 28 * 28]) / 255.
# Output: labels for the 10 digits
output_y = tf.placeholder(tf.int32, [None, 10])
# The input after reshaping
input_x_images = tf.reshape(input_x, [-1, 28, 28, 1])

# There are training, test, and validation datasets;
# here we use the training and test sets.
# Take 3000 handwritten digit images and their labels from the Test set
# Images
test_x = mnist.test.images[:3000]
# Labels
test_y = mnist.test.labels[:3000]

# Build our convolutional neural network
# conv: convolution, 2d: two-dimensional
# 1st convolutional layer
conv1 = tf.layers.conv2d(
    # shape is 28*28*1
    inputs=input_x_images,
    # 32 filters, output depth is 32
    filters=32,
    # each filter is 5 * 5 in two dimensions
    kernel_size=[5, 5],
    # stride is 1 (sample at every step)
    strides=1,
    # 'same' keeps the output size unchanged, so 2 rings of zeros are padded around the edge
    padding='same',
    # the activation function is ReLU
    activation=tf.nn.relu
    # shape becomes 28*28*32
)

# Pooling layer (subsampling): keeps only part of the data
# Pool: 2x2, stride: 2
# 1st pooling layer (subsampling)
pool1 = tf.layers.max_pooling2d(
    # shape 28*28*32
    inputs=conv1,
    # the pooling window is 2 * 2 in two dimensions
    pool_size=[2, 2],
    # stride is 2
    strides=2
    # shape [14, 14, 32]
)

# 2nd convolutional layer
conv2 = tf.layers.conv2d(
    # shape is 14*14*32
    inputs=pool1,
    # 64 filters, output depth is 64
    filters=64,
    # each filter is 5 * 5 in two dimensions
    kernel_size=[5, 5],
    # stride is 1 (sample at every step)
    strides=1,
    # 'same' keeps the output size unchanged, so 2 rings of zeros are padded around the edge
    padding='same',
    # the activation function is ReLU
    activation=tf.nn.relu
    # shape becomes 14*14*64
)

# Pool: 2x2, stride: 2
# 2nd pooling layer (subsampling)
pool2 = tf.layers.max_pooling2d(
    # shape 14*14*64
    inputs=conv2,
    # the pooling window is 2 * 2 in two dimensions
    pool_size=[2, 2],
    # stride is 2
    strides=2
    # shape [7, 7, 64]
)

# Flatten
# shape [7 * 7 * 64]
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])

# Fully connected layer with 1024 neurons
# dense: fully connected
dense = tf.layers.dense(
    inputs=flat,
    # 1024 neurons
    units=1024,
    # activation function
    activation=tf.nn.relu
)

# Dropout: drop 50%, drop rate: 0.5
dropout = tf.layers.dropout(
    inputs=dense,
    # drop rate
    rate=0.5
)

# Fully connected layer with 10 neurons; no activation function for non-linearity here
# Output. Shape [1, 1, 10]
logits = tf.layers.dense(inputs=dropout, units=10)

# Compute the loss (compute the Cross entropy, then use the softmax
# activation function to get percentage probabilities)
loss = tf.losses.softmax_cross_entropy(
    onehot_labels=output_y, logits=logits
)

# Adam optimizer to minimize the loss, learning rate 0.001
# minimize: minimize the error between predictions and true values; the smaller the better
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

# Accuracy: measures how well the predictions match the actual labels
# (accuracy, update_op) creates 2 local variables
# It returns 2 values: the first is the accuracy, the second is the update op (the continuously updated accuracy)
accuracy = tf.metrics.accuracy(
    # argmax returns the index of the maximum value along the axis
    labels=tf.argmax(output_y, axis=1),
    # predictions
    predictions=tf.argmax(logits, axis=1)
)[1]

# Create a session
with tf.Session() as sess:
    # initialize the variables, create a group
    init = tf.group(
        # global initialization
        tf.global_variables_initializer(),
        # local initialization
        tf.local_variables_initializer()
    )
    sess.run(init)

    # train for 20,000 steps
    for i in range(20000):
        # take the next 50 samples from the Train set
        batch = mnist.train.next_batch(50)
        # compute the loss and run the Adam optimizer
        train_loss, train_op_ = sess.run(
            [loss, train_op],
            {input_x: batch[0], output_y: batch[1]}
        )
        if i % 100 == 0:
            test_accuracy = sess.run(accuracy, {input_x: test_x, output_y: test_y})
            print("Step=%d, Train loss=%.4f, [Test accuracy=%.2f]"
                  % (i, train_loss, test_accuracy))

    # Test: print 20 predictions and the corresponding real values
    test_output = sess.run(logits, {input_x: test_x[:20]})
    # the inferred digits
    inferenced_y = np.argmax(test_output, 1)
    print(inferenced_y, 'Inferenced numbers')
    # the real digits
    print(np.argmax(test_y[:20], 1), 'Real numbers')
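On the colon-versus-comma point above: {input_x: test_x[:20]} is a Python dict literal, which is what sess.run expects for its feed_dict, while {input_x, test_x[:20]} is a set literal. A quick plain-Python illustration (the strings and lists here are just stand-ins for the real tensors and arrays):

feed = {'input_x': [1, 2, 3]}        # colon -> dict, what feed_dict expects
print(type(feed))                    # <class 'dict'>

try:
    oops = {'input_x', [1, 2, 3]}    # comma -> set literal; a list (like a
except TypeError as err:             # NumPy batch) is unhashable, so this fails
    print('set literal fails:', err) # TypeError: unhashable type: 'list'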
Oscar
2018-06-20
Why is your code still the old version?
I updated the code in March 2018. The new code should look like this:
If you bought my course a long time ago, please reset the project; see this answer:
用代码把梦想照进现实
Original poster
2018-06-20
Code:
import tensorflow as tf
import numpy as np

# Download and load the MNIST handwritten digit dataset (55000 * 28 * 28): 55000 training images
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('mnist_data', one_hot=True)

# one_hot: one-hot encoding
# the ten digits 0,1,2,3,4,5,6,7,8,9
# one-hot encoding represents these 10 digits in a distinctive form:
# 0 : 1000000000
# 1 : 0100000000
# 2 : 0010000000
# 3 : 0001000000
# 4 : 0000100000
# 5 : 0000010000
# and so on...

# Build the input data
# None means the first dimension of the Tensor can have any length
# Divide by 255 because black-and-white (grayscale) images have gray values in the range 0-255
input_x = tf.placeholder(tf.float32, [None, 28 * 28]) / 255.
# Output: labels for the 10 digits
output_y = tf.placeholder(tf.int32, [None, 10])
# The input after reshaping
input_x_images = tf.reshape(input_x, [-1, 28, 28, 1])

# There are training, test, and validation datasets;
# here we use the training and test sets.
# Take 3000 handwritten digit images and their labels from the Test set
# Images
test_x = mnist.test.images[:3000]
# Labels
test_y = mnist.test.labels[:3000]

# Build our convolutional neural network
# conv: convolution, 2d: two-dimensional
# 1st convolutional layer
conv1 = tf.layers.conv2d(
    # shape is 28*28*1
    inputs=input_x_images,
    # 32 filters, output depth is 32
    filters=32,
    # each filter is 5 * 5 in two dimensions
    kernel_size=[5, 5],
    # stride is 1 (sample at every step)
    strides=1,
    # 'same' keeps the output size unchanged, so 2 rings of zeros are padded around the edge
    padding='same',
    # the activation function is ReLU
    activation=tf.nn.relu
    # shape becomes 28*28*32
)

# Pooling layer (subsampling): keeps only part of the data
# Pool: 2x2, stride: 2
# 1st pooling layer (subsampling)
pool1 = tf.layers.max_pooling2d(
    # shape 28*28*32
    inputs=conv1,
    # the pooling window is 2 * 2 in two dimensions
    pool_size=[2, 2],
    # stride is 2
    strides=2
    # shape [14, 14, 32]
)

# 2nd convolutional layer
conv2 = tf.layers.conv2d(
    # shape is 14*14*32
    inputs=input_x_images,
    # 64 filters, output depth is 64
    filters=64,
    # each filter is 5 * 5 in two dimensions
    kernel_size=[5, 5],
    # stride is 1 (sample at every step)
    strides=1,
    # 'same' keeps the output size unchanged, so 2 rings of zeros are padded around the edge
    padding='same',
    # the activation function is ReLU
    activation=tf.nn.relu
    # shape becomes 14*14*64
)

# Pool: 2x2, stride: 2
# 2nd pooling layer (subsampling)
pool2 = tf.layers.max_pooling2d(
    # shape 14*14*64
    inputs=conv1,
    # the pooling window is 2 * 2 in two dimensions
    pool_size=[2, 2],
    # stride is 2
    strides=2
    # shape [7, 7, 64]
)

# Flatten
# shape [7 * 7 * 64]
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])

# Fully connected layer with 1024 neurons
# dense: fully connected
dense = tf.layers.dense(
    inputs=flat,
    # 1024 neurons
    units=1024,
    # activation function
    activation=tf.nn.relu
)

# Dropout: drop 50%, drop rate: 0.5
dropout = tf.layers.dropout(
    inputs=dense,
    # drop rate
    rate=0.5
)

# Fully connected layer with 10 neurons; no activation function for non-linearity here
# Output. Shape [1, 1, 10]
logits = tf.layers.dense(inputs=dropout, units=10)

# Compute the loss (compute the Cross entropy, then use the softmax
# activation function to get percentage probabilities)
loss = tf.losses.softmax_cross_entropy(
    onehot_labels=output_y, logits=logits
)

# Adam optimizer to minimize the loss, learning rate 0.001
# minimize: minimize the error between predictions and true values; the smaller the better
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

# Accuracy: measures how well the predictions match the actual labels
# (accuracy, update_op) creates 2 local variables
# It returns 2 values: the first is the accuracy, the second is the update op (the continuously updated accuracy)
accuracy = tf.metrics.accuracy(
    # argmax returns the index of the maximum value along the axis
    labels=tf.argmax(output_y, axis=1),
    # predictions
    predictions=tf.argmax(logits, axis=1)
)[1]

# Create a session
with tf.Session() as sess:
    # initialize the variables, create a group
    init = tf.group(
        # global initialization
        tf.global_variables_initializer(),
        # local initialization
        tf.local_variables_initializer()
    )
    sess.run(init)

    # train for 20,000 steps
    for i in range(20000):
        # take the next 50 samples from the Train set
        batch = mnist.train.next_batch(50)
        # compute the loss and run the Adam optimizer
        train_loss, train_op_ = sess.run(
            [loss, train_op],
            {input_x: batch[0], output_y: batch[1]}
        )
        if i % 100 == 0:
            test_accuracy = sess.run(accuracy, {input_x: test_x, output_y: test_y})
            print("Step=%d, Train loss=%.4f, [Test accuracy=%.2f]"
                  % (i, train_loss, test_accuracy))

        # Test: print 20 predictions and the corresponding real values
        test_output = sess.run(logits, {input_x, test_x[:20]})
        # the inferred digits
        inferenced_y = np.argmax(test_output, 1)
        print(inferenced_y, 'Inferenced numbers')
        # the real digits
        print(np.argmax(test_y[:20], 1), 'Real numbers')
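A quick way to see the mis-wiring in this version is to print the static shapes right after the layers are defined. Hypothetically, adding these lines after flat is built (they are not part of the original script) would show where the graph diverges from its own comments:

print(conv1.shape)  # (?, 28, 28, 32)
print(pool1.shape)  # (?, 14, 14, 32) -- built, but nothing downstream uses it
print(conv2.shape)  # (?, 28, 28, 64) -- reads input_x_images instead of pool1
print(pool2.shape)  # (?, 14, 14, 32) -- reads conv1 instead of conv2
print(flat.shape)   # (?, 3136) statically, but each sample carries
                    # 14*14*32 = 6272 values, so at run time the batch
                    # dimension doubles (50 -> 100)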