Why does test_y only work when sliced to 10 samples? Otherwise I get an error??
Source: 4-23 Hands-on CNN Implementation (Part 5)
慕桂英雄
2019-07-01
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('mnist_data', one_hot=True)
input_x = tf.placeholder(tf.float32,[None,28*28])/255
output_y = tf.placeholder(tf.int32,[None,10])
input_x_images = tf.reshape(input_x,[-1,28,28,1])
# MNIST: 55,000 training images, 28x28x1 each; the classes are the digits 0-9.
# The test set has 10,000 images.
test_x = mnist.test.images[:10]  # images ?????
test_y = mnist.test.labels[:10]  # labels ??? If I write 3000 here, it errors with a dimension mismatch???
The error raised is: InvalidArgumentError (see above for traceback): Incompatible shapes: [3000] vs. [10]
[[node Equal (defined at <ipython-input-3-5e1ce78fd86f>:71) ]]
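A quick NumPy sketch of why the error appears (the arrays below are random placeholders, not MNIST data): the accuracy op compares predictions and labels element-wise, so both sides must have the same length.

```python
import numpy as np

# Random placeholder arrays shaped like the feed above.
predictions = np.argmax(np.random.rand(10, 10), axis=1)   # 10 images fed in -> 10 predictions
labels_ok = np.argmax(np.random.rand(10, 10), axis=1)     # 10 labels: shapes line up
labels_bad = np.argmax(np.random.rand(3000, 10), axis=1)  # 3000 labels: nothing to pair with

print(predictions.shape == labels_ok.shape)   # True: element-wise == is well-defined
print(predictions.shape == labels_bad.shape)  # False: TF reports Incompatible shapes [3000] vs. [10]
```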
## Build the convolutional neural network
## First convolutional layer
conv1 = tf.layers.conv2d(inputs=input_x_images,
                         filters=32,             # 32 filters/kernels, so the output depth is 32
                         kernel_size=[5, 5],     # kernel size
                         strides=1,              # stride 1
                         padding='same',         # 'same' keeps the spatial size by zero-padding the border
                         activation=tf.nn.relu   # activation function
                         )  # shape [28, 28, 32]
print(conv1.shape)
pool1 = tf.layers.max_pooling2d(inputs=conv1,
                                strides=2,
                                pool_size=[2, 2],
                                )  # shape [14, 14, 32]
conv2 = tf.layers.conv2d(inputs=pool1,
                         filters=64,             # 64 filters/kernels, so the output depth is 64
                         kernel_size=[5, 5],     # kernel size
                         strides=1,              # stride 1
                         padding='same',         # 'same' keeps the spatial size by zero-padding the border
                         activation=tf.nn.relu   # activation function
                         )  # shape [14, 14, 64]
print(conv2.shape)
pool2 = tf.layers.max_pooling2d(inputs=conv2,
                                strides=2,
                                pool_size=[2, 2],
                                )  # shape [7, 7, 64]
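The shape bookkeeping above (28 → 14 → 7) can be checked with a tiny helper; `same_conv_then_pool` is an illustrative name, not part of the lesson code:

```python
# Illustrative helper reproducing the shape arithmetic: padding='same' with
# stride 1 keeps height/width, and 2x2 max-pooling with stride 2 halves them.
def same_conv_then_pool(size, pool=2):
    conv_out = size          # 'same' padding, stride 1: size unchanged
    return conv_out // pool  # pooling with stride 2 halves the size

h = same_conv_then_pool(28)  # conv1 + pool1: 28 -> 14
h = same_conv_then_pool(h)   # conv2 + pool2: 14 -> 7
flat_size = h * h * 64       # 7 * 7 * 64 = 3136 features per image
print(h, flat_size)          # -> 7 3136
```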
## Flatten
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])  # 7*7*64 = 3136 features per image
## Fully connected layer with 1024 neurons
dense = tf.layers.dense(inputs=flat,
                        units=1024,
                        activation=tf.nn.relu
                        )
# Dropout with a 0.5 drop rate
dropout = tf.layers.dropout(inputs=dense,
                            rate=0.5)
## Fully connected layer with 10 neurons; no activation here, the outputs stay as raw logits
logits = tf.layers.dense(inputs=dropout, units=10)  # output, shape [None, 10]
print(logits.shape)
## Compute the loss (cross entropy)
loss = tf.losses.softmax_cross_entropy(onehot_labels=output_y, logits=logits)
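As a sanity check, what softmax cross-entropy computes can be sketched in plain NumPy (an illustrative reimplementation, not what tf.losses runs internally): softmax over the logits, then the negative log-probability of the true class, averaged over the batch.

```python
import numpy as np

def softmax_cross_entropy(onehot_labels, logits):
    shifted = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -np.mean(np.sum(onehot_labels * np.log(probs), axis=1))

labels = np.eye(10)[[3, 7]]  # one-hot labels for digits 3 and 7
logits = np.zeros((2, 10))
logits[0, 3] = 10.0          # sample 0: confident and correct -> near-zero loss
                             # sample 1: uniform logits -> loss = log(10) ≈ 2.30
xent = softmax_cross_entropy(labels, logits)
print(xent)                  # average of the two, ≈ 1.15
```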
## Adam optimizer with learning rate 0.001
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
# Accuracy: how well the predictions match the true labels.
# tf.metrics.accuracy returns (accuracy, update_op) and creates two local variables.
# Note: labels needs axis=1 as well. The original tf.argmax(output_y) reduces over
# the batch axis and yields shape [10] (one value per class), which only happens to
# match predictions when the test batch also has 10 samples -- hence the
# "Incompatible shapes: [3000] vs. [10]" error with a larger slice.
accuracy = tf.metrics.accuracy(labels=tf.argmax(output_y, axis=1),
                               predictions=tf.argmax(logits, axis=1))[1]
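What the accuracy metric computes can be mimicked in NumPy (toy 3-class data, illustrative only): argmax both tensors along the class axis, then average the element-wise matches.

```python
import numpy as np

labels = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 0]])         # one-hot true labels: 1, 0, 2, 1
logits = np.array([[0.1, 2.0, 0.3],   # predicts 1 (correct)
                   [1.5, 0.2, 0.1],   # predicts 0 (correct)
                   [0.9, 0.1, 0.4],   # predicts 0 (wrong, label is 2)
                   [0.0, 3.0, 1.0]])  # predicts 1 (correct)
acc = np.mean(np.argmax(labels, axis=1) == np.argmax(logits, axis=1))
print(acc)  # 3 of 4 correct -> 0.75
```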
print(accuracy)
# Create a session
sess = tf.Session()
# Initialize the variables: global and local
init = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
sess.run(init)
for i in range(20000):
    batch = mnist.train.next_batch(50)  # take 50 samples from the training set
    train_loss2, train_op2 = sess.run([loss, train_op], {input_x: batch[0], output_y: batch[1]})
    if i % 100 == 0:
        test_accuracy = sess.run(accuracy, {input_x: test_x, output_y: test_y})
        print(test_accuracy, train_loss2)
## Test: print 20 pairs of predicted and true values
test_output = sess.run(logits, {input_x: test_x[:20]})
inference_y = np.argmax(test_output, 1)
print(inference_y, 'Inferred numbers')            # predicted digits
print(np.argmax(test_y[:20], 1), 'Real numbers')  # true digits (axis 1 is needed here too)
sess.close()
1 Answer
Oscar
2019-07-15
Is it really that strange? It should be:

# Select 3,000 handwritten digit images and their labels from the Test dataset
test_x = mnist.test.images[:3000]  # images
test_y = mnist.test.labels[:3000]  # labels

The two slices have to be the same length.