The conflict between dataset.make_initializable_iterator() and dataset.repeat(epochs) in TF1

Source: 5-10 TF1_dataset usage

wxz123

2019-11-14

import numpy as np
import tensorflow as tf

from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)

from sklearn.preprocessing import StandardScaler

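# Standardize with a single mean/std computed over all training pixels
# (reshape(-1, 1) makes one column), then flatten each image to 784 values.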
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)

y_train = np.asarray(y_train, dtype = np.int64)
y_valid = np.asarray(y_valid, dtype = np.int64)
y_test = np.asarray(y_test, dtype = np.int64)


def make_dataset(images, labels, epochs, batch_size, shuffle = True):
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    if shuffle:
        dataset = dataset.shuffle(10000)
    # repeat(epochs) chains `epochs` passes over the data end-to-end;
    # batching is applied after repetition.
    dataset = dataset.repeat(epochs).batch(batch_size)
    return dataset


batch_size = 128
epochs = 10

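# Placeholders stand in for the real arrays, so the same dataset/iterator
# graph can be re-bound to either training or validation data whenever the
# iterator's initializer is run.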
images_placeholder = tf.placeholder(tf.float32, [None, 28 * 28])
labels_placeholder = tf.placeholder(tf.int64, (None,))

dataset = make_dataset(images_placeholder, labels_placeholder,
                       epochs = epochs,
                       batch_size = batch_size)

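# An initializable iterator must be (re-)initialized by running
# dataset_iter.initializer before get_next() can produce values.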
dataset_iter = dataset.make_initializable_iterator()
x, y = dataset_iter.get_next()


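# Fully connected network: two 100-unit hidden layers, then 10-class logits.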
hidden_units = [100, 100]
class_num = 10

input_for_next_layer = x
for hidden_unit in hidden_units:
    input_for_next_layer = tf.layers.dense(input_for_next_layer,
                                           hidden_unit,
                                           activation=tf.nn.relu)
logits = tf.layers.dense(input_for_next_layer,
                         class_num)
loss = tf.losses.sparse_softmax_cross_entropy(labels = y,
                                              logits = logits)
prediction = tf.argmax(logits, 1)
correct_prediction = tf.equal(prediction, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)



init = tf.global_variables_initializer()
train_steps_per_epoch = x_train.shape[0] // batch_size
valid_steps = x_valid.shape[0] // batch_size

def eval_with_sess(sess, images_placeholder, labels_placeholder,
                   x_valid_scaled, y_valid, accuracy, valid_steps):
    # Re-initialize the shared iterator so it now yields validation batches.
    sess.run(dataset_iter.initializer,
             feed_dict = {
                 images_placeholder: x_valid_scaled,
                 labels_placeholder: y_valid,
             })
    eval_accuracies = []
    for step in range(valid_steps):
        accuracy_val = sess.run(accuracy)
        eval_accuracies.append(accuracy_val)
    return np.mean(eval_accuracies)


with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epochs):
        # Re-initialize the iterator with training data at the start of each epoch.
        sess.run(dataset_iter.initializer,
                 feed_dict = {
                     images_placeholder: x_train_scaled,
                     labels_placeholder: y_train
                 })
        for step in range(train_steps_per_epoch):
            loss_val, accuracy_val, _ = sess.run(
                [loss, accuracy, train_op])
            print('\r[Train] epoch: %d, step: %d, loss: %3.5f, accuracy: %2.2f' % (
                epoch, step, loss_val, accuracy_val), end="")
        valid_accuracy = eval_with_sess(sess,
                                        images_placeholder, labels_placeholder,
                                        x_valid_scaled, y_valid,
                                        accuracy,
                                        valid_steps)
        print("\t[Valid] acc: %2.2f" % valid_accuracy)

Teacher, could you take a look at this code for me? In TensorFlow 1, the dataset's make_initializable_iterator() method lets the same dataset_iter be used on both the training set and the validation set, but at every epoch, before evaluating on the validation set, I have to run

sess.run(dataset_iter.initializer,
         feed_dict = {
             images_placeholder: x_valid_scaled,
             labels_placeholder: y_valid,
         })

to re-initialize dataset_iter, and then, once validation is done and the next training epoch starts, I have to run

sess.run(dataset_iter.initializer,
         feed_dict = {
             images_placeholder: x_train_scaled,
             labels_placeholder: y_train
         })

to re-initialize dataset_iter once more.
My question is: since dataset_iter has to be re-initialized anyway every time it switches between the training set and the validation set within an epoch, then in

def make_dataset(images, labels, epochs, batch_size, shuffle = True):
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    if shuffle:
        dataset = dataset.shuffle(10000)
    dataset = dataset.repeat(epochs).batch(batch_size)
    return dataset

doesn't that make the .repeat(epochs) in dataset = dataset.repeat(epochs).batch(batch_size) unnecessary?


1 Answer

正十七

2019-11-17

Hi, if you can guarantee that each validation run goes over the validation set no more than once, then this repeat(epochs) is indeed unnecessary.
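To make that concrete: re-running dataset_iter.initializer restarts the pipeline from the beginning, so each initialization provides exactly one pass over the data, and requesting more batches than one pass contains would raise tf.errors.OutOfRangeError. A minimal sketch of make_dataset without repeat (my illustration, not the course code; the call site would drop the epochs argument accordingly):

def make_dataset(images, labels, batch_size, shuffle = True):
    # No repeat(): the iterator is re-initialized before every training
    # epoch and every validation run, and each initialization starts a
    # fresh single pass over the data.
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))
    if shuffle:
        dataset = dataset.shuffle(10000)
    return dataset.batch(batch_size)

Since train_steps_per_epoch = x_train.shape[0] // batch_size and valid_steps = x_valid.shape[0] // batch_size both round down, neither loop asks for more than one pass, so the end of the dataset is never reached.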

正十七 replied to wxz123:

That should be fine. If you feel the program is bloated, you can extract it into a few independent functions; I think that would read a bit better.

2020-03-05
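As one possible sketch of that refactor (the helper names init_iterator, train_one_epoch, and evaluate are hypothetical, not from the original code):

def init_iterator(sess, images, labels):
    # Bind the shared iterator to a concrete pair of numpy arrays.
    sess.run(dataset_iter.initializer,
             feed_dict = {images_placeholder: images,
                          labels_placeholder: labels})

def train_one_epoch(sess, epoch):
    init_iterator(sess, x_train_scaled, y_train)
    for step in range(train_steps_per_epoch):
        loss_val, accuracy_val, _ = sess.run([loss, accuracy, train_op])
        print('\r[Train] epoch: %d, step: %d, loss: %3.5f, accuracy: %2.2f' % (
            epoch, step, loss_val, accuracy_val), end="")

def evaluate(sess):
    init_iterator(sess, x_valid_scaled, y_valid)
    return np.mean([sess.run(accuracy) for _ in range(valid_steps)])

The with tf.Session() block then shrinks to sess.run(init) plus a loop that calls train_one_epoch and evaluate once per epoch.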
