## Difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits

Answer from < Olivier Moindrot >
Having two different functions is a convenience, as they produce the same result.

The difference is simple:
For sparse_softmax_cross_entropy_with_logits, labels must have the shape [batch_size] and the dtype int32 or int64. Each label is an int in range [0, num_classes-1].
For softmax_cross_entropy_with_logits, labels must have the shape [batch_size, num_classes] and dtype float32 or float64.

The labels used in softmax_cross_entropy_with_logits are the one-hot version of the labels used in sparse_softmax_cross_entropy_with_logits.

Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.

Answer from < 全意 >
In sparse_softmax_cross_entropy_with_logits, labels takes raw integer class labels.

In softmax_cross_entropy_with_logits, labels takes one-hot labels.
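
A minimal sketch (TF1 API; the logits and labels are toy values assumed for illustration) showing that the two calls produce the same per-example losses:

```python
import tensorflow as tf

# Toy values (assumed): 3 classes, batch of 2.
logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 1.5, 0.3]])
sparse_labels = tf.constant([0, 1])                # shape [batch_size], int32
dense_labels = tf.one_hot(sparse_labels, depth=3)  # shape [batch_size, num_classes], float32

sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)
dense_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=dense_labels, logits=logits)

with tf.Session() as sess:
    print(sess.run([sparse_loss, dense_loss]))  # the two arrays match
```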

## What does global_step mean in TensorFlow?

Answer from < maddin25 and martianwars >

global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one. Have a look at optimizer.minimize().

You can get the global_step value using tf.train.global_step(). Also handy are the utility methods tf.train.get_global_step or tf.train.get_or_create_global_step.

0 is the initial value of the global step in this context.

global_step refers to the number of batches seen by the graph. If the dataset contains 100 examples and batch_size is set to 10, then one epoch ends every 10 batches.
An epoch is one full pass over the dataset.
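
A minimal sketch, assuming the TF1 API, of wiring a global_step variable into minimize(); the dummy loss and learning rate are placeholders for your own:

```python
import tensorflow as tf

global_step = tf.Variable(0, trainable=False, name='global_step')

# Dummy loss over a throwaway variable, just so minimize() has something to do.
loss = tf.reduce_mean(tf.square(tf.Variable([1.0, 2.0])))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# Passing global_step here makes minimize() increment it by one per run.
train_op = optimizer.minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op)
    print(sess.run(global_step))                    # -> 3 (batches seen so far)
    print(tf.train.global_step(sess, global_step))  # same value via the utility
```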

## Web Solution

Answer from < mrry >

The FailedPreconditionError arises because the program is attempting to read a variable (named “Variable_1”) before it has been initialized. In TensorFlow, all variables must be explicitly initialized by running their “initializer” operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:
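
```python
import tensorflow as tf

# tf.global_variables_initializer() groups the initializers of every
# variable created so far (tf.initialize_all_variables() in very old releases).
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)  # run this once, before the training loop
    # ... training loop: sess.run(train_op), etc. ...
```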

This exception is most commonly raised when running an operation that reads a tf.Variable before it has been initialized.

## My Case

In my case, when I wrote the __init__ function of class Model, I created more variables after initializing the saver with tf.global_variables(). It looked like this:

```python
import tensorflow as tf

class Model(object):
    def __init__(self, hparams):
        self.hparams = hparams
        """ some variables init """
        # The saver only captures the variables that exist at this point.
        self.saver = tf.train.Saver(tf.global_variables(),
                                    max_to_keep=self.hparams.max_to_keep)
        self.init_embeddings()

    def init_embeddings(self):
        """ some variables init """
```


When saving variables, the saver is unaware of the variables created later in init_embeddings, so after the restore step those variables cannot be recovered from the checkpoint files. Using them then throws a FailedPreconditionError such as `tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value beta1_power`, in which beta1_power is the unlucky one (beta1_power is a slot variable created by tf.train.AdamOptimizer, so it too only came into existence after the saver was constructed).
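
One way to avoid this, as a minimal sketch under the same skeleton as above, is to construct the Saver only after every variable has been created:

```python
import tensorflow as tf

class Model(object):
    def __init__(self, hparams):
        self.hparams = hparams
        """ some variables init """
        self.init_embeddings()
        # Build the saver last, after all variables exist, so that
        # tf.global_variables() also returns the embedding variables.
        self.saver = tf.train.Saver(tf.global_variables(),
                                    max_to_keep=self.hparams.max_to_keep)

    def init_embeddings(self):
        """ some variables init """
```

If an optimizer such as Adam is built later still, the same rule applies: create the Saver after the optimizer, otherwise slot variables like beta1_power will be missing from the checkpoint.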