Hi, I love your work and the open-source code, and I have a question about the loss function. I read your paper, and it says the loss function is MFE and MSFE, but I found this in the code:
loss_is = []
for i in range(logits.get_shape().as_list()[-1]):  # logits: [batch, time, num_classes], e.g. [128, None, 7]
    # Mask that is 1.0 where the target equals class i and 0.0 elsewhere
    class_fill_targets = tf.fill(tf.shape(targets), i)
    weights_i = tf.cast(tf.equal(targets, class_fill_targets), tf.float32)
    # Cross-entropy over the sequence, restricted to class i, kept per batch element
    loss_is.append(tf.contrib.seq2seq.sequence_loss(
        logits, targets, weights_i, average_across_batch=False))
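To make the question concrete, here is how I would guess the per-class losses are then combined into MFE/MSFE; this combination step is my own sketch, not code from the repository:

# My own sketch (not from the repo): combining the per-class losses into MFE / MSFE.
# Assumes loss_is[c] is the per-batch cross-entropy restricted to class c, as above.
per_class_error = [tf.reduce_mean(l) for l in loss_is]    # FE_c: mean error on class c
mfe = tf.add_n(per_class_error)                           # MFE  = sum_c FE_c
msfe = tf.add_n([tf.square(e) for e in per_class_error])  # MSFE = sum_c FE_c^2

Is something like this happening elsewhere in the code, or is the cross-entropy used directly?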
I looked this function up: it computes cross-entropy loss, not MFE or MSFE. My other question is about the weights_i parameter: does the weight equal i itself, e.g., 7 when i = 7?
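To check my reading, here is a toy example of what weights_i seems to evaluate to (the inputs and expected values are my own assumption, not output from the repository):

import tensorflow as tf

targets = tf.constant([[2, 0, 2]])  # hypothetical integer targets, shape [1, 3]
i = 2
weights_i = tf.cast(tf.equal(targets, tf.fill(tf.shape(targets), i)), tf.float32)
# weights_i should evaluate to [[1., 0., 1.]]: a 0/1 indicator of class i,
# rather than the value i itself

Is that the intended behavior?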
If I have misunderstood something, could you help me clarify? Thanks a lot!