Case 1: When inputs = outputs at model construction, but training targets have different output shapes: training succeeds
import keras
from keras import layers
import numpy as np

input_1 = layers.Input(shape=(3,))
input_2 = layers.Input(shape=(5,))
model_1 = keras.models.Model([input_1, input_2], [input_1, input_2])
print(model_1.summary())
model_1.compile(optimizer='adam', metrics=['accuracy', 'accuracy'], loss=['mse'])
# Notice: I am passing targets with different output shapes, yet training still succeeds.
model_1.fit([np.random.normal(size=(10, 3)), np.random.normal(size=(10, 5))],
            [np.random.normal(size=(10, 1)), np.random.normal(size=(10, 2))])
print('Training completed')
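One factor that can mask the mismatch (my assumption from the definition of mse, not confirmed against the Keras source) is NumPy-style broadcasting inside the loss: a (10, 1) target broadcasts cleanly against a (10, 3) prediction, so the reduction proceeds without a shape error. A minimal sketch:

# Assumed explanation: mean_squared_error computes mean(square(y_pred - y_true), axis=-1),
# and (10, 1) broadcasts against (10, 3), so no shape error is raised.
y_pred = np.random.normal(size=(10, 3))
y_true = np.random.normal(size=(10, 1))  # mismatched, but broadcastable
loss = keras.losses.mean_squared_error(y_true, y_pred)
print(loss.shape)  # (10,): the per-sample loss is computed silently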
Case 2: Same setup as Case 1, but with output shapes mismatched differently than in Case 1: an error is raised during loss calculation. I would expect the error to be raised during graph execution itself.
# With output shapes different from those the model was constructed with,
# an error is raised while calculating the loss.
# Instead, a shape-mismatch error should have been raised during graph execution.
model_1.fit([np.random.normal(size=(10, 3)), np.random.normal(size=(10, 5))],
            [np.random.normal(size=(10, 2)), np.random.normal(size=(10, 4))])
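Consistent with the error timing described above, the failure can be reproduced directly at the loss level. A sketch, under my assumption that the subtraction inside mse is the first place y_true and y_pred actually meet:

# (10, 2) cannot broadcast against the model's (10, 3) output, so the
# subtraction inside mse is where the mismatch is first detected.
y_pred = np.random.normal(size=(10, 3))
y_true = np.random.normal(size=(10, 2))  # mismatched and not broadcastable
keras.losses.mean_squared_error(y_true, y_pred)  # raises a broadcast/shape error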
Case 3: With unconnected inputs and outputs
input_1 = layers.Input(shape=(3,))
input_2 = layers.Input(shape=(5,))
input_3 = layers.Input(shape=(1,))
input_4 = layers.Input(shape=(2,))
model_2 = keras.models.Model([input_1, input_2], [input_3, input_4])
model_2.compile(optimizer='adam', metrics=['accuracy', 'accuracy'], loss=['mse'])
# Passing correctly shaped inputs and outputs still fails,
# because the outputs are not connected to the inputs.
model_2.fit([np.random.normal(size=(10, 3)), np.random.normal(size=(10, 5))],
            [np.random.normal(size=(10, 1)), np.random.normal(size=(10, 2))])
I got the error below, which is correct, but it is not useful for end users. Instead, the error should have been raised during graph construction.
    177 output_tensors = []
    178 for x in self.outputs:
--> 179     output_tensors.append(tensor_dict[id(x)])
    180
    181 return tree.pack_sequence_as(self._outputs_struct, output_tensors)

KeyError: Exception encountered when calling Functional.call().

139941182292272

Arguments received by Functional.call():
  • inputs=('tf.Tensor(shape=(None, 3), dtype=float32)', 'tf.Tensor(shape=(None, 5), dtype=float32)')
  • training=True
  • mask=('None', 'None')
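A possible user-side workaround (a sketch, not a fix in Keras itself) is to invoke the freshly built model once on dummy inputs, so the connectivity error surfaces right after construction instead of deep inside fit():

# Calling the model eagerly forces Functional.call() to run, so the
# unconnected-output KeyError is raised immediately after construction.
dummy_inputs = [np.zeros((1,) + tuple(t.shape[1:])) for t in model_2.inputs]
model_2(dummy_inputs)  # raises the KeyError shown above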
I tried to fix an issue similar to Case 3 by raising an error during graph build itself in PR #20705, where I noticed the Case 1 issue (from a failed test case). Please refer to the gist.
To my knowledge, Keras builds the computation graph in a JIT/lazy fashion, and I'm not sure addressing Case 2 is in the spirit of that paradigm. Shape validation typically happens when the model actually starts training, which is probably why the Case 2 error comes up during loss calculation rather than graph construction. Having said that, I'd love to hear if there's anything else to consider when trying to get around this, especially since I've only recently started contributing to the library.
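Given the lazy graph building described above, one user-side option is to validate target shapes against the model's declared output shapes before calling fit(). A minimal sketch, where check_target_shapes is a hypothetical helper rather than a Keras API:

# Hypothetical helper: compare each target's per-sample shape with the
# corresponding model output shape and fail fast before fit().
def check_target_shapes(model, targets):
    for out, y in zip(model.outputs, targets):
        expected = tuple(out.shape[1:])  # drop the batch dimension
        if tuple(y.shape[1:]) != expected:
            raise ValueError(
                f'Target shape {tuple(y.shape[1:])} does not match '
                f'model output shape {expected}.'
            )

check_target_shapes(model_1, [np.random.normal(size=(10, 3)),
                              np.random.normal(size=(10, 5))])  # passes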
I think there is a bug in the functional model.