What is the relationship between loss and val_loss in Keras?
Not sure what the relationship between loss and val_loss is in Keras? In short, loss is the loss computed on the training data during each epoch, while val_loss is the same loss computed on the held-out validation data (supplied through validation_data or validation_split in model.fit); watching the gap between the two is the usual way to spot overfitting. The rest of this article collects a few related notes on how Keras loss functions are fed their inputs.
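As a minimal sketch of where the two numbers come from (the toy data, model, and validation_split value below are illustrative assumptions, not from the original post):

import numpy as np
from tensorflow import keras

# toy data and model, purely for illustration
X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# 20% of the data is held out; Keras then reports loss (training) and val_loss (validation) each epoch
history = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
print(history.history['loss'])      # training loss per epoch
print(history.history['val_loss'])  # validation loss per epoch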
How to pass input values to a loss function
Keras is quite heavily encapsulated and the example in the official docs is rather opaque, but the answer turned up on Stack Overflow:
You can wrap the loss function as an inner function and pass your input tensor to it (as is commonly done when passing additional arguments to a loss function).
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

def custom_loss_wrapper(input_tensor):
    # the inner function closes over input_tensor, so the extra argument is available when the loss is computed
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')
You can verify that the loss value changes as different X is passed to the model, since input_tensor feeds directly into the loss.
import numpy as np

X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
model.test_on_batch(X, y)  # => 1.1974642
X *= 1000
model.test_on_batch(X, y)  # => 511.15466
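On tf.keras (TensorFlow 2.x), model.add_loss is a common alternative for losses that depend on an input tensor; the sketch below is not from the original answer and may behave differently across Keras versions, it just illustrates the same idea:

import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(10,))
hidden = layers.Dense(100, activation='relu')(inp)
out = layers.Dense(1, activation='sigmoid')(hidden)
model = Model(inp, out)
# the input-dependent term is attached directly to the model ...
model.add_loss(tf.reduce_mean(inp))
# ... and the compiled loss only needs y_true / y_pred
model.compile(loss='binary_crossentropy', optimizer='adam')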
fit_generator
fit_generator ultimately calls train_on_batch, which allows x to be a dictionary. It can also be a list, in which case x is expected to map 1:1 to the inputs defined in Model(input=[in1, …], …):
### generator
yield [inputX_1, inputX_2], y

### model
model = Model(inputs=[inputX_1, inputX_2], outputs=...)
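Put together, a minimal sketch of a two-input model fed from a generator might look like the following (the layer sizes, batch size, and random data are illustrative assumptions, not from the original post):

import numpy as np
from keras.layers import Input, Dense, concatenate
from keras.models import Model

in1 = Input(shape=(10,))
in2 = Input(shape=(5,))
merged = concatenate([in1, in2])
out = Dense(1, activation='sigmoid')(merged)
model = Model(inputs=[in1, in2], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')

def batch_generator(batch_size=32):
    # each yielded x is a list that maps 1:1 to the model's inputs [in1, in2]
    while True:
        x1 = np.random.rand(batch_size, 10)
        x2 = np.random.rand(batch_size, 5)
        y = np.random.randint(2, size=batch_size)
        yield [x1, x2], y

model.fit_generator(batch_generator(), steps_per_epoch=10, epochs=2)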
Additional note: depending on which loss function you pick in Keras, the labels passed to model.fit can be either one-hot vectors or plain integer labels.
Without further ado, here is the code:
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# plt.figure()
# plt.imshow(train_images[0])
# plt.colorbar()
# plt.grid(False)
# plt.show()

train_images = train_images / 255.0
test_images = test_images / 255.0

# plt.figure(figsize=(10, 10))
# for i in range(25):
#     plt.subplot(5, 5, i + 1)
#     plt.xticks([])
#     plt.yticks([])
#     plt.grid(False)
#     plt.imshow(train_images[i], cmap=plt.cm.binary)
#     plt.xlabel(class_names[train_labels[i]])
# plt.show()

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # with loss='sparse_categorical_crossentropy', the labels below would not need to be one-hot encoded; integer labels could be used directly
              metrics=['accuracy'])

one_hot_train_labels = keras.utils.to_categorical(train_labels, num_classes=10)
model.fit(train_images, one_hot_train_labels, epochs=10)

one_hot_test_labels = keras.utils.to_categorical(test_labels, num_classes=10)
test_loss, test_acc = model.evaluate(test_images, one_hot_test_labels)
print('\nTest accuracy:', test_acc)

# predictions = model.predict(test_images)
# predictions[0]
# np.argmax(predictions[0])
# test_labels[0]
If loss='categorical_crossentropy', the labels passed as the second argument to fit must be one-hot encoded, whereas with loss='sparse_categorical_crossentropy' the labels do not need to be converted to one-hot vectors; the integer labels can be used directly.
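For comparison, the sparse variant of the training above would look roughly like this (reusing the model and the Fashion-MNIST arrays from the previous snippet; this variant itself is not in the original code):

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # integer labels, no to_categorical needed
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy:', test_acc)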