tf.keras.ops.sparse_categorical_crossentropy
Computes sparse categorical cross-entropy loss.
tf.keras.ops.sparse_categorical_crossentropy(
target, output, from_logits=False, axis=-1
)
The sparse categorical cross-entropy loss is similar to categorical
cross-entropy, but it is used when the target tensor contains integer
class labels instead of one-hot encoded vectors. It measures the
dissimilarity between the target and output probabilities or logits.
Args

target
    The target tensor representing the true class labels as integers.
    Its shape should match the shape of the `output` tensor except for
    the last dimension.

output
    The output tensor representing the predicted probabilities or
    logits. Its shape should match the shape of the `target` tensor
    except for the last dimension.

from_logits
    (optional) Whether `output` is a tensor of logits or probabilities.
    Set it to `True` if `output` represents logits; set it to `False`
    if `output` represents probabilities. Defaults to `False`.

axis
    (optional) The axis along which the sparse categorical
    cross-entropy is computed. Defaults to `-1`, which corresponds to
    the last dimension of the tensors.

Returns

A tensor of the computed sparse categorical cross-entropy loss between
`target` and `output`.
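To make the `from_logits` behavior concrete, here is a minimal NumPy sketch of the math (not the Keras implementation): when `from_logits=True`, the output is first passed through a softmax along `axis`, and the loss for each sample is the negative log of the probability assigned to its true class. The function name `sparse_cce` is illustrative only.

```python
import numpy as np

def sparse_cce(target, output, from_logits=False, axis=-1):
    """Illustrative NumPy sketch of sparse categorical cross-entropy."""
    output = np.asarray(output, dtype=np.float64)
    target = np.asarray(target)
    if from_logits:
        # Numerically stable softmax: shift by the max before exponentiating.
        shifted = output - output.max(axis=axis, keepdims=True)
        output = np.exp(shifted) / np.exp(shifted).sum(axis=axis, keepdims=True)
    # Gather the predicted probability of each true class, then take -log.
    probs = np.take_along_axis(output, np.expand_dims(target, axis), axis=axis)
    return -np.log(np.squeeze(probs, axis=axis))

# With probabilities (from_logits=False), the loss is simply -log(p_true).
print(sparse_cce([0, 1], [[0.9, 0.1], [0.4, 0.6]]))
# With logits, the softmax is applied first.
print(sparse_cce([0, 1], [[2.0, 0.5], [0.3, 1.5]], from_logits=True))
```

Note that `target` carries one fewer dimension than `output`, which is why no one-hot encoding is needed.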
Example:

target = keras.ops.convert_to_tensor([0, 1, 2], dtype="int32")
output = keras.ops.convert_to_tensor(
    [[0.9, 0.05, 0.05],
     [0.1, 0.8, 0.1],
     [0.2, 0.3, 0.5]])
keras.ops.sparse_categorical_crossentropy(target, output)
array([0.10536056, 0.22314355, 0.6931472 ], shape=(3,), dtype=float32)
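As a sanity check, the three loss values above are just the negative log of the probability each row assigns to its true class (classes 0, 1, and 2 respectively). This can be verified with plain NumPy, independent of `keras.ops`:

```python
import numpy as np

# Probability assigned to the true class in each row of the example output.
true_class_probs = np.array([0.9, 0.8, 0.5])
print(-np.log(true_class_probs))
# Matches the example: approximately [0.1054, 0.2231, 0.6931].
```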
Last updated 2024-06-07 UTC.