tf.keras.ops.image.affine_transform
Applies the given transform(s) to the image(s).
tf.keras.ops.image.affine_transform(
    image,
    transform,
    interpolation='bilinear',
    fill_mode='constant',
    fill_value=0,
    data_format='channels_last'
)
Args

image: Input image or batch of images. Must be 3D or 4D.

transform: Projective transform matrix/matrices. A vector of length 8, or a tensor of shape (N, 8). If one row of transform is [a0, a1, a2, b0, b1, b2, c0, c1], then it maps the output point (x, y) to a transformed input point (x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k), where k = c0 x + c1 y + 1. The transform is therefore inverted compared to the transform that maps input points to output points (see the worked example after this list). Gradients are not backpropagated into the transformation parameters. Note that c0 and c1 only take effect with the TensorFlow backend; other backends treat them as 0.

interpolation: Interpolation method. Available methods are "nearest" and "bilinear". Defaults to "bilinear".

fill_mode: Points outside the boundaries of the input are filled according to the given mode. Available modes are "constant", "nearest", "wrap", and "reflect". Defaults to "constant". A small demonstration appears in the Examples section below.
  "reflect": (d c b a | a b c d | d c b a)
    The input is extended by reflecting about the edge of the last pixel.
  "constant": (k k k k | a b c d | k k k k)
    The input is extended by filling all values beyond the edge with the same constant value k specified by fill_value.
  "wrap": (a b c d | a b c d | a b c d)
    The input is extended by wrapping around to the opposite edge.
  "nearest": (a a a a | a b c d | d d d d)
    The input is extended by the nearest pixel.

fill_value: Value used for points outside the boundaries of the input if fill_mode="constant". Defaults to 0.

data_format: String, either "channels_last" or "channels_first". The ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch, height, width, channels), while "channels_first" corresponds to inputs with shape (batch, channels, height, width). Defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, it will be "channels_last".
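To make the inverted (output-to-input) convention concrete, here is a minimal sketch in plain Python that evaluates the mapping formula from the transform argument by hand, for the translation row used in the examples below:

# Row layout: [a0, a1, a2, b0, b1, b2, c0, c1]. An output point (x, y)
# samples the input point ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k),
# where k = c0 x + c1 y + 1.
a0, a1, a2, b0, b1, b2, c0, c1 = [1.0, 0.0, -20.0, 0.0, 1.0, -16.0, 0.0, 0.0]
x, y = 30.0, 20.0  # an output point
k = c0 * x + c1 * y + 1.0  # k == 1 for a purely affine row
x_in = (a0 * x + a1 * y + a2) / k  # 10.0
y_in = (b0 * x + b1 * y + b2) / k  # 4.0

Output point (30, 20) samples input point (10, 4), so the image content appears shifted by (+20, +16) in the output: to translate content by (tx, ty) you pass a2 = -tx and b2 = -ty, and more generally you invert the forward transform before passing it in.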
Returns

Image or batch of images with the affine transform applied.
Examples:
x = np.random.random((2, 64, 80, 3)) # batch of 2 RGB images
transform = np.array(
    [
        [1.5, 0, -20, 0, 1.5, -16, 0, 0],  # zoom
        [1, 0, -20, 0, 1, -16, 0, 0],  # translation
    ]
)
y = keras.ops.image.affine_transform(x, transform)
y.shape
(2, 64, 80, 3)
x = np.random.random((64, 80, 3)) # single RGB image
transform = np.array([1.0, 0.5, -20, 0.5, 1.0, -16, 0, 0]) # shear
y = keras.ops.image.affine_transform(x, transform)
y.shape
(64, 80, 3)
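fill_mode and fill_value determine what out-of-bounds samples receive. A minimal sketch of the "constant" mode, using an arbitrary sentinel fill value of -1.0 (x here is the horizontal coordinate, following the convention of the transform argument):

x = np.ones((4, 4, 1))  # tiny all-ones image
transform = np.array([1.0, 0, -2.0, 0, 1.0, 0, 0, 0])  # content shifts right by 2
y = keras.ops.image.affine_transform(
    x, transform, fill_mode="constant", fill_value=-1.0
)
# Output columns 0 and 1 sample input columns -2 and -1, which lie
# outside the image, so they take fill_value:
keras.ops.convert_to_numpy(y)[0, :, 0]
array([-1., -1.,  1.,  1.])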
x = np.random.random((2, 3, 64, 80)) # batch of 2 RGB images
transform = np.array(
    [
        [1.5, 0, -20, 0, 1.5, -16, 0, 0],  # zoom
        [1, 0, -20, 0, 1, -16, 0, 0],  # translation
    ]
)
y = keras.ops.image.affine_transform(
    x, transform, data_format="channels_first"
)
y.shape
(2, 3, 64, 80)
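The same inverted convention lets you compose transforms for other operations by hand. As an illustrative sketch, a rotation about the image center can be parameterized as follows (rotation_transform is a hypothetical helper written for this example, not part of the Keras API):

import numpy as np

def rotation_transform(theta, height, width):
    # Hypothetical helper: builds the eight transform parameters for a
    # rotation of `theta` radians about the image center, in the
    # output-to-input convention used by `affine_transform`.
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    # Offset terms chosen so the center point maps to itself.
    a2 = cx - cos_t * cx + sin_t * cy
    b2 = cy - sin_t * cx - cos_t * cy
    return np.array([cos_t, -sin_t, a2, sin_t, cos_t, b2, 0.0, 0.0])

x = np.random.random((64, 80, 3))  # single RGB image
y = keras.ops.image.affine_transform(x, rotation_transform(np.pi / 6, 64, 80))
y.shape
(64, 80, 3)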