# tf.compat.v1.train.get_or_create_global_step

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/training/training_util.py#L258-L326)

Last updated 2024-04-26 UTC.

Returns and creates (if necessary) the global step tensor.

    tf.compat.v1.train.get_or_create_global_step(
        graph=None
    )

Migrate to TF2
--------------

**Caution:** This API was designed for TensorFlow v1. Continue reading for details on how to migrate from this API to a native TensorFlow v2 equivalent. See the [TensorFlow v1 to TensorFlow v2 migration guide](https://www.tensorflow.org/guide/migrate) for instructions on how to migrate the rest of your code.

With the deprecation of global graphs, TF no longer tracks variables in collections. In other words, there are no global variables in TF2. Thus, the global step functions (`get_or_create_global_step`, `create_global_step`, `get_global_step`) have been removed. You have two options for migrating:

1. Create a Keras optimizer, which generates an `iterations` variable. This variable is automatically incremented when calling `apply_gradients`.
2. Manually create and increment a [`tf.Variable`](../../../../tf/Variable).
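For the second option, here is a minimal sketch of manually creating and incrementing a step counter with a `tf.Variable`. The variable, model, and loss below are illustrative, not part of this API:

```python
import tensorflow as tf

# Illustrative: track the training step yourself with a non-trainable variable.
global_step = tf.Variable(0, dtype=tf.int64, trainable=False, name="global_step")
v = tf.Variable(3.0)
optimizer = tf.keras.optimizers.SGD(0.1)

def train_step(x):
    with tf.GradientTape() as tape:
        loss = x * 5 - x * v
    grads = tape.gradient(loss, [v])
    optimizer.apply_gradients(zip(grads, [v]))
    global_step.assign_add(1)  # increment manually after each optimizer step

train_step(3.0)
print("global_step:", global_step.numpy())  # global_step: 1
```

Because the step lives in an ordinary `tf.Variable`, it can be saved and restored with a `tf.train.Checkpoint` like any other variable.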
Below is an example of migrating away from using a global step to using a Keras optimizer.

Define a dummy model and loss:

    def compute_loss(x):
      v = tf.Variable(3.0)
      y = x * v
      loss = x * 5 - x * v
      return loss, [v]

Before migrating:

    g = tf.Graph()
    with g.as_default():
      x = tf.compat.v1.placeholder(tf.float32, [])
      loss, var_list = compute_loss(x)
      global_step = tf.compat.v1.train.get_or_create_global_step()
      global_init = tf.compat.v1.global_variables_initializer()
      optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
      train_op = optimizer.minimize(loss, global_step, var_list)
    sess = tf.compat.v1.Session(graph=g)
    sess.run(global_init)
    print("before training:", sess.run(global_step))
    before training: 0
    sess.run(train_op, feed_dict={x: 3})
    print("after training:", sess.run(global_step))
    after training: 1

Migrating to a Keras optimizer:

    optimizer = tf.keras.optimizers.SGD(.01)
    print("before training:", optimizer.iterations.numpy())
    before training: 0
    with tf.GradientTape() as tape:
      loss, var_list = compute_loss(3)
    grads = tape.gradient(loss, var_list)
    optimizer.apply_gradients(zip(grads, var_list))
    print("after training:", optimizer.iterations.numpy())
    after training: 1

Description
-----------

### Used in the notebooks

| Used in the guide | Used in the tutorials |
|---|---|
| [Migrating model checkpoints](https://www.tensorflow.org/guide/migrate/migrating_checkpoints), [Debug a TensorFlow 2 migrated training pipeline](https://www.tensorflow.org/guide/migrate/migration_debugging) | [Multi-worker training with Estimator](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_estimator), [Checkpointer and PolicySaver](https://www.tensorflow.org/agents/tutorials/10_checkpointer_policysaver_tutorial), [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings), [Linear Mixed-Effect Regression in {TF Probability, R, Stan}](https://www.tensorflow.org/probability/examples/HLM_TFP_R_Stan), [Graph-based Neural Structured Learning in TFX](https://www.tensorflow.org/tfx/tutorials/tfx/neural_structured_learning) |

Args
----

- `graph`: The graph in which to create the global step tensor. If missing, the default graph is used.
Returns
-------

The global step tensor.
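As a usage note for the first migration option: in TF2, the role of the returned global step tensor is played by the optimizer's `iterations` variable, which counts `apply_gradients` calls across a whole training loop. A small sketch (the variable and loss are illustrative):

```python
import tensorflow as tf

v = tf.Variable(3.0)
optimizer = tf.keras.optimizers.SGD(0.1)

# `iterations` is created by the optimizer and incremented on every
# apply_gradients call, standing in for the old global step tensor.
for x in [1.0, 2.0, 3.0]:
    with tf.GradientTape() as tape:
        loss = x * 5 - x * v
    grads = tape.gradient(loss, [v])
    optimizer.apply_gradients(zip(grads, [v]))

print("steps:", optimizer.iterations.numpy())  # steps: 3
```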