tf.variable_scope VS tf.name_scope

This article is based on TensorFlow 1.13 and compares the tf.variable_scope and tf.name_scope classes, starting with their initializers.

  def __init__(self,
               name_or_scope,       #Name or VariableScope; used as the name prefix for ops and variables created in this scope
               default_name=None,   #Name used only when name_or_scope is None (ignored otherwise); kept globally unique by appending _1, _2, ... on repeated entry
               values=None,         #List of input tensors (op inputs) passed to the scope
               initializer=None,    #Default initializer for all variables created in this scope
               regularizer=None,    #Default regularizer for all variables created in this scope
               caching_device=None, #Default caching device for all variables created in this scope
               partitioner=None,    #Default partitioner for all variables created in this scope
               custom_getter=None,  #Default custom getter for all variables created in this scope
               reuse=None,          #Reuse flag: True, None, or tf.AUTO_REUSE. True reuses existing variables (error if one does not exist); tf.AUTO_REUSE creates the variable if missing and reuses it otherwise; None inherits the reuse flag of the parent scope
               dtype=None,          #Default data type for variables created in this scope
               use_resource=None,   #If False, regular variables are created; if True, experimental ResourceVariables are used instead
               constraint=None,     #Optional projection function applied to a variable after it is updated by an optimizer
               auxiliary_name_scope=True): #If True, an auxiliary name scope with the same name is also opened; if False, only the variable scope is entered
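
As a quick, hedged illustration of the reuse flag described above (TF 1.x graph mode is assumed; the scope names 'demo' and 'other' and the variable name 'w' are made up for illustration):

import tensorflow as tf

# tf.AUTO_REUSE: create the variable on first use, reuse it afterwards.
with tf.variable_scope('demo', reuse=tf.AUTO_REUSE):
    w1 = tf.get_variable('w', shape=[1])
with tf.variable_scope('demo', reuse=tf.AUTO_REUSE):
    w2 = tf.get_variable('w', shape=[1])
assert w1 is w2  # same underlying variable, named 'demo/w:0'

# reuse=True: only valid when the variable already exists in the scope.
with tf.variable_scope('other', reuse=True):
    try:
        tf.get_variable('w', shape=[1])
    except ValueError:
        print("reuse=True fails: 'other/w' was never created")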

The initializer of tf.name_scope takes only three arguments:

  def __init__(self,
               name,               #Scope name, used as the name prefix for ops created in this scope
               default_name=None,  #Name used when name is None
               values=None):       #List of input tensors (op inputs)
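
A minimal sketch of how default_name behaves for both classes (the name 'block' is illustrative; tf.name_scope(None, 'block') produces the same prefixes):

import tensorflow as tf

for _ in range(2):
    with tf.variable_scope(None, default_name='block') as vs:
        x = tf.constant(1.0, name='x')
        print('{} {}'.format(vs.name, x.name))
# Expected output:
#   block block/x:0
#   block_1 block_1/x:0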

These two context managers are commonly used to control the naming (and, for variable scopes, the sharing) of variables created with tf.Variable and tf.get_variable. tf.Variable is the Variable class itself, whereas tf.get_variable is a function that creates or retrieves variables through the current VariableScope.

The two differ in how names are formed and in how repeated creation is handled:

name
  tf.Variable: influenced by both tf.name_scope and tf.variable_scope.
  tf.get_variable: influenced only by tf.variable_scope.

Repeated creation
  tf.Variable: the first creation uses the given name; each repeated creation of the same name gets the suffix _<n>, with n counting up from 1 (var, var_1, var_2, ...).
  tf.get_variable: when the variable does not exist yet it is created directly, but doing so under reuse=True raises an error (tf.AUTO_REUSE works in either case). Creating it again requires the scope's reuse flag to be True or tf.AUTO_REUSE, in which case the existing variable is returned.
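
A minimal sketch of these two behaviors (the names 'var', 'scope', and 'v' are illustrative; TF 1.x graph mode assumed):

import tensorflow as tf

# tf.Variable: repeated creation silently uniquifies the op name.
a = tf.Variable([1.0], name='var')  # var:0
b = tf.Variable([1.0], name='var')  # var_1:0
print('{} {}'.format(a.name, b.name))

# tf.get_variable: repeated creation needs an explicit reuse flag.
with tf.variable_scope('scope'):
    v = tf.get_variable('v', shape=[1])
try:
    with tf.variable_scope('scope'):  # reuse not set
        tf.get_variable('v', shape=[1])
except ValueError:
    print('duplicate creation without reuse fails')

with tf.variable_scope('scope', reuse=True):
    v_again = tf.get_variable('v', shape=[1])
assert v is v_again  # the existing variable is returned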

The following example exercises both classes together and prints the resulting names:

import tensorflow as tf

in_1 = 1
in_2 = 2
name = None   # with name=None, the default_name 'V0' below would be used instead
name = 'foo'  # an explicit name overrides default_name

v1 = tf.get_variable('foo/var1', [1])  # If this line is commented out, the scope below must use reuse=tf.AUTO_REUSE; leaving reuse unset or making it True (as scope.reuse_variables() does) raises an error because 'foo/var1' would not exist yet.
for idx in range(2):
    with tf.variable_scope(name, 'V0', [in_1, in_2]) as scope:
        with tf.name_scope("bar"):
            scope.reuse_variables()  # Valid only because 'foo/var1' already exists, i.e. thanks to v1 = tf.get_variable('foo/var1', [1]) above
            #tf.get_variable_scope().reuse_variables()  # Equivalent alternative; also valid only if the variable already exists
            v1 = tf.get_variable("var1", [1])
            v2 = tf.Variable(initial_value=[2.], name='var2', dtype=tf.float32)
            v3 = tf.Variable(initial_value=[2.], name='var2', dtype=tf.float32)

    with tf.Session() as sess:
        print('##ID: {}##'.format(idx))
        print('v1 name: {}'.format(v1.name))
        print('v2 name: {}'.format(v2.name))
        print('v3 name: {}'.format(v3.name))


The results are as follows:
##ID: 0##
v1 name: foo/var1:0        #The name of a tf.get_variable variable is prefixed only by tf.variable_scope
v2 name: foo/bar/var2:0    #The name of a tf.Variable variable is prefixed by both tf.variable_scope and tf.name_scope
v3 name: foo/bar/var2_1:0  #'var2' already exists in the name scope foo/bar, so the name is uniquified with the suffix _1
##ID: 1##
v1 name: foo/var1:0        #scope.reuse_variables() is in effect, so the existing variable is simply returned
v2 name: foo_1/bar/var2:0  #Re-entering the variable scope 'foo' by name opens a fresh auxiliary name scope, uniquified to foo_1; tf.Variable names follow the name scope
v3 name: foo_1/bar/var2_1:0
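
If the foo_1 prefix in the second iteration is unwanted, the TF 1.x documentation describes re-entering a captured VariableScope with auxiliary_name_scope=False and restoring its original name scope; a sketch of that pattern:

import tensorflow as tf

with tf.variable_scope('foo') as vs:
    pass

# Re-enter the captured scope; auxiliary_name_scope=False avoids creating 'foo_1'.
with tf.variable_scope(vs, auxiliary_name_scope=False) as vs1:
    with tf.name_scope(vs1.original_name_scope):  # restore the original 'foo/' name scope
        v = tf.get_variable('v', [1])
        c = tf.constant([1.0], name='c')
        print(v.name)  # foo/v:0
        print(c.name)  # foo/c:0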

 

