Local Variable Referenced Before Assignment Python Class Name

This is because, even though the name exists at the module (global) level, you're also using an assignment statement on that same name inside of the function (on its last line). Naturally, this creates a variable of that name inside the function's scope (truthfully, an augmented assignment such as += or -= only updates (reassigns) an existing variable, but Python still treats it as an assignment for scoping purposes). The Python interpreter sees this when the function body is compiled (at module load time) and decides (correctly so) that the global scope's name should not be used inside the local scope, which leads to a problem when you try to reference the variable before it is locally assigned.

Using global variables, outside of necessity, is usually frowned upon by Python developers, because it leads to confusing and problematic code. However, if you'd like to use them to accomplish what your code is implying, you can simply add a global statement naming the variable at the top of your function. This tells Python that you don't intend to define a new local variable with that name inside the function's scope; the interpreter sees the declaration when the function is compiled and decides (correctly so) to look up any references to that name in the global scope.
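For example, a minimal sketch (the variable name counter is made up for illustration, not taken from the code in question):

counter = 0            # module-level (global) variable

def increment():
    global counter     # without this, 'counter += 1' would raise
                       # UnboundLocalError inside the function
    counter += 1

increment()
print(counter)         # 1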

Some Resources

  • The Python website has a great explanation for this common issue.
  • Python 3 offers a related nonlocal statement - check that out as well (see the sketch after this list).
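A brief sketch of that Python 3 nonlocal statement, which plays the same role for enclosing (non-global) function scopes; the names here are invented for illustration:

def make_counter():
    count = 0
    def bump():
        nonlocal count   # rebind 'count' in the enclosing function's scope
        count += 1
        return count
    return bump

bump = make_counter()
print(bump())   # 1
print(bump())   # 2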

General Questions¶

Is there a source code level debugger with breakpoints, single-stepping, etc.?¶

Yes.

The pdb module is a simple but adequate console-mode debugger for Python. It is part of the standard Python library, and is documented in the Library Reference Manual. You can also write your own debugger by using the code for pdb as an example.
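For instance, a minimal sketch of dropping into pdb at a particular line (buggy_function is a made-up name):

import pdb

def buggy_function(items):
    total = 0
    for item in items:
        pdb.set_trace()   # execution pauses here; inspect variables, 'n' steps, 'c' continues
        total += item
    return total

buggy_function([1, 2, 3])

You can also run an entire script under the debugger with python -m pdb myscript.py.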

The IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle), includes a graphical debugger.

PythonWin is a Python IDE that includes a GUI debugger based on pdb. The Pythonwin debugger colors breakpoints and has quite a few cool features such as debugging non-Pythonwin programs. Pythonwin is available as part of the Python for Windows Extensions project and as a part of the ActivePython distribution (see https://www.activestate.com/activepython).

Boa Constructor is an IDE and GUI builder that uses wxWidgets. It offers visual frame creation and manipulation, an object inspector, many views on the source like object browsers, inheritance hierarchies, doc string generated html documentation, an advanced debugger, integrated help, and Zope support.

Eric is an IDE built on PyQt and the Scintilla editing component.

Pydb is a version of the standard Python debugger pdb, modified for use with DDD (Data Display Debugger), a popular graphical debugger front end. Pydb can be found at http://bashdb.sourceforge.net/pydb/ and DDD can be found at https://www.gnu.org/software/ddd.

There are also a number of commercial Python IDEs that include graphical debuggers.

How can I create a stand-alone binary from a Python script?¶

You don’t need the ability to compile Python to C code if all you want is a stand-alone program that users can download and run without having to install the Python distribution first. There are a number of tools that determine the set of modules required by a program and bind these modules together with a Python binary to produce a single executable.

One is to use the freeze tool, which is included in the Python source tree as Tools/freeze. It converts Python byte code to C arrays; with a C compiler you can embed all your modules into a new program, which is then linked with the standard Python modules.

It works by scanning your source recursively for import statements (in both forms) and looking for the modules in the standard Python path as well as in the source directory (for built-in modules). It then turns the bytecode for modules written in Python into C code (array initializers that can be turned into code objects using the marshal module) and creates a custom-made config file that only contains those built-in modules which are actually used in the program. It then compiles the generated C code and links it with the rest of the Python interpreter to form a self-contained binary which acts exactly like your script.

Obviously, freeze requires a C compiler. There are several other utilities which don’t. One is Thomas Heller’s py2exe (Windows only) at

http://www.py2exe.org/

Another tool is Anthony Tuininga’s cx_Freeze.
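As a rough sketch of how such a tool is typically driven, assuming cx_Freeze's classic setup-script interface (the names myapp and myscript.py are placeholders):

# setup.py -- build a stand-alone executable with:  python setup.py build
from cx_Freeze import setup, Executable

setup(
    name="myapp",
    version="0.1",
    description="Stand-alone build of myscript.py",
    executables=[Executable("myscript.py")],
)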

My program is too slow. How do I speed it up?¶

That’s a tough one, in general. There are many tricks to speed up Python code; consider rewriting parts in C as a last resort.

In some cases it’s possible to automatically translate Python to C or x86 assembly language, meaning that you don’t have to modify your code to gain increased speed.

Pyrex can compile a slightly modified version of Python code into a C extension, and can be used on many different platforms.

Psyco is a just-in-time compiler that translates Python code into x86 assembly language. If you can use it, Psyco can provide dramatic speedups for critical functions.

The rest of this answer will discuss various tricks for squeezing a bit more speed out of Python code. Never apply any optimization tricks unless you know you need them, after profiling has indicated that a particular function is the heavily executed hot spot in the code. Optimizations almost always make the code less clear, and you shouldn’t pay the costs of reduced clarity (increased development time, greater likelihood of bugs) unless the resulting performance benefit is worth it.

There is a page on the wiki devoted to performance tips.

Guido van Rossum has written up an anecdote related to optimization at https://www.python.org/doc/essays/list2str.

One thing to notice is that function and (especially) method calls are rather expensive; if you have designed a purely OO interface with lots of tiny functions that don’t do much more than get or set an instance variable or call another method, you might consider using a more direct way such as directly accessing instance variables. Also see the standard profile module which makes it possible to find out where your program is spending most of its time (if you have some patience – the profiling itself can slow your program down by an order of magnitude).
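For example, a quick profiling sketch (slow_function is a stand-in for your own code):

import cProfile

def slow_function():
    total = 0
    for i in range(100000):
        total += i * i
    return total

# Prints per-function call counts and cumulative times.
cProfile.run('slow_function()')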

Remember that many standard optimization heuristics you may know from other programming experience may well apply to Python. For example it may be faster to send output to output devices using larger writes rather than smaller ones in order to reduce the overhead of kernel system calls. Thus CGI scripts that write all output in “one shot” may be faster than those that write lots of small pieces of output.
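For instance, a sketch of the "one shot" idea, buffering small strings in a list and writing them with a single call:

import sys

pieces = []
for i in range(1000):
    pieces.append("line %d\n" % i)   # accumulate small strings in memory
sys.stdout.write(''.join(pieces))    # one write call instead of a thousand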

Also, be sure to use Python’s core features where appropriate. For example, slicing allows programs to chop up lists and other sequence objects in a single tick of the interpreter’s mainloop using highly optimized C implementations. Thus to get the same effect as:

L2 = []
for i in range(3):
    L2.append(L1[i])

it is much shorter and far faster to use

L2 = list(L1[:3])  # "list" is redundant if L1 is a list.

Note that the functionally-oriented built-in functions such as map(), zip(), and friends can be a convenient accelerator for loops that perform a single task. For example to pair the elements of two lists together:

>>> zip([1, 2, 3], [4, 5, 6])
[(1, 4), (2, 5), (3, 6)]

or to compute a number of sines:

>>> map(math.sin, (1, 2, 3, 4))
[0.841470984808, 0.909297426826, 0.14112000806, -0.756802495308]

The operation completes very quickly in such cases.

Other examples include the join() and split() methods of string objects. For example if s1..s7 are large (10K+) strings then "".join([s1, s2, s3, s4, s5, s6, s7]) may be far faster than the more obvious s1+s2+s3+s4+s5+s6+s7, since the “summation” will compute many subexpressions, whereas join() does all the copying in one pass. For manipulating strings, use the replace() and the format() methods on string objects. Use regular expressions only when you’re not dealing with constant string patterns. You may still use the old % operations string % tuple and string % dictionary.
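A sketch of the difference in shape between the two approaches (parts is a made-up list of strings; timings omitted):

parts = ["spam"] * 1000

# Repeated concatenation builds a progressively larger intermediate string each time.
s = ""
for p in parts:
    s += p

# join() collects the pieces and copies them in one pass.
s = "".join(parts)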

Be sure to use the built-in list.sort() method to do sorting, and see the sorting mini-HOWTO for examples of moderately advanced usage. list.sort() beats other techniques for sorting in all but the most extreme circumstances.
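For instance, a small sketch using a key function (words is a made-up list):

words = ["banana", "Apple", "cherry"]

# Sort in place, comparing lowercased values; the key function runs once per item.
words.sort(key=str.lower)
print(words)   # ['Apple', 'banana', 'cherry']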

Another common trick is to “push loops into functions or methods.” For example suppose you have a program that runs slowly and you use the profiler to determine that a Python function ff() is being called lots of times. If you notice that ff():

def ff(x):
    ...  # do something with x computing result...
    return result

tends to be called in loops like:

list = map(ff, oldlist)

or:

for x in sequence:
    value = ff(x)
    ...  # do something with value...

then you can often eliminate function call overhead by rewriting ff() to:

def ffseq(seq):
    resultseq = []
    for x in seq:
        ...  # do something with x computing result...
        resultseq.append(result)
    return resultseq

and rewrite the two examples to list = ffseq(oldlist) and to:

for value in ffseq(sequence):
    ...  # do something with value...

Single calls to ff(x) translate to ffseq([x])[0] with little penalty. Of course this technique is not always appropriate and there are other variants which you can figure out.

You can gain some performance by explicitly storing the results of a function or method lookup into a local variable. A loop like:

for key in token:
    dict[key] = dict.get(key, 0) + 1

resolves dict.get every iteration. If the method isn’t going to change, a slightly faster implementation is:

dict_get = dict.get  # look up the method once
for key in token:
    dict[key] = dict_get(key, 0) + 1

Default arguments can be used to determine values once, at function definition time instead of at run time. This can only be done for functions or objects which will not be changed during program execution, such as replacing

def degree_sin(deg):
    return math.sin(deg * math.pi / 180.0)

with

def degree_sin(deg, factor=math.pi/180.0, sin=math.sin):
    return sin(deg * factor)

Because this trick uses default arguments for terms which should not be changed, it should only be used when you are not concerned with presenting a possibly confusing API to your users.


Core Language¶

Why am I getting an UnboundLocalError when the variable has a value?¶

It can be a surprise to get the UnboundLocalError in previously working code when it is modified by adding an assignment statement somewhere in the body of a function.

This code:

>>> x = 10
>>> def bar():
...     print x
>>> bar()
10

works, but this code:

>>> x = 10
>>> def foo():
...     print x
...     x += 1

results in an UnboundLocalError:

>>> foo()
Traceback (most recent call last):
  ...
UnboundLocalError: local variable 'x' referenced before assignment

This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope. Since the last statement in foo assigns a new value to x, the compiler recognizes it as a local variable. Consequently, when the earlier print x attempts to print the uninitialized local variable, an error results.

In the example above you can access the outer scope variable by declaring it global:

>>> x = 10
>>> def foobar():
...     global x
...     print x
...     x += 1
>>> foobar()
10

This explicit declaration is required in order to remind you that (unlike the superficially analogous situation with class and instance variables) you are actually modifying the value of the variable in the outer scope:

>>> print x
11

What are the rules for local and global variables in Python?¶

In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the function’s body, it’s assumed to be a local unless explicitly declared as global.

Though a bit surprising at first, a moment’s consideration explains this. On one hand, requiring global for assigned variables provides a bar against unintended side-effects. On the other hand, if global was required for all global references, you’d be using global all the time. You’d have to declare as global every reference to a built-in function or to a component of an imported module. This clutter would defeat the usefulness of the global declaration for identifying side-effects.
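A minimal sketch of the rule (the names are invented for illustration):

limit = 10                     # global

def check(value):
    return value < limit       # 'limit' is only referenced, so the global is used

def reset():
    global limit               # without this, 'limit = 0' would create a local
    limit = 0

print(check(5))    # True
reset()
print(check(5))    # False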

Why do lambdas defined in a loop with different values all return the same result?¶

Assume you use a for loop to define a few different lambdas (or even plain functions), e.g.:

>>> squares = []
>>> for x in range(5):
...     squares.append(lambda: x**2)

This gives you a list that contains 5 lambdas that calculate x**2. You might expect that, when called, they would return, respectively, 0, 1, 4, 9, and 16. However, when you actually try you will see that they all return 16:

>>> squares[2]()
16
>>> squares[4]()
16

This happens because x is not local to the lambdas, but is defined in the outer scope, and it is accessed when the lambda is called, not when it is defined. At the end of the loop, the value of x is 4, so all the functions now return 4**2, i.e. 16. You can also verify this by changing the value of x and see how the results of the lambdas change:

>>> x = 8
>>> squares[2]()
64

In order to avoid this, you need to save the values in variables local to the lambdas, so that they don’t rely on the value of the global x:

>>> squares = []
>>> for x in range(5):
...     squares.append(lambda n=x: n**2)

Here, n=x creates a new variable n local to the lambda and computed when the lambda is defined so that it has the same value that x had at that point in the loop. This means that the value of n will be 0 in the first lambda, 1 in the second, 2 in the third, and so on. Therefore each lambda will now return the correct result:

>>> squares[2]()
4
>>> squares[4]()
16

Note that this behaviour is not peculiar to lambdas, but applies to regular functions too.

How do I share global variables across modules?¶

The canonical way to share information across modules within a single program is to create a special module (often called config or cfg). Just import the config module in all modules of your application; the module then becomes available as a global name. Because there is only one instance of each module, any changes made to the module object get reflected everywhere. For example:

config.py:

x = 0   # Default value of the 'x' configuration setting

mod.py:

import config
config.x = 1

main.py:

import config
import mod
print config.x

Note that using a module is also the basis for implementing the Singleton design pattern, for the same reason.

What are the “best practices” for using import in a module?¶

In general, don’t use from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.

Import modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope. Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space.

It’s good practice if you import modules in the following order:

  1. standard library modules – e.g. sys, os, getopt, re
  2. third-party library modules (anything installed in Python’s site-packages directory) – e.g. mx.DateTime, ZODB, PIL.Image, etc.
  3. locally-developed modules

Only use explicit relative package imports. If you’re writing code that’s in the package.sub.m1 module and want to import package.sub.m2, do not just write import m2, even though it’s legal. Write from package.sub import m2 or from . import m2 instead.

It is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says:

Circular imports are fine where both modules use the “import <module>” form of import. They fail when the 2nd module wants to grab a name out of the first (“from module import name”) and the import is at the top level. That’s because names in the 1st are not yet available, because the first module is busy importing the 2nd.

In this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import.

It may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option.

Only move imports into a local scope, such as inside a function definition, if it’s necessary to solve a problem such as avoiding a circular import or if you are trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive because of the one time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.
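A sketch of the workaround described above, using two hypothetical modules a.py and b.py:

a.py:

import b

def spam():
    return "spam"

b.py:

def eggs():
    # The import runs only when eggs() is called, by which time module 'a'
    # has finished initializing, so the circular dependency causes no error.
    from a import spam
    return spam() + " and eggs"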

Why are default values shared between objects?¶

This type of bug commonly bites neophyte programmers. Consider this function:

def foo(mydict={}):  # Danger: shared reference to one dict for all calls
    ... compute something ...
    mydict[key] = value
    return mydict

The first time you call this function, mydict contains a single item. The second time, mydict contains two items because when foo() begins executing, mydict starts out with an item already in it.

It is often expected that a function call creates new objects for default values. This is not what happens. Default values are created exactly once, when the function is defined. If that object is changed, like the dictionary in this example, subsequent calls to the function will refer to this changed object.

By definition, immutable objects such as numbers, strings, tuples, and None, are safe from change. Changes to mutable objects such as dictionaries, lists, and class instances can lead to confusion.

Because of this feature, it is good programming practice to not use mutable objects as default values. Instead, use None as the default value and inside the function, check if the parameter is None and create a new list/dictionary/whatever if it is. For example, don’t write:

def foo(mydict={}):
    ...

but:

def foo(mydict=None):
    if mydict is None:
        mydict = {}  # create a new dict for local namespace

This feature can be useful. When you have a function that’s time-consuming to compute, a common technique is to cache the parameters and the resulting value of each call to the function, and return the cached value if the same value is requested again. This is called “memoizing”, and can be implemented like this:

# Callers will never provide a third parameter for this function.
def expensive(arg1, arg2, _cache={}):
    if (arg1, arg2) in _cache:
        return _cache[(arg1, arg2)]

    # Calculate the value
    result = ... expensive computation ...
    _cache[(arg1, arg2)] = result  # Store result in the cache
    return result

You could use a global variable containing a dictionary instead of the default value; it’s a matter of taste.

How can I pass optional or keyword parameters from one function to another?¶

Collect the arguments using the * and ** specifiers in the function’s parameter list; this gives you the positional arguments as a tuple and the keyword arguments as a dictionary. You can then pass these arguments when calling another function by using * and **:

def f(x, *args, **kwargs):
    ...
    kwargs['width'] = '14.3c'
    ...
    g(x, *args, **kwargs)

In the unlikely case that you care about Python versions older than 2.0, use apply():

def f(x, *args, **kwargs):
    ...
    kwargs['width'] = '14.3c'
    ...
    apply(g, (x,) + args, kwargs)

What is the difference between arguments and parameters?¶

Parameters are defined by the names that appear in a function definition, whereas arguments are the values actually passed to a function when calling it. Parameters define what types of arguments a function can accept. For example, given the function definition:

def func(foo, bar=None, **kwargs):
    pass

foo, bar and kwargs are parameters of func. However, when calling func, for example:

func(42, bar=314, extra=somevar)

the values 42, 314, and somevar are arguments.

Why did changing list ‘y’ also change list ‘x’?¶

If you wrote code like:

>>> x = []
>>> y = x
>>> y.append(10)
>>> y
[10]
>>> x
[10]

you might be wondering why appending an element to y changed x too.

There are two factors that produce this result:

  1. Variables are simply names that refer to objects. Doing y = x doesn’t create a copy of the list – it creates a new variable y that refers to the same object x refers to. This means that there is only one object (the list), and both x and y refer to it.
  2. Lists are mutable, which means that you can change their content.

After the call to y.append(10), the content of the mutable object has changed from [] to [10]. Since both the variables refer to the same object, using either name accesses the modified value [10].

If we instead assign an immutable object to x:

>>> x = 5  # ints are immutable
>>> y = x
>>> x = x + 1  # 5 can't be mutated, we are creating a new object here
>>> x
6
>>> y
5

we can see that in this case x and y are not equal anymore. This is because integers are immutable, and when we do x = x + 1 we are not mutating the int 5 by incrementing its value; instead, we are creating a new object (the int 6) and assigning it to x (that is, changing which object x refers to). After this assignment we have two objects (the ints 6 and 5) and two variables that refer to them (x now refers to 6 but y still refers to 5).

Some operations (for example y.append(10) and y.sort()) mutate the object, whereas superficially similar operations (for example y + [10] and sorted(y)) create a new object. In general in Python (and in all cases in the standard library) a method that mutates an object will return None to help avoid getting the two types of operations confused. So if you mistakenly write y = y.sort() thinking it will give you a sorted copy of y, you’ll instead end up with None, which will likely cause your program to generate an easily diagnosed error.

However, there is one class of operations where the same operation sometimes has different behaviors with different types: the augmented assignment operators. For example, += mutates lists but not tuples or ints (a_list += [1, 2, 3] is equivalent to a_list.extend([1, 2, 3]) and mutates a_list, whereas some_tuple += (1, 2, 3) and some_int += 1 create new objects).

In other words:

  • If we have a mutable object (list, dict, set, etc.), we can use some specific operations to mutate it and all the variables that refer to it will see the change.
  • If we have an immutable object (str, int, tuple, etc.), all the variables that refer to it will always see the same value, but operations that transform that value into a new value always return a new object.

If you want to know if two variables refer to the same object or not, you can use the is operator, or the built-in function id().
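A short sketch of the sort()/sorted() distinction mentioned above:

values = [3, 1, 2]

print(sorted(values))    # [1, 2, 3] -- a new list; 'values' is unchanged
print(values)            # [3, 1, 2]

result = values.sort()   # sorts in place and returns None
print(result)            # None
print(values)            # [1, 2, 3]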

How do I write a function with output parameters (call by reference)?¶

Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there’s no alias between an argument name in the caller and callee, and so no call-by-reference per se. You can achieve the desired effect in a number of ways.

  1. By returning a tuple of the results:

    This is almost always the clearest solution.

    def func2(a, b):
        a = 'new-value'        # a and b are local names
        b = b + 1              # assigned to new objects
        return a, b            # return new values

    x, y = 'old-value', 99
    x, y = func2(x, y)
    print x, y                 # output: new-value 100
  2. By using global variables. This isn’t thread-safe, and is not recommended.

  3. By passing a mutable (changeable in-place) object:

    def func1(a):
        a[0] = 'new-value'     # 'a' references a mutable list
        a[1] = a[1] + 1        # changes a shared object

    args = ['old-value', 99]
    func1(args)
    print args[0], args[1]     # output: new-value 100
  4. By passing in a dictionary that gets mutated:

    def func3(args):
        args['a'] = 'new-value'        # args is a mutable dictionary
        args['b'] = args['b'] + 1      # change it in-place

    args = {'a': 'old-value', 'b': 99}
    func3(args)
    print args['a'], args['b']
  5. Or bundle up values in a class instance:

    There’s almost never a good reason to get this complicated.

    class callByRef:
        def __init__(self, **args):
            for (key, value) in args.items():
                setattr(self, key, value)

    def func4(args):
        args.a = 'new-value'       # args is a mutable callByRef
        args.b = args.b + 1        # change object in-place

    args = callByRef(a='old-value', b=99)
    func4(args)
    print args.a, args.b

Your best choice is to return a tuple containing the multiple results.

How do you make a higher order function in Python?¶

You have two choices: you can use nested scopes or you can use callable objects. For example, suppose you wanted to define linear(a, b) which returns a function that computes the value a*x+b. Using nested scopes:

def linear(a, b):
    def result(x):
        return a * x + b
    return result

Or using a callable object:

class linear:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a * x + self.b

In both cases,

taxes = linear(0.3, 2)

gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.

The callable object approach has the disadvantage that it is a bit slower and results in slightly longer code. However, note that a collection of callables can share their signature via inheritance:

class exponential(linear):
    # __init__ inherited
    def __call__(self, x):
        return self.a * (x ** self.b)

An object can encapsulate state for several methods:

class counter:
    value = 0
    def set(self, x):
        self.value = x
    def up(self):
        self.value = self.value + 1
    def down(self):
        self.value = self.value - 1

count = counter()
inc, dec, reset = count.up, count.down, count.set

Here inc(), dec() and reset() act like functions which share the same counting variable.

How can my code discover the name of an object?¶

Generally speaking, it can’t, because objects don’t really have names. Essentially, assignment always binds a name to a value; the same is true of def and class statements, but in that case the value is a callable. Consider the following code:

>>> class A:
...     pass
...
>>> B = A
>>> a = B()
>>> b = a
>>> print b
<__main__.A instance at 0x16D07CC>
>>> print a
<__main__.A instance at 0x16D07CC>

Arguably the class has a name: even though it is bound to two names and invoked through the name B, the created instance is still reported as an instance of class A. However, it is impossible to say whether the instance’s name is a or b, since both names are bound to the same value.

Generally speaking it should not be necessary for your code to “know the names” of particular values. Unless you are deliberately writing introspective programs, this is usually an indication that a change of approach might be beneficial.

In comp.lang.python, Fredrik Lundh once gave an excellent analogy in answer to this question:

The same way as you get the name of that cat you found on your porch: the cat (object) itself cannot tell you its name, and it doesn’t really care – so the only way to find out what it’s called is to ask all your neighbours (namespaces) if it’s their cat (object)…

….and don’t be surprised if you’ll find that it’s known by many names, or no name at all!

What’s up with the comma operator’s precedence?¶

Comma is not an operator in Python. Consider this session:

>>> "a" in "b", "a"
(False, 'a')

Since the comma is not an operator, but a separator between expressions, the above is evaluated as if you had entered:

("a" in "b"), "a"

not:

"a" in ("b", "a")

The same is true of the various assignment operators (=, += etc). They are not truly operators but syntactic delimiters in assignment statements.

Is there an equivalent of C’s “?:” ternary operator?¶

Yes, this feature was added in Python 2.5. The syntax would be as follows:

[on_true] if [expression] else [on_false]

x, y = 50, 25
small = x if x < y else y

For versions previous to 2.5 the answer would be ‘No’.

Is it possible to write obfuscated one-liners in Python?¶

Yes. Usually this is done by nesting lambda within lambda. See the following three examples, due to Ulf Bartelt:

# Primes < 1000
print filter(None, map(lambda y: y*reduce(lambda x, y: x*y != 0,
    map(lambda x, y=y: y % x, range(2, int(pow(y, 0.5)+1))), 1), range(2, 1000)))

# First 10 Fibonacci numbers
print map(lambda x, f=lambda x, f: (f(x-1, f) + f(x-2, f)) if x > 1 else 1:
    f(x, f), range(10))

# Mandelbrot set
print (lambda Ru, Ro, Iu, Io, IM, Sx, Sy: reduce(lambda x, y: x+y, map(lambda y,
    Iu=Iu, Io=Io, Ru=Ru, Ro=Ro, Sy=Sy, L=lambda yc, Iu=Iu, Io=Io, Ru=Ru, Ro=Ro,
    i=IM, Sx=Sx, Sy=Sy: reduce(lambda x, y: x+y, map(lambda x, xc=Ru, yc=yc,
    Ru=Ru, Ro=Ro, i=i, Sx=Sx, F=lambda xc, yc, x, y, k, f=lambda xc, yc, x, y,
    k, f: (k <= 0) or (x*x + y*y >= 4.0) or 1 + f(xc, yc, x*x - y*y + xc,
    2.0*x*y + yc, k-1, f): f(xc, yc, x, y, k, f):
