Cursor
The Cursor object represents a database cursor, which is typically used to manage the context of a fetch operation. Database cursors map to ODBC HSTMTs. Cursors created from the same connection are not isolated, i.e. any changes made to the database by one cursor are immediately visible to the other cursors. Note, cursors do not manage database transactions; transactions are committed and rolled back from the connection.
The read-only description attribute is a list of 7-item tuples, one tuple for each column returned by the last SQL SELECT statement. Each tuple contains:
- column name (or alias, if specified in the SQL)
- type code
- display size (pyodbc does not set this value)
- internal size (in bytes)
- precision
- scale
- nullable (True/False)
This attribute will be None for operations that do not return rows or if one of the execute methods has not been called. The 'type code' value is the class type used to create the Python objects when reading rows. For example, a varchar column's type will be str.
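For example, a minimal sketch of inspecting the metadata of a result set, assuming a hypothetical users table:
cursor.execute("select user_id, user_name from users")
for col in cursor.description:
    print(col[0], col[1])  # column name and type code, e.g. user_name <class 'str'>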
The messages attribute contains any descriptive messages generated by the query during processing, as described in PEP-0249. Typically, these messages include PRINT statements and logs. The attribute is a list of tuples. The first element in each tuple is the type of the message (similar to pyodbc error messages). The second element contains the text of the message. For example (on SQL Server):
cnxn = pyodbc.connect(cnxn_str, autocommit=True)
crsr = cnxn.cursor()
crsr.execute("PRINT 'Hello world!'")
print(crsr.messages)
Results in:
[('[01000] (0)', '[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Hello world!')]
As you can see, the text of the message is prefixed with source information. This is because the messages are retrieved as "diagnostic records" from the driver (i.e. using the SQLGetDiagRec function).
The rowcount attribute is the number of rows modified by the last SQL statement.
This is -1 if no SQL has been executed or if the number of rows is unknown. Note that it is not uncommon for databases to report -1 immediately after a SQL select statement for performance reasons. (The exact number may not be known before the first records are returned to the application.)
The execute() method prepares and executes a SQL statement, returning the Cursor object itself. The optional parameters may be passed as a sequence, as specified by the DB API, or as individual values.
# standard
cursor.execute("select a from tbl where b=? and c=?", (x, y))
# pyodbc extension
cursor.execute("select a from tbl where b=? and c=?", x, y)
The return value is always the cursor itself:
for row in cursor.execute("select user_id, user_name from users"):
    print(row.user_id, row.user_name)
row = cursor.execute("select * from tmp").fetchone()
rows = cursor.execute("select * from tmp").fetchall()
count = cursor.execute("update users set last_logon=? where user_id=?", now, user_id).rowcount
count = cursor.execute("delete from users where user_id=1").rowcount
As suggested in the DB API, the last prepared statement is kept and reused if you execute the same SQL again, making executing the same SQL with different parameters more efficient.
With fast_executemany=False (the default), executemany() executes the same SQL statement for each set of parameters, returning None. The single params parameter must be a sequence of sequences, or a generator of sequences.
params = [ ('A', 1), ('B', 2) ]
cursor.executemany("insert into t(name, id) values (?, ?)", params)
This will execute the SQL statement twice, once with ('A', 1) and once with ('B', 2). That is, the above code is essentially equivalent to:
params = [ ('A', 1), ('B', 2) ]
for p in params:
    cursor.execute("insert into t(name, id) values (?, ?)", p)
Hence, running executemany() with fast_executemany=False is generally not going to be much faster than running multiple execute() commands directly.
Note, after running executemany(), the number of affected rows is NOT available in the rowcount attribute.
Also, be careful if autocommit is True. In this scenario, the provided SQL statement will be committed for each and every record in the parameter sequence, so if an error occurs part-way through processing, you will end up with some of the records committed in the database and the rest not, and it may not be easy to tell which records have been committed. Hence, you may want to consider setting autocommit to False (and explicitly commit() / rollback()) to make sure either all the records are committed to the database or none are, e.g.:
try:
    cnxn.autocommit = False
    params = [ ('A', 1), ('B', 2) ]
    cursor.executemany("insert into t(name, id) values (?, ?)", params)
except pyodbc.DatabaseError as err:
    cnxn.rollback()
else:
    cnxn.commit()
finally:
    cnxn.autocommit = True
With fast_executemany=True, executemany() executes the SQL statement for the entire set of parameters, returning None. The single params parameter must be a sequence of sequences, or a generator of sequences.
params = [ ('A', 1), ('B', 2) ]
cursor.fast_executemany = True
cursor.executemany("insert into t(name, id) values (?, ?)", params)
Here, all the parameters are sent to the database server in one bundle (along with the SQL statement), and the database executes the SQL against all the parameters as one database transaction. Hence, this form of executemany() should be much faster than the default executemany(). However, there are limitations to it; see fast_executemany for more details.
Note, after running executemany(), the number of affected rows is NOT available in the rowcount attribute.
Under the hood, there is one important difference when fast_executemany=True. In that case, on the client side, pyodbc converts the Python parameter values to their ODBC "C" equivalents, based on the target column types in the database. E.g., a string-based date parameter value of "2018-07-04" is converted to a C date type binary value by pyodbc before sending it to the database. When fast_executemany=False, that date string is sent as-is to the database and the database does the conversion. This can lead to some subtle differences in behavior depending on whether fast_executemany is True or False.
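As an illustrative sketch of that difference, assuming a hypothetical table t with a DATE column d:
params = [('2018-07-04',)]

cursor.fast_executemany = False
cursor.executemany("insert into t(d) values (?)", params)  # the date string is sent as-is; the database converts it

cursor.fast_executemany = True
cursor.executemany("insert into t(d) values (?)", params)  # pyodbc converts the string to a C date value first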
fetchone() returns the next row in the query, or None when no more data is available.
A ProgrammingError exception is raised if no SQL has been executed or if it did not return a result set (e.g. was not a SELECT statement).
cursor.execute("select user_name from users where user_id=?", userid)
row = cursor.fetchone()
if row:
    print(row.user_name)
fetchval() is a pyodbc extension that returns the first column of the first row if there are results. For more info, see Features beyond the DB API.
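Because execute() returns the cursor, fetchval() is convenient for single-value queries, e.g. (assuming the users table from the other examples):
count = cursor.execute("select count(*) from users").fetchval()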
fetchall() returns a list of all the remaining rows in the query.
Since this reads all rows into memory, it should not be used if there are a lot of rows. Consider iterating over the rows instead. However, it is useful for freeing up a Cursor so you can perform a second query before processing the resulting rows.
A ProgrammingError exception is raised if no SQL has been executed or if it did not return a result set (e.g. was not a SELECT statement).
cursor.execute("select user_id, user_name from users where user_id < 100")
rows = cursor.fetchall()
for row in rows:
    print(row.user_id, row.user_name)
fetchmany(size=cursor.arraysize) returns a list of remaining rows, containing no more than size rows, and is used to process results in chunks. The list will be empty when there are no more rows. The default for cursor.arraysize is 1, which is no different from calling fetchone().
Do not include the size= keyword when calling this method with an ad-hoc array size. Simply use
rows = crsr.fetchmany(3)
because
rows = crsr.fetchmany(size=3)
will throw "TypeError: fetchmany() takes no keyword arguments".
A ProgrammingError exception is raised if no SQL has been executed or if it did not return a result set (e.g. was not a SELECT statement).
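A typical chunked-processing loop looks something like this (the batch size of 1000 is arbitrary, and users is the hypothetical table from the earlier examples):
cursor.execute("select user_id, user_name from users")
while True:
    rows = cursor.fetchmany(1000)  # note: positional argument only
    if not rows:
        break
    for row in rows:
        print(row.user_id, row.user_name)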
commit() commits all SQL statements executed on the connection that created this cursor, since the last commit/rollback.
This affects all cursors created by the same connection!
This is no different than calling commit on the connection. The benefit is that code can simply use the cursor and does not have to track the connection.
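For example, reusing the hypothetical table t from the executemany() examples:
cursor.execute("insert into t(name, id) values (?, ?)", 'C', 3)
cursor.commit()  # same effect as cnxn.commit()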
rollback() rolls back all SQL statements executed on the connection that created this cursor, since the last commit/rollback.
This affects all cursors created by the same connection!
skip(count) skips the next count records in the query by calling SQLFetchScroll with SQL_FETCH_NEXT.
For convenience, skip(0) is accepted and will do nothing.
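For example, assuming the users table from the earlier examples:
cursor.execute("select user_id from users order by user_id")
cursor.skip(2)           # discard the first two rows
row = cursor.fetchone()  # this is the third row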
nextset() makes the cursor skip to the next available result set, discarding any remaining rows from the current result set. If there are no more result sets, the method returns False. Otherwise, it returns True and subsequent calls to the fetch methods will return rows from the next result set.
This method is primarily used if you have stored procedures that return multiple results.
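A minimal sketch, assuming a hypothetical stored procedure that returns several result sets:
crsr.execute("{CALL usp_multiple_results}")
while True:
    print(crsr.fetchall())  # rows from the current result set
    if not crsr.nextset():
        break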
close() closes the cursor. A ProgrammingError exception will be raised if any further operation is attempted with the cursor.
Cursors are closed automatically when they are deleted (typically when they go out of scope), so calling this is not usually necessary.
setinputsizes() is an optional method that can be used to explicitly declare the types and sizes of query parameters. For example:
sql = "INSERT INTO product (item, price) VALUES (?, ?)"
params = [('bicycle', 499.99), ('ham', 17.95)]
# specify that parameters are for NVARCHAR(50) and DECIMAL(18,4) columns
crsr.setinputsizes([(pyodbc.SQL_WVARCHAR, 50, 0), (pyodbc.SQL_DECIMAL, 18, 4)])
crsr.executemany(sql, params)
setoutputsize() is optional in the DB API and is not supported.
callproc() is not yet supported, since there is no way for pyodbc to determine which parameters are input, output, or both. You will need to call stored procedures using execute(). You can use your database's native syntax or the ODBC escape format. For more information, see the Calling Stored Procedures page.
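For example, using the ODBC escape format (usp_get_user_details is a hypothetical procedure taking one parameter):
cursor.execute("{CALL usp_get_user_details (?)}", userid)
rows = cursor.fetchall()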
tables() returns an iterator generating information about the tables in the database that match the given criteria. The table, catalog, and schema parameters interpret the '_' and '%' characters as wildcards. The escape character is driver specific, so use Connection.searchescape.
Each row has the following columns. See the SQLTables documentation for more information.
- table_cat: The catalog name.
- table_schem: The schema name.
- table_name: The table name.
- table_type: One of the string values 'TABLE', 'VIEW', 'SYSTEM TABLE', 'GLOBAL TEMPORARY', 'LOCAL TEMPORARY', 'ALIAS', 'SYNONYM', or a datasource specific type name.
- remarks: A description of the table.
for row in cursor.tables():
    print(row.table_name)

# Does table 'x' exist?
if cursor.tables(table='x').fetchone():
    print('yes it does')
columns() creates a result set of column information in the specified tables using the SQLColumns function.
Each row has the following columns:
- table_cat
- table_schem
- table_name
- column_name
- data_type
- type_name
- column_size
- buffer_length
- decimal_digits
- num_prec_radix
- nullable
- remarks
- column_def
- sql_data_type
- sql_datetime_sub
- char_octet_length
- ordinal_position
- is_nullable: One of SQL_NULLABLE, SQL_NO_NULLS, SQL_NULLABLE_UNKNOWN.
Note that some ODBC drivers may include additional column metadata. For example, the PostgreSQL ODBC driver version 13.x also includes
- display_size
- field_type
- auto_increment
- physical number
- table oid
- base typeid
- typmod
- table info
# columns in table x
for row in cursor.columns(table='x'):
    print(row.column_name)
statistics() creates a result set of statistics about a single table and the indexes associated with the table by executing SQLStatistics.
If unique is True, only unique indexes are returned; if False, all indexes are returned. If quick is True, CARDINALITY and PAGES are returned only if they are readily available; otherwise NULL is returned in those columns.
Each row has the following columns:
- table_cat
- table_schem
- table_name
- non_unique
- index_qualifier
- index_name
- type
- ordinal_position
- column_name
- asc_or_desc
- cardinality
- pages
- filter_condition
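For example, to list the indexes on a hypothetical users table:
for row in cursor.statistics('users'):
    print(row.index_name, row.column_name, row.non_unique)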
rowIdColumns() executes SQLSpecialColumns with SQL_BEST_ROWID, which creates a result set of columns that uniquely identify a row.
Each row has the following columns.
- scope: One of SQL_SCOPE_CURROW, SQL_SCOPE_TRANSACTION, or SQL_SCOPE_SESSION
- column_name
- data_type: The ODBC SQL data type constant (e.g. SQL_CHAR)
- type_name
- column_size
- buffer_length
- decimal_digits
- pseudo_column: One of SQL_PC_UNKNOWN, SQL_PC_NOT_PSEUDO, SQL_PC_PSEUDO
rowVerColumns() executes SQLSpecialColumns with SQL_ROWVER, which creates a result set of columns that are automatically updated when any value in the row is updated. Returns the Cursor object. Each row has the following columns.
- scope: One of SQL_SCOPE_CURROW, SQL_SCOPE_TRANSACTION, or SQL_SCOPE_SESSION
- column_name
- data_type: The ODBC SQL data type constant (e.g. SQL_CHAR)
- type_name
- column_size
- buffer_length
- decimal_digits
- pseudo_column: One of SQL_PC_UNKNOWN, SQL_PC_NOT_PSEUDO, SQL_PC_PSEUDO
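A brief sketch of both methods against a hypothetical users table:
# columns that uniquely identify a row
for row in cursor.rowIdColumns('users'):
    print(row.column_name, row.type_name)

# columns automatically updated when the row changes (e.g. a rowversion column)
for row in cursor.rowVerColumns('users'):
    print(row.column_name, row.type_name)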
primaryKeys() creates a result set of the column names that make up the primary key of a table by executing the SQLPrimaryKeys function.
Each row has the following columns:
- table_cat
- table_schem
- table_name
- column_name
- key_seq
- pk_name
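For example, to collect the primary key columns of a hypothetical users table:
pk_cols = [row.column_name for row in cursor.primaryKeys('users')]
print(pk_cols)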
foreignKeys(table=None, catalog=None, schema=None, foreignTable=None, foreignCatalog=None, foreignSchema=None) executes the SQLForeignKeys function and creates a result set of the columns that are foreign keys in the specified table (columns in the specified table that refer to primary keys in other tables) or foreign keys in other tables that refer to the primary key in the specified table.
Each row has the following columns:
- pktable_cat
- pktable_schem
- pktable_name
- pkcolumn_name
- fktable_cat
- fktable_schem
- fktable_name
- fkcolumn_name
- key_seq
- update_rule
- delete_rule
- fk_name
- pk_name
- deferrability
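As an illustrative sketch, assuming a hypothetical orders table and using the foreignTable parameter (the table containing the foreign key columns):
for row in cursor.foreignKeys(foreignTable='orders'):
    print(row.fkcolumn_name, '->', row.pktable_name + '.' + row.pkcolumn_name)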
procedures() executes SQLProcedures and creates a result set of information about the procedures in the data source. Each row has the following columns:
- procedure_cat
- procedure_schem
- procedure_name
- num_input_params
- num_output_params
- num_result_sets
- remarks
- procedure_type
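For example, to list all procedure names visible to the connection:
for row in cursor.procedures():
    print(row.procedure_name)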
getTypeInfo(sqlType=None) executes SQLGetTypeInfo and creates a result set with information about the specified data type, or about all data types supported by the ODBC driver if none is specified. Each row has the following columns:
- type_name
- data_type
- column_size
- literal_prefix
- literal_suffix
- create_params
- nullable
- case_sensitive
- searchable
- unsigned_attribute
- fixed_prec_scale
- auto_unique_value
- local_type_name
- minimum_scale
- maximum_scale
- sql_data_type
- sql_datetime_sub
- num_prec_radix
- interval_precision
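For example, to see how the driver describes its VARCHAR type (pyodbc.SQL_VARCHAR is one of the standard ODBC type constants):
for row in cursor.getTypeInfo(pyodbc.SQL_VARCHAR):
    print(row.type_name, row.column_size)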
Cursor objects do support the Python context manager syntax (the with statement), but it's important to understand the "context" in this scenario. The following code:
with cnxn.cursor() as crsr:
    do_stuff
is essentially equivalent to:
crsr = cnxn.cursor()
do_stuff
if not cnxn.autocommit:
    cnxn.commit()
# note, the cursor is not closed
As you can see, commit() is called on the cursor's connection even if autocommit is False. Hence, the "context" is not so much the cursor itself; rather, it's better to think of it as a database transaction that will be committed without explicitly calling commit(). Also, keep in mind the cursor is not explicitly closed when the context is exited.
If you want a context manager that does nothing more than close the cursor on exit, one option is to use the contextlib.closing utility:
from contextlib import closing
with closing(cnxn.cursor()) as crsr:
    do_stuff