mxODBC Version 1.0.1
The mxODBC package provides a nearly 100% Python DB API compliant interface to databases that are accessible via the ODBC API. This can either be done through an ODBC manager, e.g. the one that comes with Windows, or iODBC which is a free ODBC manager for Unix now maintained by OpenLink, or directly by linking to the database's ODBC driver.
Since ODBC is a widely supported standard for accessing databases, it should in general be possible to adapt the package to any ODBC 2.0 compliant database driver / manager. All you have to do is change the include directives to point to the specific files for your database and maybe set some macro switches to disable ODBC APIs that your driver does not provide. See the installation section for details.
As of version 0.8 the package supports multiple database interfacing, meaning that you can access, e.g., two different databases from within one process. Included are several preconfigured subpackages for a wide range of common databases.
Note: Unlike most of the other database modules available for Python, this package uses the new date & time types provided by another package I wrote, mxDateTime, eliminating the problems you normally face when handling dates before 1.1.1970 and after 2038. This also makes the module Year 2000 safe.
The module tries to adhere to the Python DB API Version 1.0 in most details (an updated, but yet unapproved, version 1.1 is included in the archive).
There are some minor deviations, though:

- The standard (DB API compliant) constructor ODBC() has been renamed to Connect(); an alias ODBC() is kept to remain in sync with the API specification.
- Error values have the format (sqlstate, sqltype, errortext, lineno), where lineno refers to the line number in the mxODBC.c file (to ease debugging the module).
- The API specification does not define what the cursor.fetchxxx() methods return in case no further data is available; mxODBC uses the following conventions: cursor.fetchone() returns None, cursor.fetchmany() returns an empty list, and cursor.fetchall() also returns an empty list.
Since the ODBC API is very rich in terms of accessing information about what is stored in the database, the module makes several additional cursor methods available. These and several other useful extensions are explained in the next section.
Apart from the standard methods, mxODBC also offers some extra methods which provide more information to the user by giving access to many of the underlying ODBC API functions.
dbc = Connect(datasource,userid,passwd[,clear_auto_commit=1])

Normally, auto-commit is turned off by the constructor. If given, clear_auto_commit overrides the default behaviour (false: don't clear the flag and use the database default; true: clear the flag). Note that a compile time switch (DONT_CLEAR_AUTOCOMMIT) allows changing this default behaviour so that the flag is not cleared. Use the connection method db.setconnectoption(SQL.AUTOCOMMIT, SQL.AUTOCOMMIT_ON|OFF|DEFAULT) (see below) to later set the behaviour to your needs.
dbc = ODBC(datasource,userid,passwd[,clear_auto_commit=1])

DB API compliant alias for Connect().

dbc = DriverConnect(DSN_string[,clear_auto_commit=1])

Connects using an ODBC DSN string. See Connect() for comments on clear_auto_commit. Note that this API is only available if the interface was compiled with HAVE_SQLDriverConnect defined. See the installation section for details.
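DriverConnect() takes its parameters packed into a single DSN string of semicolon-separated KEY=value pairs. A tiny helper (hypothetical, not part of mxODBC) sketches how such a string can be assembled:

```python
def build_dsn_string(**params):
    # Join keyword arguments into an ODBC connection string of the
    # form "KEY=value;KEY=value;...".
    return ";".join("%s=%s" % (k, v) for k, v in params.items())

# A DSN string as it could be passed to DriverConnect():
dsn = build_dsn_string(DSN="MYDB", UID="user", PWD="passwd")
```

The exact keywords understood depend on the ODBC driver; DSN, UID and PWD are common ones.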
dbc.setconnectoption(option,value)

Option codes are available through the SQL class (see below), e.g. SQL.AUTOCOMMIT corresponds to the SQL_AUTOCOMMIT option in C. The method is a direct interface to the ODBC SQLSetConnectOption() function. Please refer to the ODBC documentation for more information. Note that while the API function also supports setting character fields, the method currently does not know how to handle these.
Note for ADABAS users: Adabas can emulate several different SQL dialects. They have introduced an option for this to be set. These are the values you can pass: 1 = ADABAS, 2 = DB2, 3 = ANSI, 4 = ORACLE, 5 = SAPR3. The option code is SQL.CONNECT_OPT_DRVR_START + 2 according to the documentation.
dbc.getconnectoption(option)

Option codes are available through the SQL class (see below). The method returns the data as a 32-bit integer. It is up to the caller to decode the integer.
dbc.getinfo(info_id)

Info IDs are available through the SQL class (see below). The method returns a tuple (integer, string) giving a 32-bit integer decoding of the API's result as well as the raw buffer data as a string. It is up to the caller to decode the data (e.g. using the struct module). This API gives you a very wide range of information about the underlying database and its capabilities. See the ODBC documentation for more information.
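As a sketch of the decoding left to the caller, here is how the raw buffer part of a getinfo() result could be unpacked with the struct module (the buffer contents below are made up; the real layout depends on the info_id queried):

```python
import struct

# Stand-in for the raw buffer string returned by dbc.getinfo(): here we
# pack a little-endian signed 16-bit integer ourselves.
raw = struct.pack("<h", 3)

# Decode the first two bytes back into a Python integer.
(value,) = struct.unpack("<h", raw[:2])
```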
dbc.cursor([name])

Note: This is an extension to the DB-API spec, which does not allow any arguments to the constructor.
dbc.datetimeformat

Sets/reads the output format for date/time/timestamp columns. Possible values:

DATETIME_DATETIMEFORMAT [default]
TIMEVALUE_DATETIMEFORMAT
TUPLE_DATETIMEFORMAT
STRING_DATETIMEFORMAT

I strongly suggest always using the DateTime/DateTimeDelta instances.

dbc.closed

Accessing the connection after it has been closed causes a ProgrammingError to be raised. This variable can be used to conveniently test for this state.
Note that not all databases support all of these ODBC APIs. They can be selectively switched off to adapt the interface to the underlying ODBC driver.
All of the following catalog methods use the same interface: they do an implicit call to cursor.execute() and return their output in the form of a list of rows that can be obtained with the fetch methods in the usual way. The methods always return the number of rows in the result set. Please refer to the ODBC documentation for more information.
cursor.tables(qualifier,owner,table,type)
cursor.tableprivileges(qualifier,owner,table)
cursor.columns(qualifier,owner,table,column)
cursor.columnprivileges(qualifier,owner,table,column)
cursor.foreignkeys(primary_qualifier,primary_owner,primary_table,
foreign_qualifier,foreign_owner,foreign_table)
cursor.primarykeys(qualifier,owner,table)
cursor.procedures(qualifier,owner,procedure)
cursor.procedurecolumns(qualifier,owner,procedure,column)
cursor.specialcolumns(qualifier,owner,table,coltype,scope,nullable)
cursor.statistics(qualifier,owner,table,unique,accuracy)
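The calling pattern shared by all catalog methods (implicit execute, row count as return value, results via the fetch methods) can be sketched like this; the stub cursor merely stands in for a real mxODBC cursor, and its canned result row is made up:

```python
def list_tables(cursor, owner=None):
    # Catalog methods implicitly call cursor.execute() and return the
    # number of rows in the result set ...
    nrows = cursor.tables(None, owner, None, None)
    # ... which is then fetched in the usual way.
    return nrows, cursor.fetchall()

class StubCursor:
    # Minimal stand-in mimicking the cursor.tables() interface.
    def tables(self, qualifier, owner, table, type):
        self._rows = [(None, owner, "test", "TABLE", None)]
        return len(self._rows)
    def fetchall(self):
        return self._rows

nrows, rows = list_tables(StubCursor(), owner="user")
```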
cursor.setcursorname(name)
cursor.getcursorname()
cursor.datetimeformat

Works like the dbc.datetimeformat instance variable and defaults to the creating connection object's setting for datetimeformat.
I should also warn you: if you plan to write cross-database applications, use these methods with care, since at least some of the databases I know don't support certain APIs or return misleading results. Also, be sure to check the correct operation of the methods and execute calls. I don't want to see you losing your data due to some error I made, or due to a buggy ODBC driver for your DB (like mine :-().
The module uses a more detailed set of exceptions to do error reporting than defined in the DB API 1.0 (these are proposed extensions for the 1.1 versions of the specification).
Error, error

You can use this class to catch all errors related to database or interface failures. error is just an alias to Error needed for DB-API 1.0 compatibility. Error is a subclass of exceptions.StandardError.
Warning
Warning is a subclass of exceptions.StandardError. This may change in a future release to some other baseclass indicating warnings.
InterfaceError
DataError
OperationalError
IntegrityError
InternalError
ProgrammingError
A hint for troubles with Warning exceptions: if some function does not work because it always raises a Warning exception, you can either turn off warning generation completely by recompiling the module with the DONT_REPORT_WARNINGS flag (see Setup[.in] for explanations), or step through the source to find the exact location where the exception occurs and replace the Py_SQLCheck() with Py_SQLErrorCheck(). If you do the latter, please inform me of the change so that I can include it in future versions.
The module defines a class called SQL which is initialized with flags and other integer values defined in the ODBC standard header files. E.g. the flag value of SQL_AUTOCOMMIT is made available through SQL.AUTOCOMMIT.

If you pass None as a value where a string would be expected, that entry is passed as NULL to the underlying API. This is because some APIs use the empty string '' as a special value.
Besides the SQL class, the module exposes the errorclass and sqltype mappings (see the Datatypes section below). Type codes are provided for the following SQL types: CHAR, VARCHAR, LONGVARCHAR, BINARY, VARBINARY, LONGVARBINARY, TINYINT, SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, BIT, REAL, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP.
To simplify debugging the module, I added debugging output in several important places. The feature is only enabled if the module is compiled with -DMAL_DEBUG, and output is only generated if Python is run in debugging mode (use the interpreter flag '-d'). The debugging log file is named mxODBC.log. It will be created in the current working directory; messages are always appended to the file, so no trace is lost until you explicitly erase the log file. If the log file cannot be opened, the module will use stderr for reporting.
Note that the debug version of the module is almost as fast as the regular build, so you might as well leave it enabled.
mxODBC itself is written in a thread safe way. There are no module globals that are written to and thus no locking is necessary. Many of the underlying ODBC SQL function calls are wrapped by macros unlocking the global Python interpreter lock before doing the call and regaining that lock directly afterwards. The most prominent of those are the connection APIs and the execute and fetch APIs.
So in general when using a separate database connection for each thread, you shouldn't run into threading problems. If you do, it is more likely that the ODBC driver is not 100% thread safe and thus not 100% ODBC compatible. Note that having threads share cursors is not a good idea: there are many very strange transaction related problems you can then run into.
Unlocking the interpreter lock during long SQL function calls gives your application more responsiveness. This is especially important for GUI based applications, since no other Python thread can run when the global lock is acquired by one thread.
Note: mxODBC will only support threading if you have built Python itself with thread support enabled. Python for Windows has this enabled per default. Try python -c "import thread" to find out.
Thanks to Andy Dustman for bringing this to my attention. If you do run into mxODBC related threading problems, feel free to contact me.
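The "separate database connection for each thread" advice above can be packaged with the standard threading module; the connect factory below is a dummy, where a real application would pass something like a lambda wrapping ODBC.MySQL.Connect():

```python
import threading

class PerThreadConnections:
    # Hands out one connection per thread using thread-local storage,
    # so threads never share connections (or cursors).
    def __init__(self, connect):
        self._local = threading.local()
        self._connect = connect
    def get(self):
        if not hasattr(self._local, "conn"):
            self._local.conn = self._connect()
        return self._local.conn

# Demo with a dummy factory instead of a real Connect() call:
made = []
pool = PerThreadConnections(lambda: made.append(1) or object())

def worker():
    conn = pool.get()
    assert pool.get() is conn  # same thread, same connection

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```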
The following data types are supported:
SQL type | Python type | Comments |
CHAR, VARCHAR, LONGVARCHAR (LONG in SQL) | String | The conversion truncates the string at the first 0-byte or the SQL field length -- whichever comes first. The handling of special characters depends on the codepage the database uses. |
BINARY, VARBINARY, LONGVARBINARY (LONG BYTE in SQL) | String | Truncation at the SQL field length. These can contain embedded 0-bytes and other special characters. |
TINYINT, SMALLINT, INTEGER, BIT | Integer | Conversion from the Python integer (a C long) to the SQL type is left to the ODBC API of the database, so expect the usual truncations. |
BIGINT | Long Integer | Conversion to and from the Python long integer is done via string representation since there is no C type with enough precision to hold the value. Because of this, you might receive errors indicating truncation or errors because the database sent string data that cannot be converted to a Python long integer. Not all SQL databases implement this type, MySQL is one that does. |
DECIMAL, NUMERIC, REAL, FLOAT, DOUBLE | Float | Conversion from the Python float (a C double) to the SQL type is left to the ODBC API of the database, so expect the usual truncations. |
DATE | DateTime instance / ticks / (year,month,day) / Python string | While you should use DateTime instances, the module also accepts ticks (Python numbers indicating the number of seconds since the Unix epoch; these are converted to local time and then stored in the database) and tuples (year,month,day) on input. The type of the return values depends on the setting of cursor.datetimeformat. Default is to return DateTime instances. |
TIME | DateTimeDelta instance / tocks / (hour,minute,second) / Python string | While you should use DateTimeDelta instances, the module also accepts tocks (Python numbers indicating the number of seconds since 0:00:00.00) and tuples (hour,minute,second) on input. The type of the return values depends on the setting of cursor.datetimeformat. Default is to return DateTimeDelta instances. |
TIMESTAMP | DateTime instance / ticks / (year,month,day,hour,minute,second) / Python string | While you should use DateTime instances, the module also accepts ticks (Python numbers indicating the number of seconds since the epoch; these are converted to local time and then stored in the database) and tuples (year,month,day,hour,minute,second) on input. The type of the return values depends on the setting of cursor.datetimeformat. Default is to return DateTime instances. |
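The BIGINT transfer via string representation described in the table can be illustrated in plain Python (the concrete value is just an example; it is the largest signed 64-bit value a BIGINT column can hold):

```python
# No portable C integer type holds the full 64-bit range, so mxODBC moves
# BIGINT values through their string representation instead.
big = 2**63 - 1               # 9223372036854775807
as_string = str(big)          # the form that travels over the interface
round_tripped = int(as_string)
```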
The above SQL types are provided by the module as SQL type code integers, available as attributes of the class SQL, so that you can decode the value in cursor.description by comparing it to one of those constants. A reverse mapping of integer codes to code names is provided by the dictionary sqltype.
Note: You may run into problems when using the tuple versions for date/time/timestamp arguments. This is because some databases (noteably MySQL) want these arguments to be passed as strings. mxODBC does the conversion internally but tuples turn out as: '(1998,4,6)' which it will refuse to accept. The solution: use DateTime[Delta] instances instead. These convert themselves to ISO dates/times which most databases (including MySQL) do understand.
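The problem described in the note is easy to reproduce in plain Python: converting a tuple with str() does not yield anything a database would parse as a date, while an ISO-formatted string (which DateTime instances produce by themselves) does. The formatting below is a hand-rolled stand-in for that conversion:

```python
date_tuple = (1998, 4, 6)

# What a generic string conversion produces -- not a valid SQL date literal:
as_string = str(date_tuple)

# An ISO date like the one DateTime instances convert themselves to:
iso = "%04i-%02i-%02i" % date_tuple
```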
While you should always try to use the above Python types for passing input values to the respective columns, the package will try to automatically convert the types you give into the proper ones, e.g. passing the integer literal '123' works just as well as passing the integer 123 directly: int() will do the conversion for you. Same for floats. It is still better to pass the "right" types, as no extra conversion is done in this case. (You shouldn't rely on this feature in case you intend to move to another DB-API compliant database module.)
Sorry, I can't give you any more elaborate scripts at the
moment:
projects/mxODBC> python
Python 1.5 (#159, Dec 15 1997, 13:25:26) [GCC 2.7.2.1] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import ODBC.MySQL
>>> db = ODBC.MySQL.Connect('host:MYDB','user','passwd')
>>> c = db.cursor()
>>> c.execute('select count(*) from test')
>>> c.fetchone()
(305,)
>>> c.close()
>>> db.close()
>>>
[ODBC]
    [Adabas]    dbi.py  showdb.py  Doc/
    [Informix]  dbi.py
    [MySQL]     dbi.py
    [Oracle]    dbi.py
    [Solid]     dbi.py
    [Sybase]    dbi.py
    [Windows]   dbi.py
    [iODBC]     dbi.py
    [mxODBC]    dbi.py

Entries enclosed in brackets are packages (i.e. they are directories that include an __init__.py file). Names with trailing slashes are just simple subdirectories that are not accessible via import.
Installation

First, you will have to install another extension called mxDateTime. Be sure to always fetch the latest release of both packages, since I always synchronize the two whenever something changes. If you only update one of them, you may run into problems later.

After that is installed and running, download the archive and unzip it to a directory on your Python path. If you don't want to use the new date & time types or still use Python 1.4, then you'll have to stick to version 0.5.
You might also want to download the ODBC reference manual from SolidTech, which I used to develop this package.

Note: The next section contains database specific installation notes. You may want to read those first before continuing the setup.

Next, follow the steps below for each of the subpackages that you intend to use. If none of them fits your database configuration, create a new directory MyDatabase first and proceed as follows (please send me the modified Setup and mxODBC.h files for inclusion in future releases).
Unix:

Instructions for compiling the C extension(s) on Unix platforms:

1. If you are setting up a new database subpackage, copy all files from the mxODBC directory to the subpackage directory, then edit the mxODBC.h header file and include the appropriate header files for your database.
2. Run make -f Makefile.pre.in boot.
3. Fix the include directories, libs and lib paths in Setup (this file uses the same syntax as the Modules/Setup file that you edit to configure Python); the file also contains some database specific hints.
4. Run make.

If you get an error like 'unresolved symbol: SQLxxx', try adding a '-DDONT_HAVE_SQLxxx' flag to the setup line in Setup and recompile (make clean; make).
Windows:
Please also see the section below on how to interface to the
Windows ODBC Manager; especially the
defines mentioned there are important.
Instructions for compiling the C extension for use with the
Windows ODBC Manager using VC++ (adapted from generic
instructions by Gordon McMillan):
You now have a new package called ODBC with subpackages for each of your ODBC databases. Accessing a particular database is done by calling the connection constructor of the database subpackage, e.g. dbc = ODBC.Adabas.Connect('host:DB','user','passwd'). You can use multiple databases at the same time using this mechanism.
If you plan to use the dbi.py abstraction module for a particular database, you can access the specific version for that database by importing ODBC.<database name>.dbi, e.g. ODBC.Adabas.dbi.
Please post any bug reports, questions etc. to db-sig@python.org (see the Python Website for details on how to subscribe) or mail them directly to me.
Special notes regarding the included database subpackages

This section includes some specific notes for preconfigured setups. If you are on WinXX you should always use the ODBC.Windows subpackage and access the databases through the ODBC Driver Manager. The other packages provide Unix based interfaces to the databases.
In general you should always check the path setting in the
corresponding Setup file. I can't automate that
process, sorry.
The SuSE Linux distribution ships with a free personal
edition of Adabas (available in form of RPMs
from SuSE). A commercial
version is also available, though I'd suggest first trying
the personal edition.
If you want to trim down the interface module size, try linking against a shared version of the static ODBC driver libs. You can create a pseudo-shared lib by telling the linker to wrap the static ones into a single shared one:

ld -shared --whole-archive odbclib.a libsqlrte.a libsqlptc.a \
   -lncurses -o /usr/local/lib/libodbc.so
Note: The ADABAS ODBC driver returns microseconds in
the timestamp fraction field. Because of this the Setup
includes a define to do the conversion to seconds using a
microseconds scale instead of the ODBC standard nanosecond
scale (see the history section for
more details on this problem).
The module has been tested under Linux 2 with Adabas D 6.1.1 Linux Edition.
MySQL is a SQL database for Unix and Windows platforms
developed by TCX. It is free
for most types of usage (see their FAQ for details) and
offers good performance and stability.
There is one particularity with the ODBC driver for MySQL: all input parameters are requested as strings, even integers and floats. mxODBC does the needed conversions for you, but they could obviously lead to loss of precision (the interface uses str() to do the conversion, so you can test how the data that gets passed to the driver will look).
Note that although MyODBC includes the free Unix ODBC
manager iODBC, this package interfaces directly to the MySQL
driver. Use the iODBC package if you intend to connect via
the ODBC manager.
Since MySQL does not support transactions, clearing the auto-commit flag on connections (which is normally done per default by the connection constructors) will not work. The subpackage simply uses auto-commit mode as default. You can turn this work-around off by editing MySQL/Setup and removing the switch DONT_CLEAR_AUTOCOMMIT, should the feature become available.
Important: The combination MySQL version 3.21.33 + MyODBC-2.50.17 has some serious memory leaks. mxODBC contains a possible workaround for this, but it's probably wiser to upgrade. TCX advises using MySQL version 3.22 + MyODBC-2.50.19.
Solid Tech. offers a free
personal edition of their database for Linux in addition
to the standard server and webserver licenses. More
information about prices, licenses and downloads is
available on their website.
BTW: The Solid Server's low-level database API uses ODBC as
interface standard (most other vendors have proprietary
interfaces), so mxODBC should deliver the best performance
possible.
Note: The Solid ODBC driver leaves out some of the
ODBC 2.0 catalog functions. The missing ones are:
SQLTablePrivileges, SQLColumnPrivileges, SQLForeignKeys,
SQLProcedures, SQLProcedureColumns. You won't be able to use
the corresponding cursor methods.
The setup for Solid was kindly donated by Andy Dustman from
ComStar Communications Corp. He also found a long standing
bug that needed fixing.
Sybase for Unix doesn't ship with Unix ODBC drivers. You can
get them from Intersolv or OpenLink though (see below for
URLs).
The included setup is for the Intersolv evaluation drivers
and Sybase Adaptive Server 11.5 and was kindly donated by
Gary Pennington from Sun Microsystems. It was tested on
Solaris 2.6.
To use the OpenLink driver setup instead copy
Setup.in to Setup and enable the OpenLink
section in Setup before compiling.
Oracle for Unix doesn't ship with Unix ODBC drivers. You
can get them from Intersolv or OpenLink though (see below for
URLs).
Once you have installed the ODBC drivers following the
vendor's instructions, run make -f Makefile.pre.in
boot in the Oracle/ subdirectory, enable the
appropriate set of directives in Setup and then run
make to finish the compilation.
Using Intersolv drivers is reported to work. Shawn Dyer (irin.com) has kindly provided the setup for this combination and some additional notes. They also set the following environment variables:

LD_LIBRARY_PATH= both the Oracle lib path and the Intersolv library path
ODBCINI= the odbc.ini file in the Intersolv install
Once you talk to the Intersolv odbc driver, it seems to be
a simple matter of setting up the ODBC data source name in
their .ini file that has that stuff. At that point you can
talk to any of their ODBC drivers you have installed.
To use the OpenLink driver setup instead copy
Setup.in to Setup and enable the OpenLink
section in Setup before compiling.
Informix for Unix doesn't come with Unix ODBC drivers, but there are a few sources for these: Informix sells the driver under the term "Informix CLI"; Intersolv and OpenLink also support Informix through their driver suites (see below for URLs).
Note: There is also a free Informix SDK available
for a few commercial Unix platforms like HP-UX and
Solaris. It includes the needed ODBC libs and header files
(named infxcli.h and libifsql.a).
Once you have installed the ODBC drivers following the
vendor's instructions, run make -f Makefile.pre.in
boot in the Informix/ subdirectory, enable the
appropriate set of directives in Setup and then run
make to finish the compilation.
To use the OpenLink driver setup instead copy
Setup.in to Setup and enable the OpenLink
section in Setup before compiling.
mxODBC compiles on Windows using VC++ and links against the
Windows ODBC driver manager. The necessary import libs and
header files are included in the VC++ package but are also
available for free in the Microsoft ODBC
SDK. Note that the latter is usually more up-to-date.
Compiling the module has to be done in the usual VC++ way (see the instructions above), producing a DLL named mxODBC.pyd. All necessary files are located in the Windows/ subdirectory of the package, the main target being mxODBC.cpp. You should pass the following defines to the compiler:

-DHAVE_SQLDriverConnect
-DMS_ODBC_MANAGER
The archive already contains a precompiled version of mxODBC
provided courtesy of Stephen Ng (version 0.9.0; up-to-date
versions are always welcome). He is using the module to
interface to MS SQL Server.
Martin Sckopke reported that when connecting to the database
through the ODBC manager, no 'host:' prefix to the DSN is
necessary. He has the module running on Windows NT and is
interfacing to ADABAS D and Oracle 8.0.x without problems.
Notes:
Use the DriverConnect() API to connect to the data source if
you need to pass in extra configuration information such as
names of log files, etc.
If you have installed the win32 extensions by Mark Hammond
et al. you'll run into a naming collision: there already is
an odbc module (all lowercase letters) in the distribution
that could be loaded instead of the ODBC package (all
uppercase letters) depending on your configuration.
AFAIK, there are at least three ways to change this:
mxODBC compiles against the modified iODBC version shipped
with the ODBC driver for MySQL (see notes above). You can
get it from TCX for free.
It should also compile against the original version by Ke
Jin which is now maintained and distributed under the LGPL
by OpenLink (follow the link above).
Note: Use the DriverConnect() API to connect to the
data source if you need to pass in extra configuration
information such as names of log files, etc.
I tested the interface with iODBC-2.12 as included in the
MyODBC-2.50.17 package on Linux using MySQL and ADABAS.
ODBC drivers and managers are usually compiled as a shared
library. When running CGI scripts most HTTP daemons (aka web
servers) don't pass through the path for the dynamic loader
(e.g. LD_LIBRARY_PATH) to the script, thus importing the
mxODBC C extension will fail with unresolved symbols because
the loader doesn't find the ODBC driver/manager's libs.
To have the loader find the path to those shared libs you
can either wrap the Python script with a shell script that
sets the path according to your system configuration or tell
the HTTP daemon to set or pass these through (see the
daemon's documentation for information on how to do this;
for Apache the directives are named SetEnv and
PassEnv).
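For Apache, the corresponding httpd.conf fragment might look like this (the library path is an assumption; use whatever your ODBC driver installation actually requires):

```apache
# Hand the loader path to CGI scripts explicitly:
SetEnv LD_LIBRARY_PATH /usr/lib/odbc
# Or pass the variable through from the daemon's own environment:
PassEnv LD_LIBRARY_PATH
```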
If you want to run mxODBC in a Unix environment and your database doesn't provide a Unix ODBC driver, you can try the drivers sold by Intersolv.
They have an evaluation package available. To see if their
ODBC driver package supports your client/server setup check
these two matrices: server
based or client
based solutions.
Another source for commercial ODBC drivers is OpenLink. To see if
they support your client/server setup check this matrix. They
were giving away 2-client/10-connect licenses for free last
time I checked.
For a fairly large list of sources for ODBC drivers, have a look at this page provided by the CorVu Corporation. They also have some informative pages describing common database functions and operators which are helpful.
Alternatively, you could write a remote client (in Python)
that communicates with your database via a WinNT-Box. Most
databases provide Win95/NT ODBC drivers so you can use
mxODBC with the Windows ODBC manager. This method is not
exactly high-performance, but cheaper (provided you can come
up with a running version in less than a day's work, that
is...). The Python standard lib module SocketServer.py
should get you going pretty fast. Protocol and security are
up to you, of course.
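A minimal sketch of such a remote client/server pair using the standard library (socketserver is the modern name of the SocketServer.py module mentioned above; the one-line protocol is made up, and the handler returns a canned reply instead of running the statement through mxODBC on the Windows side):

```python
import socket
import socketserver
import threading

class QueryHandler(socketserver.StreamRequestHandler):
    # Toy protocol: one request line holding an SQL statement, one reply
    # line. A real handler would execute the statement via mxODBC and
    # marshal the result back; security is left out entirely here.
    def handle(self):
        statement = self.rfile.readline().strip()
        self.wfile.write(b"OK: " + statement + b"\n")

# Bind to an ephemeral port and serve from a background thread.
server = socketserver.TCPServer(("127.0.0.1", 0), QueryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
with socket.create_connection((host, port)) as sock:
    sock.sendall(b"select count(*) from test\n")
    reply = sock.makefile().readline()

server.shutdown()
server.server_close()
```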
Support

I am providing commercial support for this package through Python Professional Services Inc. If you are interested in receiving information about this service, please visit their website.
Copyright & Disclaimer

© 1997, 1998, Copyright by Marc-André Lemburg (mailto:mal@lemburg.com); All Rights Reserved.
Permission to use, copy, modify, and distribute this
software and its documentation for any purpose limited by
the restrictions put forward in the following paragraph and
without fee or royalty is hereby granted, provided that the
above copyright notice appear in all copies and that both
the copyright notice and this permission notice appear in
supporting documentation or portions thereof, including
modifications, that you make.
Redistribution of this software or modifications thereof for commercial use requires a separate license which must be negotiated with the author. As a guideline, packaging the module and documentation with other free software will be possible without fee or royalty as described above. This license does not allow you to ship this software as part of a product that you sell. Please also see the additional licensing information; this is especially important if you intend to use the product in a commercial setting.
THE AUTHOR MARC-ANDRE LEMBURG DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE
LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH
THE USE OR PERFORMANCE OF THIS SOFTWARE !
History & Future

Things that still need to be done:
Changes from 1.0.0 to 1.0.1:
Changes from 0.9.0 to 1.0.0:
The strange thing about the timestamp fractions is that the ODBC standard manuals say they represent nanoseconds, while an older SQL manual from IBM states they should in fact be microseconds. I've now adopted the ODBC point of view, but am still not sure which is right.
Since I couldn't make up my mind, I've introduced a
compile time switch to set things up to use microseconds
instead of nanoseconds: USE_MICROSECOND_FRACTIONS. See
Setup.in for more information.
Some databases seem to use the old standard too: at
least ADABAS does. The switch is defined in the ADABAS
Setup per default.
Changes from 0.8.1 to 0.9.0:
Changes from 0.8.0 to 0.8.1:
Changes from 0.7 to 0.8.1:
Changes from 0.6 to 0.7:
Changes from 0.5 to 0.6: