SQL Anywhere Bug Fix Readme for Version 9.0.1, build 2044
Contents
A subset of the software with one or more bug fixes. The bug fixes are
listed below. A Bug Fix update may only be applied to installed software
with the same version number.
Moderate testing has been performed on the software, but full testing has not
been performed. Customers are encouraged to verify the suitability of the software
before releasing it into a production environment.
A complete set of software that upgrades installed software from an older
version with the same major version number (version number format is
major.minor.patch). Bug fixes and other changes are listed in the "readme"
file for the upgrade.
For answers to commonly asked questions, please use the following link:
Frequently Asked Questions
This section contains a description of critical bug fixes made since the release
of version 9.0.1. If any of these bug fixes appear to affect your installation,
iAnywhere strongly recommends that you install this EBF.
================(Build #1873 - Engineering Case #357228)================
When using a database with the UTF8 collation, statements containing non-English
characters could fail with the error "Syntax error or access violation",
and Unicode bound data stored in the database could be corrupted. This problem
would affect any application using ODBC or OLEDB, including Java-based applications
using the JDBC-ODBC bridge (8.0) or iAnywhere JDBC Driver (9.0), including
DBISQL and Sybase Central.
dbmlsync was also affected.
The bug was introduced in the following versions and builds:
8.0.2 build 4409
9.0.0 build 1302
9.0.1 build 1852
This problem has been fixed.
================(Build #1873 - Engineering Case #357700)================
When using a database with the UTF8 collation, statements containing non-English
characters could fail with the error "Syntax error or access violation",
and Unicode bound data stored in the database could be corrupted.
This problem would affect any application using ODBC or OLEDB, including
Java-based applications using the JDBC-ODBC bridge (8.0) or iAnywhere JDBC
Driver (9.0), including DBISQL and Sybase Central.
dbmlsync was also affected.
The bug was introduced in the following versions and builds:
8.0.2 build 4409
9.0.0 build 1302
9.0.1 build 1852
This problem has been fixed.
================(Build #1831 - Engineering Case #348922)================
If a transaction was active when a BACKUP DATABASE statement was executed,
and the transaction subsequently rolled back, changes made by the transaction
prior to the start of the backup would not have been rolled back in the backed
up database. The contents of the transaction log would have been backed up
correctly, and this log could have been applied to an earlier copy of the
database to produce the correct results. This problem only occurred on databases
created with versions 8.0.0 or later, and does not affect client-side backups
created with DBBACKUP. Now, uncommitted transactions will be rolled back
correctly in backed up databases.
================(Build #1851 - Engineering Case #352692)================
When executing a multi-row UPDATE or DELETE statement, the server could have
behaved incorrectly. The server updated or deleted the right rows from the
underlying table, but when updating the transaction log and index entries
for an affected row, it could have used the data from the previous row modified.
This could have caused several symptoms, including:
1. Incorrect row logged into the transaction and the undo logs,
2. Spurious "Index entry not found" errors,
3. Incorrect entries deleted from indexes, and so on.
This problem has now been fixed.
================(Build #1856 - Engineering Case #351733)================
If a transaction deleted rows from a table concurrently with another transaction
inserting rows, there was a chance of database corruption. While this was
more likely to occur on multiprocessor and Unix systems, it was still possible
for it to have occurred on single processor and Windows systems. Corruption
was also possible solely with concurrent deletes, but only in very rare circumstances.
This has been corrected.
================(Build #1941 - Engineering Case #370998)================
The problem addressed by the changes for Engineering Case 323973 was reintroduced
by the changes for Case 369727.
A server crash at an inopportune moment could have resulted in a corrupt
database. This was more likely to have occurred with 9.x servers, and with
8.x servers running 8.x databases. It was unlikely to have occurred with
8.x and earlier servers when running against 7.x or earlier databases.
This has now been fixed.
================(Build #1961 - Engineering Case #374844)================
The changes for CR 364372 could have caused, in rare situations, an incorrect
result set to be returned. This has been fixed.
================(Build #2019 - Engineering Case #389039)================
The server could have failed with a "dynamic memory exhausted" error if AWE
was enabled (i.e. the -cw command line option was used). This problem has been
corrected. The only workaround is to disable AWE.
================(Build #1873 - Engineering Case #357701)================
Synchronizing to an ASA remote database using the UTF8 collation could fail
with errors or put corrupted data into the database.
The same problem would affect any application using ODBC or OLEDB, including
Java-based applications using the JDBC-ODBC bridge (8.0) or iAnywhere JDBC
Driver (9.0), including DBISQL and Sybase Central.
The bug was introduced in the following versions and builds:
8.0.2 build 4409
9.0.0 build 1302
9.0.1 build 1852
This problem has been fixed.
This section contains a description of bug fixes made since the release
of version 9.0.1.
================(Build #1816 - Engineering Case #343503)================
A "Connection was terminated" exception would have been thrown when executing
a command with a pooled connection, if the server was no longer available.
This problem has been fixed.
================(Build #1819 - Engineering Case #345221)================
If the AsaClient was in the process of issuing an error, and a code page
corresponding to the database's charset was not available, a native error
exception would have been thrown. Now, if the code page of the database is
not available, the AsaClient always returns an English error message to the
application.
================(Build #1827 - Engineering Case #347032)================
If an error occurred in the managed provider, a MessageBox was being displayed
even if the underlying application was running as a service and that service
was not running in UserInteractive mode. Now, the MessageBox is no longer
displayed in this situation.
================(Build #1832 - Engineering Case #349014)================
An application using the ADO.NET provider may have failed to connect to a
version 7.0 server. While connecting, the provider attempts to determine
the client's character set and the database's character set, so that it can
do character set translation. If the provider determined that it could convert
to the database's character set, it attempted to turn off character set conversion
on the server by sending a 'change character set' command. The 7.0 server
does not recognize this command and responds with an error. It was this
error that caused the connection failure. In fact, this error is not fatal,
and the provider will now ignore it, using the client's character set instead.
================(Build #1839 - Engineering Case #350353)================
Modifying and rebuilding an ASP.Net application, which used the Managed Provider,
would have caused the ASP.Net worker process aspnet_wp.exe, to consume 100%
CPU time. This has now been corrected.
================(Build #1843 - Engineering Case #351245)================
The following ADO.Net connection parameters were not allowed to be quoted:
- connect timeout
- connection timeout
- connection lifetime
- connection reset
- enlist
- max pool size
- min pool size
- pooling
- persist security info
This has now been corrected; these parameters can now be quoted.
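The accepted quoting can be sketched as follows. This is a hypothetical
illustration of the fixed behaviour, not the actual AsaClient parser; the
function name and the exact quote-stripping rule are assumptions.

```python
# Minimal sketch of a connection-string parser that accepts optionally
# quoted parameter values (single or double quotes), as allowed after the fix.
def parse_connection_string(s):
    params = {}
    for pair in s.split(";"):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        value = value.strip()
        # Strip one level of matching single or double quotes, if present.
        if len(value) >= 2 and value[0] == value[-1] and value[0] in ("'", '"'):
            value = value[1:-1]
        params[key.strip().lower()] = value
    return params

p = parse_connection_string('uid=dba;pwd=sql;max pool size="50";pooling=\'true\'')
```

Both `max pool size="50"` and `pooling='true'` parse to the same values as
their unquoted forms.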
================(Build #1848 - Engineering Case #352119)================
Calling the BeginTransaction() method would have failed with the error "Can
not set a temporary option for user 'DBA'", when connected to a database
using the Turkish 1254TRK collation. This problem has been fixed.
================(Build #1853 - Engineering Case #352770)================
Attempting to re-execute a query using the ADO function Requery(), would
have failed with the error "Function Sequence error".
This is illustrated by the following Visual Basic code fragment:
SQLText = "Select * from customer"
myRS.Open(SQLText, myConn, adOpenDynamic, adLockBatchOptimistic,
adCmdText)
If myRS.EOF Then
myRS.Close()
Else
myRS.MoveFirst()
End If
myRS.Requery()
This problem has now been fixed.
================(Build #1865 - Engineering Case #355474)================
When a query with multiple result sets was opened with ExecuteReader(CommandBehavior.SingleRow),
calling NextResult would always have returned false. Only a single row
from the first result set could have been fetched. This problem has been
fixed so that a single row is now fetched from each result set, which matches
the .NET specification.
================(Build #1865 - Engineering Case #355587)================
On dual CPU machines, after creating a new prepared AsaCommand and inserting
new rows in a loop, a communication error would have occurred after some
iterations.
For example, (VB.NET code):
Imports iAnywhere.Data.AsaClient
Module Module1
Sub Main()
Dim conn As AsaConnection
Dim cmd As AsaCommand
Dim i As Int32
Try
conn = New AsaConnection("uid=dba;pwd=sql;eng=asatest")
conn.Open()
for i = 1 to 2000
cmd = New AsaCommand("insert into ian values( 1 )", conn)
cmd.Prepare()
cmd.ExecuteNonQuery()
Next
Console.WriteLine("Inserted {0} rows", i)
conn.Close()
Catch e As Exception
Console.WriteLine(e.ToString())
End Try
End Sub
End Module
This problem has been fixed.
================(Build #1866 - Engineering Case #355929)================
The ASA provider could have loaded the wrong unmanaged dll, (dbdata8.dll
or dbdata9.dll), if multiple versions of ASA were installed. Now, the ASA
provider will search for the unmanaged dll and will continue searching until
it finds and loads the right one. If a matching version cannot be found,
the latest version will be loaded with a warning message.
================(Build #1876 - Engineering Case #355145)================
A .NET application, using multiple database connections through separate
threads, could have hung when updating the same table in different threads.
When this situation occurred, one thread would have been blocked in the server
(which is expected, as it was blocked against the other connection, which was
holding a lock on the table as a result of its update), and the other thread
would have appeared to be hung as well, even though it was not blocked in the
server. What was happening was that the first thread had entered a critical
section and was waiting for the server's response, while the second thread
was waiting to enter the same critical section, thus causing the application
to hang. This has been fixed.
================(Build #1878 - Engineering Case #353442)================
Calling stored procedures with long varchar or long binary output parameters,
would have resulted in the data being corrupted after 32K. The AsaClient
was always using a default maximum length of 32K. This problem has been fixed.
================(Build #1878 - Engineering Case #358333)================
A NullReferenceException could have occurred when fetching long varchar or
long binary data using the DataReader object. This problem has been fixed.
================(Build #1881 - Engineering Case #359136)================
UltraLite.NET has a simpler error system and thus does not have the ADO Errors
collection. In order to make it easier to move from UltraLite to ASA, two
new properties have been added, AsaException.NativeError and AsaInfoMessageEventArgs.
================(Build #1888 - Engineering Case #360591)================
A data reader opened with a select statement would have held a lock on the
table, even after the data reader had been closed. This has now been fixed.
================(Build #1889 - Engineering Case #360761)================
Calling the AsaDataAdapter.Fill method multiple times on the same DataTable
that had a primary key, would have caused a 'System.Data.ConstraintException'
exception on the second call. Now, if a primary key exists, incoming rows
are merged with matching rows that already exist. If no primary key exists,
incoming rows are appended to the DataTable. If primary key information is
present, any duplicate rows are reconciled and only appear once in the DataTable.
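The fixed Fill semantics can be sketched as below. This is a hypothetical
illustration only (plain dicts standing in for DataRows), not the actual
AsaDataAdapter implementation; the function name `fill` is an assumption.

```python
# Sketch: with a primary key, incoming rows merge with existing rows by key;
# without one, incoming rows are simply appended.
def fill(table, incoming, pk=None):
    if pk is None:
        table.extend(incoming)          # no key: plain append
        return table
    by_key = {row[pk]: row for row in table}
    for row in incoming:
        by_key[row[pk]] = row           # key present: reconcile by primary key
    table[:] = list(by_key.values())
    return table

rows = [{"id": 1, "name": "a"}]
fill(rows, [{"id": 1, "name": "a2"}, {"id": 2, "name": "b"}], pk="id")
# row id 1 is reconciled rather than duplicated, so a second Fill call
# no longer raises a constraint violation
```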
================(Build #1903 - Engineering Case #363177)================
An application that created multiple threads, and opened and closed pooled
connections on each thread, could possibly have had the threads become deadlocked,
causing some connections to fail, if the 'Max Pool Size' was smaller than
the number of threads. This problem has been fixed.
================(Build #1909 - Engineering Case #364573)================
It was not possible to assign an enum value to AsaParameter.Value without
an explicit cast. Now when AsaParameter.Value is set to an enum value, the
AsaParameter.AsaDbType is set to the underlying type of the enum value (Byte,
Int16, UInt16, Int32, UInt32, Int64 or UInt64) and the value is converted
to the underlying type.
================(Build #1919 - Engineering Case #366428)================
The AsaCommandBuilder class could not generate INSERT, UPDATE or DELETE
statements for parameterized queries if the parameters were not provided.
Now, if the command is a stored procedure, the AsaClient will call AsaCommandBuilder.DeriveParameters
to add parameters for the SELECT command. If the command is text, the AsaClient
will add some dummy parameters.
================(Build #1924 - Engineering Case #363211)================
A FillError exception was not thrown when an error occurred during a fill
operation of the ASADataAdapter object. Now, when an error occurs during
a fill operation, the adapter will call the FillError delegate. If Continue
was set to true, the adapter will continue the fill operation. Otherwise,
it will throw the exception.
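The FillError contract described above can be sketched as follows. This is a
hypothetical illustration of the delegate semantics, not the real adapter
code; the class and function names are assumptions.

```python
# Sketch: on a row error the adapter invokes the FillError delegate; if the
# handler sets Continue, filling proceeds past the bad row, otherwise the
# exception propagates to the caller.
class FillErrorArgs:
    def __init__(self, error):
        self.error = error
        self.Continue = False

def fill_rows(rows, convert, on_fill_error):
    filled = []
    for row in rows:
        try:
            filled.append(convert(row))
        except Exception as exc:
            args = FillErrorArgs(exc)
            on_fill_error(args)         # report the error to the delegate
            if not args.Continue:
                raise                   # delegate did not ask to continue
    return filled

def handler(args):
    args.Continue = True    # skip bad rows instead of aborting the fill

result = fill_rows(["1", "x", "3"], int, handler)
# the unconvertible row "x" is reported and skipped
```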
================(Build #1924 - Engineering Case #367464)================
The ASACommandBuilder class could not derive parameters if the stored procedure
name was quoted. Fixed by parsing the command text and using an unquoted
procedure name for deriving parameters.
================(Build #1935 - Engineering Case #369704)================
When filling a DataSet using the ASADataAdapter object, the AutoIncrement
property of DataColumn was not set properly. This has now been fixed.
================(Build #1938 - Engineering Case #370326)================
The method AsaDataReader.GetSchemaTable() may have caused an InvalidCastException
when the data reader had some unique columns and computed columns. This problem
has been fixed.
================(Build #1944 - Engineering Case #371411)================
The isolation level for a transaction was being set to 1 when the connection
was opened. Now, the isolation level is no longer set to any specific value.
The server default is the value defined for the connection by the database
option Isolation_level.
Note, this problem was introduced in the following builds:
8.0.3 build 5128
8.0.2 build 4442
9.0.2 build 2528
9.0.0 build 1333
9.0.1 build 1887
The old behavior is now restored.
================(Build #1961 - Engineering Case #374580)================
An InvalidCastException would have been thrown when filling a DataTable using
AsaDataAdapter, if a returned TIME column was mapped to a STRING column in
the DataTable.
This problem has been fixed.
================(Build #1980 - Engineering Case #378360)================
The Data Adapter did not set IsKey property to true when filling a DataTable
if the source tables had unique indexes. This problem has been fixed.
================(Build #1991 - Engineering Case #379532)================
The same command object could have been deleted twice when running in a multi-threaded
environment. This could potentially have caused a crash. The problem has
been fixed.
================(Build #2010 - Engineering Case #385349)================
When inserting Multi-byte Character Set strings, by passing them as parameters
to the AsaCommand method, the strings were not saved. This has been fixed.
================(Build #2013 - Engineering Case #386109)================
The AsaDataAdapter object was very slow when filling a DataSet or a DataTable
which had a primary key. This problem has been fixed.
================(Build #2016 - Engineering Case #387070)================
The method AsaDataAdapter.Fill may have caused an InvalidCastException
if the query returned columns which had the same name but different data
types. This problem has been fixed, but it is not recommended to use duplicate
column names when filling a DataSet.
================(Build #2027 - Engineering Case #391887)================
Calling the method AsaDataReader.GetSchema() could have generated the exception:
"ASA .NET Data Provider: Column 'table_name' not found (-143)", if the database
also had a user table named systable. This has now been fixed by qualifying
references to system tables with the SYS owner name.
================(Build #2030 - Engineering Case #392294)================
If the server was already stopped, the AsaClient would have thrown an exception
when closing the connection. The AsaClient was checking the error code when
closing the connection, and would throw an exception if the error code was
not -85 Communication error. The AsaClient now ignores errors when closing
the connection.
================(Build #1816 - Engineering Case #344020)================
Calls to the db_stop_database function would have failed if the server was
not found using the shared memory link, even if the LINKS parameter was specified
in the connection string or DSN. The LINKS parameter was being ignored.
================(Build #1818 - Engineering Case #344712)================
If a named connection was forcibly dropped by the server, due to a liveness
or idle timeout, the next attempt by the same application to make a connection
with the same name would have failed. This has been fixed.
================(Build #1821 - Engineering Case #345936)================
If an Embedded SQL or ODBC application connected with an ENG connection parameter
containing a dot, and after the dot, a back slash, forward slash, semicolon,
or ampersand, the application would have crashed. This has been fixed.
================(Build #1848 - Engineering Case #352159)================
It was possible for an application to have hung after attempting to cancel
a request. This problem would only have occurred very rarely on multiprocessor
systems, and was even less likely to have occurred on single processor systems.
This has been fixed.
================(Build #1858 - Engineering Case #354101)================
The documentation for the LDAP search_timeout parameter says that "A value
of 0 disables this option so that all entries are assumed to be current".
This was not the actual behaviour, specifying a timeout of 0 would have forced
all LDAP entries to be ignored. This has been fixed; the behaviour now matches
the documentation.
================(Build #1862 - Engineering Case #354838)================
If an error occurred on an embedded SQL EXECUTE statement, and there were
bound columns with all NULL data pointers, a communication error could have
occurred and the connection would have been dropped.
An example of bound columns with all NULL data pointers from ESQL is:
SQLDA *into_sqlda = alloc_sqlda( 1 );
into_sqlda->sqld = 1;
into_sqlda->sqlvar[0].sqltype = DT_INT;
into_sqlda->sqlvar[0].sqldata = NULL;
EXEC SQL EXECUTE stmt INTO DESCRIPTOR into_sqlda;
This has been fixed so that an error is returned and the connection is not
dropped.
================(Build #1888 - Engineering Case #360597)================
Applications attempting Shared memory connections may have hung if the server
was forcibly shut down during the connection attempt. This has been fixed.
================(Build #1896 - Engineering Case #362197)================
If a multi-threaded client application had more than one connection (on more
than one thread) logging to the same log file, the logged entries from the
connections could have been mixed up. Specifically, several timestamps may
have appeared together, followed by the text of the messages. Also, in 9.x
clients, the date stamp at the top of the file would always have used lowercase
English strings. This has been fixed.
================(Build #1908 - Engineering Case #364378)================
If an error occurred when positioning a cursor, future fetches would have
failed with the error -853 "Cursor not in a valid state". When prefetch
was enabled (the default) the specific error when positioning a cursor may
not have been returned to the application, with "Cursor not in a valid state"
being returned instead.
For example, if a query had a WHERE clause which caused a conversion error,
the application may never have received an error stating a conversion error
occurred, but would have received the error "Cursor not in a valid state"
instead.
This has been fixed so that the initial error, which put the cursor in an
invalid state, is now returned to the application.
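The fixed behaviour can be sketched as a cursor that remembers the first
positioning error and reports it on subsequent fetches. This is a hypothetical
illustration of the behaviour only, not server code; the class, method names,
and the sample conversion error are assumptions.

```python
# Sketch: the error that invalidated the cursor is recorded and returned to
# the application on later fetches, instead of being masked by the generic
# -853 "Cursor not in a valid state" error.
class Cursor:
    def __init__(self, rows):
        self.rows = rows
        self.pos = -1
        self.first_error = None

    def position(self, idx):
        try:
            if not (0 <= idx < len(self.rows)):
                # stand-in for a row error, e.g. a conversion error in a
                # WHERE clause evaluated while positioning
                raise ValueError("Cannot convert to a numeric")
            self.pos = idx
        except ValueError as exc:
            if self.first_error is None:
                self.first_error = exc  # remember what invalidated the cursor
            raise

    def fetch(self):
        if self.first_error is not None:
            raise self.first_error      # report the original error, not -853
        return self.rows[self.pos]
```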
================(Build #1914 - Engineering Case #365460)================
An application that used GetData to retrieve an unbound column on the first
row, could have had poor performance, if prefetch was enabled and some, but
not all columns, were bound before the first fetch. This poor performance
would have been particularly noticeable if the first few rows of the query
were expensive to evaluate. Applications which used the iAnywhere JDBC driver,
on queries which had columns with type LONG VARCHAR or LONG BINARY, were
also affected by this poor performance. This has been fixed.
================(Build #1951 - Engineering Case #372175)================
The server could have leaked memory, eventually resulting in an 'Out of Memory'
error. This could have occurred while executing INSERT or LOAD TABLE statements
for tables for which the server maintains statistics. This has been fixed.
================(Build #1957 - Engineering Case #373480)================
If a server was started with a server name containing non-7-bit ASCII characters,
and the client machine's character set did not match the server machine's
character set, applications may not have been able to connect when specifying
the server name (i.e. the ENG parameter). This has been fixed.
================(Build #1957 - Engineering Case #373482)================
If a connection string included the TCP parameter VerifyServerName=NO, and
contained an incorrect server name, the connection would have failed; essentially,
the VerifyServerName parameter was ignored. This has been fixed.
================(Build #1964 - Engineering Case #374874)================
When using a cursor for which prefetch could be enabled, the fetch performance
of many rows may have been slower than expected. In order for this problem
to have occurred, the bound size, or described size, of the columns would
have to have been fairly small (less than 50 bytes), and the number of rows
prefetched must have been reduced by the maximum memory used for buffering
prefetch data (see the PrefetchBuffer connection parameter). This has now
been fixed.
================(Build #1972 - Engineering Case #375971)================
An application could have hung, received a communication error, or have possibly
seen other incorrect behaviour, when doing a fetch with prefetch enabled,
and then immediately doing a commit, rollback, or another fetch with an absolute
or negative offset. It was rare on multiprocessor machines, and would have
been even rarer on single processor machines. As well, there may have been
other timing dependent cases which could have failed. This has been fixed.
================(Build #1999 - Engineering Case #382507)================
When using TLS encryption, the client software would go through the list
of trusted root certificates provided and would fail with a handshake error
if any of the certificates had expired. This behaviour was incorrect. Now,
clients will ignore expired root certificates when reading them, and will
only report an error during the SSL/TLS handshake if no valid root certificates
were found to match the server.
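The corrected handling can be sketched as below. This is a hypothetical
illustration of the described behaviour, not the actual TLS implementation;
the function names and the dict-based certificate representation are
assumptions.

```python
# Sketch: expired roots are skipped while the trust list is read, and the
# handshake only fails if no still-valid root matches the server's issuer.
import datetime

def usable_roots(roots, now=None):
    now = now or datetime.datetime.utcnow()
    # ignore expired (or not-yet-valid) root certificates instead of failing
    return [c for c in roots if c["not_before"] <= now <= c["not_after"]]

def verify_server(server_issuer, roots):
    valid = usable_roots(roots)
    if not any(c["subject"] == server_issuer for c in valid):
        raise RuntimeError("handshake error: no valid root certificate matches")
    return True
```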
================(Build #2027 - Engineering Case #391747)================
If a connection string contained the START= parameter with a value that ended
with "-x", "-xs" or "-ec", and was immediately followed by the -x, -xs or -ec
parameter with at least two options, then the string would not have been parsed
properly and the application would have failed with a SQLCODE -95 error. This
is most likely to happen with dbspawn, which converts the server command to
a START= parameter.
For example:
dbspawn dbeng9 -n myserver -o output-file-x -x tcpip(port=1234;timeout=3)
This has been fixed.
================(Build #2040 - Engineering Case #395662)================
An Embedded SQL or ODBC application, which used wide fetches or ODBC multi-row
rowset fetches on a cursor which had prefetch enabled, could have returned
a row with invalid data or data from a previous row when an error should
have been returned instead. An example of a row error which could have caused
this type of behaviour is the "Subquery cannot return more than one row"
error. Also, for Embedded SQL applications, SQLCOUNT was not being set correctly
to the number of rows fetched on an error. These problems have been fixed
so that the error is correctly returned, and SQLCOUNT is set correctly on
errors.
================(Build #1816 - Engineering Case #343176)================
When using the iAnywhere JDBC Driver, calling the method DatabaseMetaData.getURL()
would have returned null, instead of the actual URL used to establish the
connection. This problem has been fixed.
================(Build #1824 - Engineering Case #345171)================
If an application called ResultSet.isLast() prior to calling ResultSet.next()
or ResultSet.first() on a ResultSet object, then calling ResultSet.next()
afterwards would have incorrectly returned FALSE. This problem has now been
fixed.
================(Build #1842 - Engineering Case #350688)================
If an application called ResultSet.last(), to scroll to the last row in the
result set, and then called ResultSet.isLast(), to check to see if the cursor
was positioned on the last row, the iAnywhere JDBC Driver would have incorrectly
returned false, rather than true. This problem has now been fixed.
================(Build #1853 - Engineering Case #353208)================
Referencing columns in a result set by name, rather than by ordinal position,
may have failed. Since column names are case insensitive, all references
to column names are converted to lower case before any comparison. It was
this conversion to lower case that was being done incorrectly. This problem
has now been fixed.
================(Build #1897 - Engineering Case #362356)================
While connected to a multi-byte character set database with the iAnywhere
JDBC Driver, executing a procedure whose result set was defined to have a
varchar column, but the size of the column in the definition was too small,
could have resulted in an "Out of memory" exception. This problem has now
been fixed.
For example:
CREATE PROCEDURE test()
result( c1 varchar(254) )
begin
select repeat( 'abcdef', 1000 )
end
Notice that a varchar( 254 ) column is much too small to hold the result
of repeat( 'abcdef', 1000 ). In this case, executing the procedure test would
have resulted in an "Out of memory" exception.
================(Build #1908 - Engineering Case #364379)================
If an application running on a Unix platform, and using the iAnywhere JDBC
Driver, fetched a string with embedded null characters, the resulting string
would have been truncated at the first null character. This problem has been
fixed.
Note that a similar problem exists for applications running on Windows systems
as well. However this problem exists in the ASA ODBC Driver and is addressed
by Engineering Case 364608.
================(Build #1909 - Engineering Case #364610)================
Connecting to a database using Interactive SQL dbisql, and the iAnywhere
JDBC Driver, would have set the initial isolation level to 0, instead of
the proper isolation level as defined by the database option Isolation_level.
Once the connection was made though, changes to the isolation level would
have worked fine. This problem has now been fixed.
================(Build #1915 - Engineering Case #365797)================
The changes for Engineering Case 364610 prevented the Interactive SQL utility
dbisql, when using the iAnywhere JDBC driver, from connecting to the utility
database. This is now fixed. Note, this was not a problem when connecting
via jConnect.
================(Build #1919 - Engineering Case #366362)================
If a result set returned a LONG VARCHAR or LONG BINARY column, and one of
the values of that column was empty, then retrieving that empty result would
have caused the iAnywhere JDBC driver to leak memory. This problem has been
fixed.
================(Build #1954 - Engineering Case #373086)================
If a JDBC cursor was positioned on a row with a LONG VARCHAR column, then
calling ResultSet.getString() on the column would have returned the proper
value for the first call, but each subsequent call would have returned NULL
if the cursor had not been repositioned. This problem has now been fixed.
================(Build #1961 - Engineering Case #374451)================
An application using the iAnywhere JDBC Driver would have leaked memory if
it called Connection.getMetaData() repeatedly. This problem has been fixed.
================(Build #1962 - Engineering Case #374714)================
Executing a statement using Interactive SQL that sends messages back to the
client (ex. ALTER DATABASE UPGRADE, or CREATE DATABASE, or MESSAGE ... TO
CLIENT), could have caused Interactive SQL to crash, if the connection was
via the iAnywhere JDBC driver. This has now been fixed.
While the problem exists on all platforms, the crash has only been seen
on AIX systems.
================(Build #1962 - Engineering Case #374727)================
When using the iAnywhere JDBC Driver to connect to a DB2 database using the
IBM DB2 ODBC driver, and the method ResultSet.getObject() was used on a BLOB
column, the iAnywhere JDBC Driver would have failed with the error "Failed
to map result on column ? to a Java class" where ? is the column number.
This problem has been fixed.
================(Build #1963 - Engineering Case #374840)================
Calling the Connection.getCatalog() method, when using the iAnywhere JDBC
Driver, would have yielded a string with extra characters. Note that this
problem only existed if the JDBC Driver was used to connect to a server other
than an ASA server. The problem has been fixed.
================(Build #1977 - Engineering Case #377885)================
If an application used the PreparedStatement.setTimestamp() method to set
a timestamp parameter, then the millisecond portion of the timestamp would
not have been set. This problem has been fixed.
================(Build #2017 - Engineering Case #388573)================
If a result set had a DECIMAL column whose value was an integer with more
than 10 digits but fewer than 19 digits, then calling ResultSet.getLong()
should have returned the entire value but did not. This problem has been
fixed.
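For reference, a signed 64-bit long spans -2**63 to 2**63 - 1 (9223372036854775807,
about 9.22e18), so any integer DECIMAL with up to 18 digits always fits, and
19-digit values fit only when they do not exceed that maximum. A quick check
(illustration only; the helper name is an assumption):

```python
# Range check for integer DECIMAL values against a signed 64-bit long.
LONG_MAX = 2**63 - 1

def fits_in_long(decimal_str):
    return -2**63 <= int(decimal_str) <= LONG_MAX

assert fits_in_long("12345678901")              # 11 digits: the range affected by this bug
assert fits_in_long("123456789012345678")       # 18 digits: always fits
assert not fits_in_long("9999999999999999999")  # 19 nines: exceeds the long range
```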
================(Build #2034 - Engineering Case #393604)================
If an application used ResultSet.relative(0) to attempt to refresh a row,
then the iAnywhere JDBC Driver would usually have given an "Invalid cursor
position" or a "Not on row" error. It should be noted that the "Invalid cursor
position" error is valid since that error is usually given by the underlying
ODBC driver when the Statement or PreparedStatement that generated the ResultSet
is of type TYPE_FORWARD_ONLY. However, when the Statement or PreparedStatement
is scrollable, then the iAnywhere JDBC Driver should refresh the row rather
than give the "Not on row" error. This problem has been fixed.
================(Build #2036 - Engineering Case #394722)================
If an application retrieved the ResultSetMetaData and then queried the datatype
of an unsigned smallint, unsigned int or unsigned bigint column, the datatype
returned would have been incorrect. This problem has now been fixed so that
an application can properly determine the unsigned column type using the
ResultSetMetaData.getColumnType() and ResultSetMetaData.isSigned() methods.
================(Build #2044 - Engineering Case #396873)================
Changes for Engineering Case 392484 ensured that data exceptions that occurred
during a wide fetch were not lost, but those changes introduced an error such
that warnings were lost instead. This has been corrected so that both data
exceptions and warnings are properly reported to the client.
================(Build #1752 - Engineering Case #347306)================
When doing a SQLPutData or SQLSetPos using bound variables, specifying a
zero-length string could cause a crash in the ODBC driver. This has been
fixed.
================(Build #1812 - Engineering Case #341446)================
A call to SQLNativeSql would have resulted in a truncated string, when the
output buffer length was less than twice the length of the input string.
This has now been corrected
The following C example shows the problem:
char sql_stmt_out[24];
strcpy( sql_stmt_in, "select * from customer" );
rc = SQLNativeSql( dbc, sql_stmt_in, SQL_NTS,
sql_stmt_out, sizeof( sql_stmt_out ), &len );
The output buffer would have contained only 12 characters (ie 24/2).
================(Build #1812 - Engineering Case #341910)================
Calling SQLGetDiagField() with a DiagIdentifier of SQL_DIAG_ROW_NUMBER
would have returned the row number in the result set, not the row number
in the rowset. For example, after a second SQLFetchScroll (for a rowset of size
1), if a truncation occurred, the record number reported as being truncated
would have been 2, which was incorrect.
This problem has been fixed; SQLGetDiagField() now returns the rowset number
of the row with an error.
================(Build #1813 - Engineering Case #342150)================
The UNIQUEIDENTIFIER data type is stored in the database as a 16 byte binary
value. This value was not being stored correctly on little-endian platforms,
such as Intel x86. This problem has been corrected in the ODBC and OLEDB
drivers.
================(Build #1832 - Engineering Case #348106)================
In a Visual Basic RDO application, after updating columns in a resultset,
calling MoveFirst or MoveLast would then have failed with the error "Not
Enough fields allocated in SQLDA".
For example:
Call GResSet.Edit()
GResSet.rdoColumns("Col2").Value = 2
Call GResSet.Update()
GResSet.MoveFirst() ' <---- Error: Not Enough fields allocated in SQLDA
The same problem was reproducible when calling ODBC functions directly.
For example:
Set rowset size to 100.
SQLExtendedFetch (SQL_FETCH_FIRST) to obtain a rowset.
SQLSetPos (SQL_UPDATE) to change a row in the rowset.
SQLSetPos (SQL_REFRESH) to refresh a row in the rowset.
SQLExtendedFetch (SQL_FETCH_FIRST) to refetch the rowset. <---- Error: Not Enough fields allocated in SQLDA
This problem has been fixed.
================(Build #1839 - Engineering Case #350399)================
If a binary column was bound as char (SQL_C_CHAR) or wide char (SQL_C_WCHAR)
and the rowset size was greater than 1, and more than 1 row was fetched,
the calculation for the offset into the data buffer for the column value
was incorrect. For SQLBindCol(), BufferLength is the size of a single element
in the data buffer array. The calculation for the offset was
(BufferLength * row_number) + (column_size * row_number), where row_number is 0, 1, 2, 3,
etc. This incorrect calculation was done in an attempt to compensate for
the fact that each binary byte results in two character bytes upon conversion.
For example, if the column was BINARY(20) and it was bound as SQL_C_CHAR
and the BufferLength value was 20, then the converted values were stored
at offset 0, 20*1+20*1, 20*2+20*2, etc. which was 0, 40, 80, etc. This is
incorrect. The column values must be stored at offset 0, offset BufferLength
* 1, offset BufferLength * 2, etc. which, in the example, is 0, 20, 40, etc.
This problem has been fixed, but as a result the application must ensure
that the value of BufferLength for binary-to-character conversions is double
the actual column length, in order to avoid truncation. For example,
if the column is BINARY(20) and it is bound as SQL_C_CHAR, then the BufferLength
value must be 40 (i.e. 2*20).
A similar problem also existed for UniqueIdentifier columns, bound as char
(SQL_C_CHAR) or wide char (SQL_C_WCHAR), in which a 16-byte binary value
is converted to a 36-byte character string complete with hyphens (e.g., "41dfe9e6-db91-11d2-8c43-006008d26a6f"
). The buffer offset calculation for this was also incorrect, and has also
been corrected.
================(Build #1842 - Engineering Case #349975)================
Blob data, written to a database by an ODBC application using SQLPutData,
would have been corrupted if the following conditions were true:
- the application used a different charset than the database
- the server had character set translation enabled
- the length parameter passed to SQLPutData was larger than the SQL_ATTR_MAX_LENGTH value
In this case the data was sent as VARCHAR and the server performed character set
translation. This has now been fixed.
================(Build #1852 - Engineering Case #352795)================
If a Unicode (wide character) value was bound with SQLBindParameter, in a
database using the UTF8 collation, then the column value may not have been
transmitted back to the application properly.
This problem would have occurred under the following conditions:
1. The value must have been bound with SQL_C_WCHAR.
2. The indicator length must have been set to the value's actual length
(rather than SQL_NTS).
3. The column must have contained embedded NULL wide characters.
4. The table must reside in a database using the UTF8 collation.
This problem has now been fixed.
================(Build #1872 - Engineering Case #352572)================
An application that used multiple threads to access the same connection through
ODBC could have hung. This has been fixed.
================(Build #1875 - Engineering Case #358207)================
As of ASA 9.0.0, the ODBC driver allows for the establishment of a message
callback function, so that 'message' statements can be sent back to the connection.
These callback functions are established by calling SQLSetConnectAttr(),
but it was not possible to uninstall the message callback function. Now,
calling SQLSetConnectAttr() and passing a NULL pointer will disable the message
callback function.
================(Build #1882 - Engineering Case #359428)================
A memory leak could have occurred in the ODBC driver if the database server
closed the connection for any reason, for example an idle time-out. A pointer
to the driver's connection object was being set to null, but the object was
not freed. This has now been corrected.
================(Build #1887 - Engineering Case #360479)================
The ODBC SQLDisconnect() function could have returned a failing error code
if the server dropped the client's connection (e.g. for idle time-out reasons),
while the connection had a dirty transaction to be committed or rolled back
(i.e. SQLExecute, SQLExecDirect or SQLSetPos was called), and the SQL_ATTR_AUTOCOMMIT
option was set to SQL_AUTOCOMMIT_OFF for the connection.
Now, SQLDisconnect() returns SQL_SUCCESS_WITH_INFO and sets the SQLSTATE
to 01002 (Disconnect error).
================(Build #1908 - Engineering Case #364685)================
A change has been made to the existing callback support such that when a
connection designates a message callback function, it applies only to that
connection.
================(Build #1909 - Engineering Case #364608)================
If an ODBC application running on a Windows system fetched a string with
embedded null characters, the resulting string would have been truncated
at the first null character. This problem has been fixed.
Note that a similar problem exists for applications running on Unix platforms
as well. However this problem exists in the iAnywhere JDBC Driver and is
addressed by Engineering Case 364379.
================(Build #1913 - Engineering Case #365173)================
If an ODBC application called SQLGetInfoW() to get the user name or collation
sequence from a database using a Multibyte Character Set, then the conversion
to UNICODE would likely have been wrong. This problem has been fixed.
================(Build #1913 - Engineering Case #365330)================
When an ODBC application retrieved certain metadata values in Unicode, like
the column name of a result set column, the returned value could have been
truncated. This could have occurred, even though the specified buffer was
large enough to hold the Unicode value. It was also probable that no truncation
error would have been returned to the application. Both of these problem
have now been fixed.
================(Build #1922 - Engineering Case #367232)================
If a database created with the UTF8 collation had a column that contained
a 5- or 6-byte Chinese character sequence, ODBC client applications would
likely have crashed fetching the column. This has been fixed.
This problem was introduced by the changes for Engineering Case 364608.
================(Build #1946 - Engineering Case #370905)================
An ODBC application that allocated both an ODBC 2.0 style environment handle
and an ODBC 3.0 style environment handle could have returned ODBC 3.0
result codes when functions were called in the ODBC 2.0 environment, or vice
versa. This could have led to subsequent function calls failing with erroneous
error messages, including 'function sequence error'. Now, the driver will
always allocate a separate environment handle each time SQLAllocEnv or SQLAllocHandle
is called.
================(Build #1951 - Engineering Case #370604)================
In ODBC, changing the option for a cursor's scrollability could have caused
the driver to change the cursor type as well. For instance, if the cursor
type was forward-only, changing the scrollability to scrollable would have
changed the cursor type to dynamic. The problem was that the driver was
always changing the cursor type to dynamic regardless of the existing cursor
type. This has been corrected.
================(Build #1972 - Engineering Case #355595)================
Calling the ODBC function SQLGetData() with a length of 0 would have failed
for SQL_WCHAR.
SQLRETURN SQLGetData(
SQLHSTMT StatementHandle,
SQLUSMALLINT ColumnNumber,
SQLSMALLINT TargetType,
SQLPOINTER TargetValuePtr,
SQLINTEGER BufferLength,
SQLINTEGER * IndPtr);
SQLGetData can be used to obtain the amount of data available by passing
0 for the BufferLength argument. The amount of data available is returned
in the location pointed to by IndPtr. If the amount available cannot be determined,
SQL_NO_TOTAL is returned. When the TargetType was SQL_C_WCHAR, the amount
of available data was incorrect (a character count rather than byte count
was returned). This has been fixed.
There were also some problems returning correct indicator values for databases
using the UTF8 collation. This has also been fixed.
================(Build #2027 - Engineering Case #390234)================
A call to SQLDriverConnect() can return the completed connection string.
In version 8, the ASA ODBC driver produced the connection string with the
"DSN=" connection parameter first in the string. In version 9, the ODBC driver
produced the connection string such that "DSN=" was not necessarily first.
For example, if the connection string passed to SQLDriverConnect() was "PWD=sql;Userid=dba;DSN=test9"
then in version 8 the connect string "DSN=test9;UID=dba;PWD=sql" was returned,
and in version 9 the connection string "UID=dba;PWD=sql;DSN=test9" was returned.
According to the ODBC specification, the order shouldn't matter, but Gupta
Team Developer 3.1 applications did not handle the version 9 order correctly.
This problem has been resolved by restoring the version 8 behavior where
"DSN=" is placed first in the connect string.
================(Build #2033 - Engineering Case #393587)================
The ODBC driver was not returning some reserved words for SQLGetInfo( SQL_KEYWORDS
). The missing reserved words were:
character
dec
options
proc
reference
subtrans
These words are synonyms for other reserved words, and have now been added
to the list returned by SQLGetInfo( SQL_KEYWORDS ).
================(Build #2040 - Engineering Case #392484)================
If an application using either the ASA ODBC driver, or the iAnywhere JDBC
driver, fetched a set of rows in which one of the rows encountered a data
exception, then it was likely that the error would not have been reported.
Note that prefetch must have been on for the problem to occur. This problem
has now been fixed, but in addition to this change, the changes to the server
for Engineering Case 395662 are also required.
================(Build #1812 - Engineering Case #340046)================
An exception could have occurred when moving through a result set that contained
rows with columns of length 200 bytes or greater.
For example, (assume the following table contains entries with the NOTES
column containing 400 characters):
create table TBLVARCHAR
(
ID unsigned bigint not null,
NOTES long varchar,
primary key (ID)
);
The following is a VB example that selects from this table:
rs.Open "SELECT * FROM tblVARCHAR", conn, adOpenStatic, adLockReadOnly
' loop through all recordsets
Do While Not rs.EOF
For i = 0 To 1
Debug.Print rs.Fields(i).Name, TypeName(rs.Fields(i).Value), rs.Fields(i).Value
Next
rs.MoveNext
Loop
rs.Close
This problem has been fixed. The GetRowsAt() method now handles long columns.
================(Build #1815 - Engineering Case #341458)================
If an ADO application fetched a binary column into a variable of type VARIANT,
only 16 bytes would have been stored. Also, for a non-variant DBTYPE_ARRAY
variable, the length was incorrectly set to zero, which meant that no data
would have been copied. These problems have now been fixed.
================(Build #1816 - Engineering Case #343472)================
An attempt to insert a null value into a binary column would have resulted
in an access violation. This has been fixed.
================(Build #1817 - Engineering Case #344137)================
Using the OLEDB driver, a conversion of a value of type DBTYPE_IUNKNOWN to
a value of type DBTYPE_STR would have failed.
This problem could have occurred if a column was of type adLongVarWChar as
in the following Visual Basic example:
If rsTempSource.Fields(strFieldName).Type = ADOR.DataTypeEnum.adLongVarWChar
Then
rsTempTarget.Fields(strFieldName).Value = Trim(rsTempSource.Fields(strFieldName).Value)
rsTempTarget.UpdateBatch()
End If
Binding would have failed for a column of this type resulting in a "Count
field incorrect" error. This problem has been corrected.
================(Build #1818 - Engineering Case #342915)================
If Microsoft OLEDB support is not installed, a C++ application that uses
the ASA OLEDB driver will fail in the call to CoCreateInstance. The failure
code from CoCreateInstance does not give any indication as to the source
of the problem.
Below is a sample C++ call to CoCreateInstance to initialize the ASA OLEDB
provider.
CLSIDFromProgID(T2COLE(_T("ASAProv")), &clsid);
hr = CoCreateInstance( clsid, NULL,
CLSCTX_INPROC_SERVER,
IID_IDBInitialize,
(void**)&m_pIDBInitialize);
This problem is likely to occur on a Pocket PC 2003 device. The Pocket PC
2003 SDK (eg, \WinCE Tools\wce420\Pocket PC 2003) does not include the Microsoft
Data Access 3.1 modules (like msdaer.dll, msdaeren.dll) that are present
in older SDKs like the Pocket PC 2002 SDK (e.g., \WinCE Tools\wce300\Pocket
PC 2002\dataaccess31\target\arm).
To help diagnose the problem, the ASA OLEDB provider has been changed to
display a message box when it cannot access the Microsoft OLE DB Error Collection
Service module.
The title of the message box is "Fatal Error".
The interior text of the message box will state something like "CoGetClassObject(CLSID_EXTENDEDERRORINFO)
0x8007007e ProgID: MSDAER.1". The hexadecimal number is the error code returned
by CoGetClassObject(). In this example, the programmatic identifier associated
with CLSID_EXTENDEDERRORINFO is "MSDAER.1".
The settings for CLSID_EXTENDEDERRORINFO can be examined in the system registry
under HKEY_CLASSES_ROOT\CLSID\{C8B522CF-5CF3-11CE-ADE5-00AA0044773D}. Included
is the path for the Microsoft OLE DB Error Collection Service module. This
information can then be used to help determine if the required module is
present on the system.
================(Build #1830 - Engineering Case #345015)================
When a CHAR or VARCHAR column contained the empty string (a string of length
0) and the data was fetched into a variant of type BSTR, the OLEDB provider
did not convert the result into a proper null-length BSTR. The pointer to
the string value would have been uninitialized and this could have resulted
in an application crash. This has now been fixed.
================(Build #1842 - Engineering Case #350551)================
When data was inserted using the OLEDB provider into a column with a datatype
such as "long varchar", "text", "long binary", "image" or "long varbinary",
which have a maximum length of 2 Gbytes, the data may have been truncated
at 4 bytes. This problem has now been fixed.
================(Build #1842 - Engineering Case #351094)================
The OLEDB driver would almost always treat the database as if it was the
1252 codepage charset. In order to determine the character set of the database
to which it was connected, the OLEDB driver queried the server for the collation
name. It then used this collation name to search a table of codepage charset
names. This was incorrect, as for the most part, collation names (e.g.,
"1252LATIN1") are not the same as charset names (such as "cp936", "cp1252",
and so on). There are some exceptions however. The collation names "sjis",
"utf8", and "iso_1" are the same as the charset name, so for these character
sets, the problem would not appear. When the search routine did not find
the collation name, it would default the charset to 1252. All application
data is converted from UNICODE to the database charset, thus all application
data was being converted to 1252, perhaps incorrectly.
This problem has been corrected; the correct charset is now used.
================(Build #1844 - Engineering Case #344732)================
If a row was added to a recordset with an autoincremented primary key, the
new autoincremented value was not updated in the recordset. This has been
fixed for server-side keyset and dynamic cursors. The following server-side
cursors support refetching of column values: keyset, dynamic, and static.
Forward-only cursors are not supported. Due to the way ADO interacts with
the OLEDB provider, no client-server cursors support refetching of the columns.
================(Build #1852 - Engineering Case #348794)================
Attempting to put a Unicode (wide character) string into a column that had
a datatype of char, varchar, binary or varbinary, via a parameterized INSERT
statement, would have resulted in garbage characters being placed into the
column. A workaround is to explicitly set the type to DbType.AnsiString.
The following C# fragment illustrates the problem.
IDbCommand cmd = dbConnection.CreateCommand();
cmd.CommandText = "insert into t ( value,value2 ) values (?,?)";
IDbDataParameter param1 = cmd.CreateParameter();
IDbDataParameter param2 = cmd.CreateParameter();
// uncomment this line for workaround
// param1.DbType = DbType.AnsiString
param1.Value = "ABC\x00"+"DEFG";
param2.Value = "ABCD\x00"+"EFG";
cmd.Parameters.Add(param1);
cmd.Parameters.Add(param2);
cmd.ExecuteNonQuery();
cmd.Dispose();
This has been fixed.
================(Build #1852 - Engineering Case #351298)================
After calling a procedure that returned more than one result set, attempting
to move to the next result set using ADO's NextRecordset() method, would
always have returned Null. This problem has been fixed.
The following Visual Basic code fragment illustrates the problem (the stored
procedure mysp() returns two result sets):
adors.Open("call mysp()", adocon, ADODB.CursorTypeEnum.adOpenDynamic,
ADODB.LockTypeEnum.adLockReadOnly,
ADODB.CommandTypeEnum.adCmdText)
Label1.Text = adors.Fields(1).Value
adors = adors.NextRecordset() <--- adors was always Null after this
Label1.Text = adors.Fields(1).Value
================(Build #1859 - Engineering Case #354221)================
The following changes have been made to the OLEDB provider's Rowset property
set DBPROPSET_ROWSET.
The following Rowset properties were returning TRUE, indicating that the
associated interface was supported. They now return FALSE, to indicate that
the corresponding interfaces are not supported.
DBPROP_IChapteredRowset
DBPROP_IParentRowset
DBPROP_IRowsetFind
DBPROP_IRowsetIndex
DBPROP_IRowsetRefresh
DBPROP_IRowsetResynch
The Rowset property DBPROP_IRowsetIdentity, was returning TRUE, although
the IRowsetIdentity interface was not supported. The IRowsetIdentity interface
is now implemented. The property continues to return TRUE.
The following properties have been removed from the Rowset properties, as
they are not Rowset properties, they are View properties.
DBPROP_IViewChapter
DBPROP_IViewFilter
DBPROP_IViewRowset
================(Build #1859 - Engineering Case #354317)================
The following changes have been made to correct problems in metadata reporting:
The Schema Rowset returned for DBSCHEMA_TABLE_CONSTRAINTS has been corrected.
The Schema rowset for DBSCHEMA_PROVIDER_TYPES now includes the types DBTYPE_R4
(FLOAT), DBTYPE_R8 (DOUBLE), and DBTYPE_GUID (UNIQUEIDENTIFIER).
The type TINYINT is now described as signed, previously it was described
as unsigned.
The MAXIMUM_SCALE for DECIMAL and NUMERIC was NULL instead of 127. The MAXIMUM_SCALE
for the SMALLINT, TINYINT and INTEGER types is now NULL, not 0.
Schema Rowsets that returned column type information did not include support
for the DBTYPE_GUID (UNIQUEIDENTIFIER) type, now they do.
In order to implement these changes, the stored procedures in scripts\oleschem.sql
must replace the stored procedures in the database.
================(Build #1862 - Engineering Case #354689)================
The following changes have been made to correct problems in metadata reporting:
The IDBSchemaRowset::GetSchemas method now correctly returns bit masks rather
than argument counts to indicate which restriction parameters are supported.
When using linked tables with Microsoft SQL Server 2000 Distributed Queries,
this problem manifested itself with the message "OLE DB provider 'ASAProv.xxx'
returned an invalid schema definition".
The DBSCHEMA_TABLES_INFO Schema Rowset used by Microsoft SQL Server 2000
Distributed Queries is now supported.
To install the revised schema support, the stored procedures in scripts\oleschem.sql
must replace the stored procedures in the database.
================(Build #1863 - Engineering Case #353195)================
If a cursor was opened with requested attributes of INSENSITIVE and FOR UPDATE
(either using lock, row versions, or values), then a cursor with ASENSITIVE
semantics would have been reported. In reality, the cursor used by the engine
was implemented using a keyset-driven cursor, which gave INSENSITIVE MEMBERSHIP.
The values returned to the application, however, were not necessarily SENSITIVE
as prefetch was not disabled (optimistic concurrency prevented lost updates).
Now, prefetch is disabled in this case, and the cursor type is described
to the client as INSENSITIVE MEMBERSHIP / SENSITIVE VALUES. In order to more
closely match the requested INSENSITIVE semantics, the cursor work table
is populated eagerly at open time.
This change allows bookmarks to be supported for the returned cursor.
================(Build #1865 - Engineering Case #356646)================
The OLE DB driver did not work well on 64-bit Windows systems. A number of
memory alignment issues and problems with bookmarks (which were incorrectly
assumed to be 32-bit values) were resolved.
================(Build #1875 - Engineering Case #350319)================
When using the Microsoft Query Analyzer with Microsoft SQL Server 2000 to
issue a query on a Linked Server definition that referenced an ASA server,
an error such as the following would have been reported:
Server: Msg 7317, Level 16, State 1, Line 1
OLE DB provider 'ASAProv.80' returned an invalid schema definition.
For example:
select * from ASA8.asademo.dba.customer
where "ASA8" is the name of the Linked Server, "asademo" is the catalog
name, "dba" is the schema name and "customer" is the table name. This problem
has been fixed, but the following must also be done in order to support a
Linked Server query:
- When the Linked Server is defined, "Provider Options" must be selected.
(this button is greyed out and unusable once the Linked Server has been defined).
In the Provider Options dialog, the "Allow InProcess" option must be selected.
- ASA does not support catalogs, so the four part table reference must omit
the catalog name (two consecutive periods with no intervening characters,
i.e. select * from ASA8..dba.customer). Including a catalog name will result
in the error: "Invalid schema or catalog specified for provider 'ASAProv.80'"
- The database must be updated to include the revised stored procedures
found in the scripts\oleschem.sql file. This file includes a new stored
procedure dbo.sa_oledb_tables_info that is required for Linked Server support.
================(Build #1881 - Engineering Case #358843)================
A memory leak occurred in the OLEDB provider, ASAProv, when a repeated sequence
of calls to SetCommandText(), Prepare(), and GetColumnInfo() was executed.
These calls could be generated by an ADO Open() call with a SELECT statement
containing a number of table JOINs. This problem has now been fixed.
================(Build #1884 - Engineering Case #359675)================
When a binary array of bytes was inserted into a binary column using the
OLEDB provider "ASAProv", the data was converted to a hexadecimal string
and stored into the binary column. For example:
BYTE m_lParm1[2]={0x10,0x5f};
would have been stored as the binary value 0x31303566 which is the original
binary value stored as a hexadecimal string of characters. This has been
fixed so that parameters are not converted from the user's type to a string.
Instead, bound parameters are converted to the type specified by the application.
================(Build #1886 - Engineering Case #360196)================
The ASA OLEDB provider ASAProv did not return the correct error codes as
documented by Microsoft. For example, the ICommand::Execute method should
have returned DB_E_INTEGRITYVIOLATION when a literal value in the command
text violated the integrity constraints for a column, but was returning E_FAIL.
This has been corrected.
The following additional error codes are also now returned:
DB_E_NOTABLE
DB_E_PARAMNOTOPTIONAL
DB_E_DATAOVERFLOW
DB_E_CANTCONVERTVALUE
DB_E_TABLEINUSE
DB_E_ERRORSINCOMMAND
DB_SEC_E_PERMISSIONDENIED
================(Build #1889 - Engineering Case #360678)================
ADO applications using the ASA OLEDB provider, ASAProv, could have failed
with an "invalid rowset accessor" error.
The following Visual Basic code example demonstrates the problem:
Dim conn As new OleDbConnection()
Dim cmd As OleDbCommand
Dim reader As OleDbDataReader
Try
conn.ConnectionString = "Provider=ASAProv;uid=dba;pwd=sql;eng=asademo"
conn.Open()
cmd = New OleDbCommand("SELECT * FROM DEPARTMENT", conn)
reader = cmd.ExecuteReader()
While reader.Read()
Console.WriteLine(reader.GetInt32(0).ToString() + ", " _
+ reader.GetString(1) + ", " + reader.GetInt32(2).ToString())
End While
reader.Close()
reader = Nothing
conn.Close()
conn = Nothing
Catch ex As Exception
MessageBox.Show(ex.Message)
End Try
This problem has been fixed; the rgStatus array passed to the IAccessor::CreateAccessor
method is now correctly initialized.
================(Build #1895 - Engineering Case #361464)================
This change fixes several problems with the DBSCHEMA_INDEXES rowset, implemented
by the sa_oledb_indexes stored procedure.
The COLLATION column was always empty. It now contains a 1 for ASCENDING
ordering and a 2 for DESCENDING ordering.
The CLUSTERED column was always TRUE, even for a non-clustered index. This
column now contains 0 unless the index is clustered, in which case it
contains a 1.
The INDEX_NAME column contained the additional string "(primary key)" when
the index was based on a primary key. In the DBSCHEMA_PRIMARY_KEYS rowset
(implemented by the sa_oledb_primary_keys stored procedure), the PK_NAME
column does not contain this string.
Since the names were different, it was difficult to join the information
in these two tables. Elsewhere the index name is reported using only the
table name (in access plans for example). For these reasons, the INDEX_NAME
column will no longer contain the string "(primary key)".
The following column names have been corrected:
FKTABLE_CAT is now FK_TABLE_CATALOG
FKTABLE_SCHEMA is now FK_TABLE_SCHEMA
PKTABLE_CAT is now PK_TABLE_CATALOG
PKTABLE_SCHEMA is now PK_TABLE_SCHEMA
These corrections are to the "oleschema.sql" file located in the "scripts"
folder, which will be in effect for newly created databases. To implement
the corrections to an existing database, connect to the database with Interactive
SQL dbisql and run the contents of "oleschema.sql".
================(Build #1896 - Engineering Case #362198)================
In the COLUMNS, PROCEDURE_PARAMETERS and PROCEDURE_COLUMNS rowsets, the CHARACTER_MAXIMUM_LENGTH
column contained incorrect values for BIT and LONG VARCHAR/LONG VARBINARY
columns and parameters. This column should contain the maximum possible length
of a value in the column. For character, binary, or bit columns, this is
one of the following:
1) The maximum length of the column in characters, bytes, or bits, respectively,
if one is defined. For example, a CHAR(5) column in an SQL table has a maximum
length of five (5).
2) The maximum length of the data type in characters, bytes, or bits, respectively,
if the column does not have a defined length.
3) Zero (0) if neither the column nor the data type has a defined maximum
length.
4) NULL for all other types of columns.
As well, the CHARACTER_OCTET_LENGTH column contained incorrect values for
LONG VARCHAR/LONG VARBINARY columns and parameters. The CHARACTER_OCTET_LENGTH
column should contain the maximum length in bytes of the parameter if the
type of the parameter is character or binary. A value of zero means the parameter
has no maximum length. NULL for all other types of parameters.
In the PROCEDURE_COLUMNS and PROCEDURE_PARAMETERS rowsets, the column name
should have been CHARACTER_OCTET_LENGTH rather than CHAR_OCTET_LENGTH.
In the PROVIDER_TYPES rowset, the COLUMN_SIZE column contained incorrect
values for LONG VARCHAR/LONG VARBINARY types.
These problems have been corrected and will appear in the "oleschema.sql"
file located in the "scripts" folder once the EBF has been applied. To implement
the corrections to an existing database, connect to the database with Interactive
SQL (dbisql) and run the contents of "oleschema.sql".
================(Build #1896 - Engineering Case #362207)================
In the FOREIGN_KEYS rowset (implemented by the sa_oledb_foreign_keys stored
procedure), the DEFERRABILITY column contained the values 5 or 6. This column
should contain one of the following:
DBPROPVAL_DF_INITIALLY_DEFERRED 0x01
DBPROPVAL_DF_INITIALLY_IMMEDIATE 0x02
DBPROPVAL_DF_NOT_DEFERRABLE 0x03
These corrections will appear in the "oleschema.sql" file located in the
"scripts" folder once the EBF has been applied. To implement the corrections
to an existing database, connect to the database with Interactive SQL and
load and run the contents of "oleschema.sql".
================(Build #1897 - Engineering Case #362011)================
The OLEDB provider ASAProv assumed that DBTYPE_BOOL values were 1 byte long.
So for database columns of type BIT, it would indicate that only 1 byte needed
to be allocated for a DBTYPE_BOOL column. This was incorrect, since DBTYPE_BOOL
values are actually 2 bytes long. Any consumer application that fetched
2 bytes for a DBTYPE_BOOL column (such as applications based on Borland's
Delphi) and examined both bytes would have obtained an incorrect result.
Also, columns adjacent to DBTYPE_BOOL columns would have overlapped in memory.
This has been fixed.
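The overlap can be illustrated with a short sketch (an illustrative model, not the provider's code): OLE DB's DBTYPE_BOOL maps to VARIANT_BOOL, a 16-bit value, so reserving only 1 byte lets a 2-byte read pick up the first byte of the adjacent column.

```python
# Hypothetical memory layout: a BIT column written as 1 byte, with the
# adjacent column's data immediately after it.
import struct

buffer = bytearray(3)
buffer[0] = 0xFF   # BIT column written as a single byte (true)
buffer[1] = 0x2A   # first byte of the adjacent column's value

# A consumer reading a 2-byte VARIANT_BOOL (little-endian signed short)
# sees 0x2AFF, not VARIANT_TRUE (-1):
value = struct.unpack_from("<h", buffer, 0)[0]
print(value)       # 11007, garbage combining both columns

# With 2 bytes reserved and written, the value is correct:
fixed = bytearray(3)
struct.pack_into("<h", fixed, 0, -1)   # VARIANT_TRUE = 0xFFFF
print(struct.unpack_from("<h", fixed, 0)[0])   # -1
```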
================(Build #1932 - Engineering Case #369016)================
When an application using the OLEDB driver provided a DBTYPE_DECIMAL parameter
with over 15 digits, the most significant digits would have been lost. For
example, if the value 1234567890.123456 was provided as a DBTYPE_DECIMAL
parameter, this would have been incorrectly interpreted as 234567890.123456
(the leading 1 would be lost). In particular, this could affect Visual Basic
applications using an OleDbDataAdapter on a query with a numeric or decimal
typed column, and a generated DataSet. The problem has now been fixed.
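The digit loss can be modelled with a sketch (an illustrative assumption; the provider's actual conversion path is not shown here): forcing a 16-digit scaled integer into a 15-digit field by discarding the high-order overflow drops the most significant digit, matching the example above.

```python
# Hypothetical model of the failure, not provider code: DBTYPE_DECIMAL
# carries a scaled integer; truncating it to 15 digits from the high end
# loses the leading digit.
scaled = 1234567890123456     # 1234567890.123456 stored with scale 6
truncated = scaled % 10**15   # keep only the low 15 digits
print(truncated)              # 234567890123456 -> 234567890.123456 at scale 6
```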
================(Build #1933 - Engineering Case #368574)================
The execution of a SELECT statement containing JOINs of several tables by
applications using the OLEDB provider ASAProv, would have resulted in a memory
leak. This has been fixed.
================(Build #1933 - Engineering Case #369272)================
A call to IRowsetChange::InsertRow() in the OLEDB provider, ASAProv, resulted
in a crash. This call could be made from C++ using a simple table insert:
CTable<CAccessor<CSimpleAccessor> > dbSimple;
hr = dbSimple.Insert(1);
This problem has been fixed.
================(Build #1934 - Engineering Case #369521)================
If the GetCurrentCommand method of the ICommandPersist interface was called,
the memory heap could have been corrupted. This problem has been fixed.
================(Build #1935 - Engineering Case #369072)================
When using the OLEDB provider ASAProv, String parameters may not have been
passed correctly to stored procedures. This problem has been fixed.
The following Visual Basic example calls a stored procedure with a String
parameter.
Dim sendParam1 As String
sendParam1 = "20040927120000"
Dim cmd As ADODB.Command
cmd = New ADODB.Command
With cmd
.CommandText = "testproc1"
.CommandType = ADODB.CommandTypeEnum.adCmdStoredProc
.ActiveConnection = myConn
.Prepared = True
.Parameters(0).Value = sendParam1
Call .Execute()
End With
An example of a stored procedure follows.
ALTER PROCEDURE "DBA"."testproc1" (in param1 varchar(30))
BEGIN
message 'in Parameter [' + param1 + ']';
END
================(Build #1979 - Engineering Case #376453)================
An ADO .Net application that attempted to obtain the primary keys from a
query on a table using the OLEDB provider may have received incorrect
results when the table had more than one primary key column and/or columns
with unique constraints or unique indexes.
A sample code fragment follows:
DataTable Table = new DataTable(textTableName.Text);
OleDbDataAdapter adapter;
OleDbConnection connection = new OleDbConnection(textConnectionString.Text);
using (connection)
{
try
{
connection.Open();
adapter = new OleDbDataAdapter("select * from dba." + textTableName.Text
+ " where 1=0", connection);
adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
adapter.Fill(Table);
listBox1.Items.Clear();
foreach(DataColumn col in Table.PrimaryKey)
{
listBox1.Items.Add(col.ColumnName);
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
The DataTable PrimaryKey property is an array of columns that function as
primary keys for the data table. This problem has been fixed.
One of the elements that ADO.Net uses in deciding whether a column belongs
in this set is the column metadata rowset.
IColumnsRowset::GetColumnsRowset - Returns a rowset containing metadata
about each column in the current rowset. This rowset is known as the column
metadata rowset and is read-only. The optional Metadata Column DBCOLUMN_KEYCOLUMN
is described to contain one of the values VARIANT_TRUE or VARIANT_FALSE or
NULL.
VARIANT_TRUE — The column is one of a set of columns in the rowset that,
taken together, uniquely identify the row. The set of columns with DBCOLUMN_KEYCOLUMN
set to VARIANT_TRUE must uniquely identify a row in the rowset. There is
no requirement that this set of columns is a minimal set of columns. This
set of columns may be generated from a base table primary key, a unique constraint
or a unique index.
VARIANT_FALSE — The column is not required to uniquely identify the row.
This column used to contain VARIANT_TRUE or VARIANT_FALSE. It now contains
NULL since OLEDB cannot correctly set the value. As a result, ADO.Net uses
other means for determining which columns belong in the PrimaryKey columns
property.
================(Build #1990 - Engineering Case #379901)================
The OLEDB provider was failing to close the result set cursor between prepared
command executions. Engineering Case 351298 reintroduced this bug, which was
originally described by Case 271435. This fix addresses both issues: an open
cursor is now closed before SQLExecute when the command has been previously
prepared.
================(Build #2023 - Engineering Case #389502)================
The OLEDB PROVIDER_TYPES rowset did not implement the DATA_TYPE and BEST_MATCH
restrictions. It implemented a restriction on TYPE_NAME instead and ignored
BEST_MATCH. This problem has been fixed so that the PROVIDER_TYPES rowset
now implements the DATA_TYPE and BEST_MATCH restrictions. To install a new
version of the PROVIDER_TYPES rowset into your database, load and run scripts\oleschema.sql
against the database.
As well, not all type names were included in the rowset, and some data types
that could have been included were not. This has also been corrected.
================(Build #1816 - Engineering Case #339342)================
When given a workload containing hundreds of thousands of queries, the Index
Consultant could have caused Sybase Central to exit, leaving a log file reporting
that memory had been exhausted. A simple governor has been added to avoid
this problem; it limits the number of queries analyzed by the consultant
to 25,000. As of version 9.0.2, the Index Consultant will be able to handle
arbitrarily large query sets.
A workaround for instances involving evenly distributed duplicate queries
(which will be the normal case for this number of captured queries) is to
reduce the time spent capturing queries. By using the PAUSE/RESUME feature
of the consultant, representative samples of different phases of an application
can be captured, without overloading the consultant.
================(Build #1820 - Engineering Case #345909)================
There was a change in the a_sync_db structure in version 9.0.1, which meant
that any application built with 9.0.0, would have failed with an Access Violation
crash, when calling the 9.0.1 DBSynchronizeLog function. This problem has
now been corrected.
To work around this problem, the 9.0.0 application must be rebuilt with
9.0.1 software. In particular, it is important to specify the correct version
of the DBTOOLS library when setting up the a_sync_db structure:
dbSyncStruct.version = 9000;                    // wrong way to initialize this field
dbSyncStruct.version = DB_TOOLS_VERSION_NUMBER; // correct way
This problem affects any application built with 9.0.0 and deployed with
9.0.1 installed, and any application built with 9.0.1 and deployed with 9.0.1
installed if the wrong version number is specified in the a_sync_db structure.
These applications must be re-built with the 9.0.1 dbtools library and they
must specify the correct DBTOOLS version number as shown above.
================(Build #1823 - Engineering Case #346747)================
When just Sybase Central was installed (including jConnect and one or more
plug-ins), some required files and registry entries were missing. This prevented
Sybase Central from connecting to a database using the JDBC-ODBC bridge.
Also, when shutting down, Sybase Central would report errors about not being
able to start ISQL. This problem was initially addressed by engineering issue
314450. These changes complete the fix.
================(Build #1843 - Engineering Case #351396)================
When installed on CE devices, the OLEDB client library did not register
itself. This has been corrected. A workaround is to register the library
manually using regsrvce.exe.
================(Build #1874 - Engineering Case #357982)================
A problem affecting any of the ASA executables running on CE devices, which
could have caused memory to be corrupted, has been fixed. The only behavior
actually observed as a result of this problem was the inability of CE-based
executables to create UI elements, such as windows or message boxes; however,
it may have caused other problems as well.
================(Build #1875 - Engineering Case #358564)================
Two new Traditional Chinese collations have been added, 950ZHO_HK and 950ZHO_TW.
The collation 950ZHO_HK provides support for the Windows Traditional Chinese
character set CP950, plus Hong Kong Supplementary Character Set (HKSCS).
The collation 950ZHO_TW provides support for the Windows Traditional Chinese
character set CP950, but doesn't support HKSCS. Ordering is based on byte-by-byte
ordering of the Traditional Chinese characters. The collation 950TWN is now
obsolete.
Please note that in version 9.0.0 and lower, 950TWN has no support for HKSCS,
the new collation that is equivalent to it is 950ZHO_TW. In 9.0.1 and higher,
950TWN does support CP950 plus HKSCS and the new collation that is equivalent
to it is 950ZHO_HK.
The reason for splitting 950TWN into two collations is that Microsoft Windows
allows users to create their own characters in the End User Defined Character
(EUDC) area in Windows code pages. HKSCS is an extension to CP950 that also
defines its characters in the EUDC area for CP950, and these may conflict
with existing private characters defined by users who don't use HKSCS and
don't have HKSCS installed.
================(Build #1933 - Engineering Case #369278)================
The stored procedure sp_jdbc_stored_procedures is used by jConnect to retrieve
stored proc metadata. Unfortunately the definition of the stored procedure
was incorrect and the PROCEDURE_TYPE column of the metadata result set was
returning whether or not the particular stored proc returned a result set.
In actuality, the PROCEDURE_TYPE column should return whether or not the
particular stored proc returns a return value. This procedure has now been
corrected.
Note, new databases will have the corrected procedure, but to update existing
databases, run the Upgrade utility dbupgrad.
================(Build #1939 - Engineering Case #371843)================
When running an EBF install on Linux or Solaris systems, selecting the Japanese
license agreement would have caused the install to exit before applying the
EBF. The problem has now been fixed.
================(Build #1944 - Engineering Case #370722)================
The QAnywhere Stop utility qastop, was added in an EBF after the GA release
of 9.0.1. The CE EBF installer was updated at the same time to expect qastop
to be present. If a subsequent EBF was applied, the installer would have
failed, with the following message:
Error: File 'c:\program files\sybase\asa9\ce\arm.30\qastop.exe' not
found.
This has been fixed.
================(Build #1970 - Engineering Case #374870)================
The Palm HotSync Conduit Installation Utility (dbcond9.exe) may have crashed
when installing or uninstalling a conduit on Windows NT 4.0, if the HotSync
Manager was not available. This has been fixed.
================(Build #1978 - Engineering Case #376202)================
Installing an EBF may have abnormally terminated as it was about to copy
files to the target system. This has been fixed.
================(Build #1992 - Engineering Case #379106)================
A multithreaded Embedded SQL application could, depending on timing, have
failed with the error "Invalid statement" (SQLCODE -130). For this to have
occurred, the application had to use the syntax "EXEC SQL DECLARE ... CURSOR
FOR SELECT ..." in code which could be run by multiple threads concurrently.
This has been fixed so that the SQL Preprocessor generates code for the syntax
"EXEC SQL DECLARE ... CURSOR FOR SELECT ..." that is thread safe.
Note the syntax "EXEC SQL DECLARE ... CURSOR FOR :stmt_num" is thread safe
(and is not affected by this problem), while the syntax "EXEC SQL DECLARE
... CURSOR FOR statement_name" is not thread safe (and cannot be made thread
safe).
================(Build #2038 - Engineering Case #384130)================
If the length of an indexed table column was increased on a big endian machine,
using the index may have caused the server to crash due to an unaligned memory
reference. This has been fixed.
================(Build #1842 - Engineering Case #351306)================
If the SNMP Extension Agent was connected to a database and the connection
was forcibly closed (due to DROP CONNECTION, liveness timeout, or the database
or server being unconditionally shut down), the agent may have crashed. This
has now been fixed.
================(Build #1752 - Engineering Case #344838)================
Performance of the 8.0.2 server, when run on Solaris 9 or 10 machines, was
poor compared to the server running on Solaris 6, 7 or 8, especially
with large databases. The problem was due to dynamic cache sizing not being
enabled for servers running on Solaris 9 or 10, so the server never grew
the cache to accommodate large databases. This has now been fixed: dynamic
cache sizing is now enabled when the server is run on Solaris 9 or 10. A
workaround is to start the server with a sufficiently large cache.
================(Build #1752 - Engineering Case #344861)================
When run on Mac OS X, HP-UX and Compaq Tru64 platforms, the server was not
listening to the correct UDP ports, which could have caused applications
to fail to find the server when attempting to connect.
The server listens to UDP ports and responds to requests on these ports
so that applications can locate the server by server name, even if the server
starts on a TCP/IP port other than the default port (2638). Since Mac OS
X, HP-UX, and Compaq Tru64 platforms do not allow multiple processes to bind
to the same UDP port, connections to a server running on these platforms must
specify the TCP/IP port number (via ServerPort) if the server is not using
the default TCP/IP port (2638).
If the server's TCP/IP port number is 2638 (the default), the server listens
to UDP port 2638, otherwise the server should listen to the same UDP port
as the TCP/IP port. For the Mac OS X, HP-UX, and Compaq Tru64 platforms,
the server should NOT listen to UDP port 2638, even though servers on other
platforms do additionally listen to UDP port 2638. The reason servers on
these platforms should not listen to UDP port 2638 is if a second server
starts on TCP/IP port 2638, the UDP port 2638 must remain available for the
second server.
Note that in order to connect over TCP/IP to a server using a TCP/IP port
other than 2638 running on the Mac OS X, HP-UX or Compaq Tru64 platforms,
the client must specify the port of the server.
For example, if the server is started with the command "dbsrv8 -n MyASAServer
asademo.db", a client on the same subnet can find the server using the connection
parameters "eng=MyASAServer;links=tcpip".
If the server is started on Mac OS X, HP-UX or Compaq Tru64 with the command
"dbsrv8 -n SecondASAServer -x tcpip(port=7777) asademo.db", a client on the
same subnet can find the server using the connection parameters "eng=SecondASAServer;links=tcpip(port=7777)".
Note that if the server was running on a different platform, then the client
would not need to specify the port TCP/IP parameter.
Additionally, on Mac OS X, HP-UX and Compaq Tru64 platforms, if a server
was already using port 2638, and a second network server was started without
the ServerPort parameter, this second network server should fail to start.
Before this change the second network server would have chosen a different
port and started. The reason the network server should fail to start in
this case is so that the user can specify the server's port number, which all
clients must also specify (if the server were allowed to start, the port
number could change if the second server was restarted, causing clients to
fail to connect in the future). Note that personal servers will start even
if port 2638 is in use, since shared memory is normally used to connect to
personal servers.
This has been fixed so that on Mac OS X, HP-UX and Compaq Tru64, servers
listen to the correct UDP ports, and network servers fail to start if TCP/IP
port 2638 is in use and no port is specified.
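The corrected port-selection rule described above can be summarized in a short sketch (a simplified model for illustration, not server source; the function name and flag are hypothetical):

```python
# Simplified model of which UDP ports a server listens on.
DEFAULT_PORT = 2638

def udp_ports(tcp_port, single_udp_platform):
    """single_udp_platform is True on Mac OS X, HP-UX, and Compaq Tru64,
    where two processes cannot bind the same UDP port."""
    if tcp_port == DEFAULT_PORT:
        return [DEFAULT_PORT]
    if single_udp_platform:
        return [tcp_port]                # leave 2638 free for another server
    return [tcp_port, DEFAULT_PORT]      # other platforms also listen on 2638

print(udp_ports(7777, True))    # [7777]
print(udp_ports(7777, False))   # [7777, 2638]
print(udp_ports(2638, True))    # [2638]
```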
================(Build #1752 - Engineering Case #348407)================
The server would have failed to start when run on a Linux system using the
2.6.0 kernel. The Asynchronous I/O (AIO) support in the 2.6.0 kernel is not
compatible with the server's usage for the 2.4.x kernel. AIO is now disabled
when the server runs with the Linux 2.6 kernel. Note that AIO is still supported
by the server when used with the 2.4 kernel.
================(Build #1752 - Engineering Case #348765)================
The network server, running on Unix platforms, could have incorrectly started
even though another server with the same name was also running on the network.
This has been fixed.
================(Build #1754 - Engineering Case #352050)================
When run on Macintosh systems, the Server startup options dialog did not
parse multiple arguments entered in the "Options" text box (e.g. -x tcpip
-m). The result was a usage message on the console window. This has now been
fixed.
================(Build #1812 - Engineering Case #341244)================
If request-level logging was invoked with the option 'SQL+hostvars', either
by calling sa_server_option or via the -zr command line option, then calling
property('requestlogging') would incorrectly have returned 'NONE'. This has
been fixed.
================(Build #1812 - Engineering Case #341616)================
If the server ran out of memory, it could have crashed instead of reporting
an error. On Windows CE, the server would have started without errors, but
would not have accepted any connections if the system was low on memory.
This has been fixed so that the server reports out of memory errors.
================(Build #1812 - Engineering Case #341761)================
If an application that connected to a server via shared memory, did not disconnect
before closing, the message "Disconnecting shared memory client, process
id not found" could have been displayed multiple times for a single connection
in the server console. This problem was more likely to have occurred if all
the server tasks were busy processing requests. This has been fixed so the
message is only displayed once per connection.
================(Build #1812 - Engineering Case #342148)================
The system procedure sa_index_density() could have returned density values
for an index that were greater than 1.0 (correct values are between 0 and
1). This has been fixed.
================(Build #1813 - Engineering Case #339153)================
A FETCH INTO statement in a stored procedure did not place fetched values
into procedure variables if the ROW_UPDATED warning was returned (this warning
is returned for SCROLL cursors). This has now been corrected.
Note, in 8.0.0 and above, some cursors are automatically converted to SCROLL
cursors, for example when an updatable cursor is opened over a query with
a sort.
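A minimal sketch of the affected pattern follows (a hedged example assuming the asademo sample schema; the variable and cursor names are illustrative):

```sql
-- An explicit SCROLL cursor; before the fix, FETCH ... INTO could leave
-- v_id unset when the ROW_UPDATED warning was returned.
BEGIN
    DECLARE v_id INTEGER;
    DECLARE cur SCROLL CURSOR FOR
        SELECT dept_id FROM department ORDER BY dept_id;
    OPEN cur;
    FETCH cur INTO v_id;   -- v_id is now populated even with the warning
    CLOSE cur;
END;
```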
================(Build #1813 - Engineering Case #339768)================
Queries that use the MIN or MAX aggregate functions can have an optimization
applied, if they satisfy certain conditions. In version 8.0.0 and later,
these conditions were more restrictive than they were in previous versions.
With this change, the set of conditions that queries must satisfy before the
MIN/MAX optimization will be applied has been relaxed.
Queries for which the MIN/MAX optimization is applied must satisfy the
following conditions:
(1) the query block is of the form:
select MIN(T.X) (or MAX(T.X))
from T
where some_condition
- T is a base table and T.X is one of its columns
- some_condition can be any type of predicate
(2) the query block does not contain an ORDER BY clause, any DISTINCT aggregates,
or a GROUP BY clause.
(3) one of the following types of indexes must exist:
- an index having the column T.X as a first column; or
- an index having the column T.X as the n'th column, and for each i'th column
with 0 < i < n , an equality predicate of the form "i'th column = constant"
exists in the WHERE clause
An example of some_condition would be: "T.A = 10 and T.B = 20 and ...".
Then an index on columns T.A and T.B would qualify to be used for the MIN/MAX
optimization.
The MIN/MAX optimization tries to choose an index <idx> that returns rows
ordered by T.X. During execution, only a limited number of rows are retrieved
from the index until a qualifying row is found. The execution stops when
this first qualifying row is retrieved.
================(Build #1813 - Engineering Case #342151)================
Positioned update statements could have failed in one of the following ways:
- the server may have crashed when using an aliased expression, subselect
or subquery predicate
- an incorrect 'column not found' error when using a view column on the
right-side of a SET item
- the wrong table instance may have been updated, when a table was referenced
more than once
- no error returned when no row was updated
The exact behaviour of a positioned update was changed between versions
5.5 and version 6.0 and between 7.0 and 8.0 and later. Until now, none of
these versions provided the correct semantics. This has been fixed.
Documentation Change:
The following change should be applied to the "UPDATE (positioned) statement
[ESQL] [SP]" topic. Note that this describes only Syntax 2
UPDATE update-table, ...
SET set-item, ...
WHERE CURRENT OF cursor-name
update-table :
[owner-name.]table-or-view-name [[AS] correlation-name]
set-item :
[correlation-name.]column-name = expression
| [owner-name.]table-or-view-name.column-name = expression
Each update-table is matched to a table in the query for the cursor as follows:
If a correlation name is used in the update-table, it is matched to a table
in the cursor's query that has the same table-or-view-name and the same correlation-name.
Otherwise, if there is a table in the cursor's query that has the same table-or-view-name
and that does not have a correlation name specified, or has a correlation name
that is the same as the table-or-view-name, then the update table is matched
with this table in the cursor's query.
Otherwise, if there is a single table in the cursor's query that has the
same table-or-view-name as the update table, then the update table is matched
with this table in the cursor's query.
Otherwise, an error is returned.
Each set-item is associated with a single update-table, and the corresponding
column of the matching table in the cursor's query is modified.
The expression on the right-hand side of each set-item can refer to columns
of the tables identified in the UPDATE list; they may not refer to aliases
of expressions from the cursor's query, nor may they refer to columns of
other tables of the cursor's query which do not appear in the UPDATE list.
Subselects, subquery predicates, and aggregate expressions cannot be used
in the set-items.
================(Build #1813 - Engineering Case #342173)================
The server was allowing the creation of multiple indexes with the same name
on local temporary tables. This problem has now been resolved; the server
will now generate an "Item already exists" error in this situation.
================(Build #1813 - Engineering Case #342661)================
The system procedures sa_index_levels() and sa_index_density() will now provide
more information in their respective result sets.
The new columns for the two procedures are:
TableId: table id of the table the index is on
IndexId: index id of the new index
= 0 for Primary Keys,
= SYSFOREIGNKEY.foreign_key_id for Foreign Keys
= SYSINDEX.index_id for all other indexes
IndexType: the type of the index
= "PKEY" for primary keys
= "FKEY" for foreign keys
= "UI" for unique indexes
= "UC" for unique constraints
= "NUI" for non unique indexes
The procedures will continue to behave as before for old databases. It will
also be possible to revert to the old behaviour on newly created and/or updated
databases by simply dropping the corresponding stored procedure and recreating
it with the old result set.
The new columns provide more information and also make the result set much
more useful for the purposes of joining with other catalog tables.
================(Build #1814 - Engineering Case #342538)================
After a fatal server error, attempting to stop a server with dbstop would
not have stopped the server or would have displayed an error.
Now, dbstop with the -y parameter, but without the -c parameter, will stop
a server after a fatal error, as long as the server was started with the
-gk command line option set to "all", (this is the default for the personal
server). If the -y parameter is not used, dbstop will display the fatal
error and prompt if the server should be stopped.
================(Build #1814 - Engineering Case #342968)================
If a view contained a SELECT list expression which was a CASE expression
(syntax 1), then the corresponding expression could incorrectly have been
non-NULL in the query result.
For example:
WITH V AS ( SELECT dept_id, CASE dept_id WHEN 100 THEN 'A' ELSE 'B' END
FROM department )
SELECT *
FROM rowgenerator R left outer join V on R.row_num = V.dept_id
WHERE R.row_num IN (99,100)
would have returned 'B' incorrectly instead of NULL for row 99. For the
problem to have occurred, a branch of the CASE expression must have returned
a non-NULL value when all of the columns from the underlying table were NULL.
This problem did not affect Syntax 2 CASE expressions.
Similarly, if a column of a view was a string concatenation of the form
" T.x || 'string' ", the resulting column would have been non-NULL, when
it should be NULL.
These two problems have been fixed.
================(Build #1815 - Engineering Case #342643)================
When unloading a database with a Multi-byte Character Set collation using
the Unload utility dbunload, if a database object (such as a stored procedure,
function, etc.) contained an MBCS character with 0x7b or 0x7d as a follow byte,
the character would have been mangled. With 7-bit ASCII, the 0x7b and 0x7d
bytes correspond to brace characters.
As well, if the object contained more than 100 such MBCS characters with
0x7b or 0x7d as follow bytes, dbunload could have crashed.
These problems have been fixed.
================(Build #1815 - Engineering Case #342900)================
When a cached plan was used in a stored procedure for an UPDATE or INSERT
statement, columns marked as DEFAULT TIMESTAMP could have received a stale
value from a previous invocation of the statement. This has been fixed.
As a workaround, the database option Max_plans_cached can be set to 0 to
get correct behaviour, although this may negatively impact performance.
================(Build #1815 - Engineering Case #342914)================
Two new connection properties have been added, 'ClientPort' and 'ServerPort'.
Calling connection_property( 'ClientPort' ) will return the client's TCP/IP
port number, or 0 if the connection is not via the TCP/IP link. Calling connection_property(
'ServerPort' ) will return the server's TCP/IP port number, or 0 if the connection
is not via the TCP/IP link.
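For example, the new properties can be queried as follows (both return 0 for a connection that is not using TCP/IP, such as a shared memory connection):

```sql
SELECT CONNECTION_PROPERTY( 'ClientPort' ) AS client_port,
       CONNECTION_PROPERTY( 'ServerPort' ) AS server_port
FROM dummy;
```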
================(Build #1815 - Engineering Case #343084)================
Set expression queries, (ie UNION, EXCEPT, or INTERSECT), with grouped queries
using correlated subselects, may have crashed the server. This has now been
fixed.
The crash would have occurred if the grouped query block was an immediate
child of a set expression query, the GROUP BY clause contained a correlated
subselect, and the outer reference of the subselect was an alias defined in
the grouped query.
For example:
SELECT file_id A1
FROM SYSTABLE T
GROUP BY file_id, ( select row_num from rowgenerator R where R.row_num=A1
)
UNION ALL
SELECT 1
FROM DUMMY
================(Build #1815 - Engineering Case #343130)================
Expression caching is used to avoid re-computing the result of a user-defined
function or a subquery within a query. If a function or subquery is evaluated
with the identical parameters to a previous invocation, the cache is used
to return the previous answer.
Previously, the cache compared arguments for equality using the database
comparison rules. For example, in a case-insensitive database, argument 'x'
was compared equal to 'X'. This could lead to different answers than expected
because the two strings are not identical. Now, arguments to the expression
cache are only considered equal if they are identical.
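The old and new behaviour can be sketched as follows (a simplified model of the expression cache described above, not server code; all names are illustrative):

```python
# A tiny memoizing cache whose key function models the argument comparison.
def make_cached(fn, key):
    cache = {}
    def wrapper(arg):
        k = key(arg)
        if k not in cache:
            cache[k] = fn(arg)
        return cache[k]
    return wrapper

first_char = lambda s: s[0]   # a function whose result depends on exact case

buggy = make_cached(first_char, key=str.lower)    # database comparison rules
fixed = make_cached(first_char, key=lambda s: s)  # identical arguments only

buggy('x'); print(buggy('X'))   # 'x' -- stale result from case-folded cache hit
fixed('x'); print(fixed('X'))   # 'X' -- recomputed for the distinct argument
```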
================(Build #1815 - Engineering Case #343308)================
If a query used a blob that was stored in a work table, which was then read
from the work table and copied to another expression, and that expression
was materialized into a block of rows in memory, then the server could have
crashed or reported an assertion failure when attempting to return the
blob value as part of the result set.
For example, the following query could have generated the failure:
create procedure P_Blob2( @x int )
begin
select @x || repeat('long string',1000) x
from rowgenerator
where row_num = @x
end
select x, sum(row_num)
from (
select row_num, DT.x
from rowgenerator R, lateral( P_Blob2(R.row_num) ) DT
union all
select 1, '' ) DT2
group by x
This has now been fixed.
================(Build #1816 - Engineering Case #343155)================
A LOAD TABLE statement, executed as part of a stored procedure or an event,
could have crashed the server when run a second time. This has been fixed.
================(Build #1816 - Engineering Case #343253)================
If the database option "Close_on_endtrans" was set to ON, then the server
would have failed to close some cursors when executing a COMMIT.
For example, the second fetch should fail with a "cursor not open" error,
but does not:
BEGIN
DECLARE c_fetch CURSOR FOR SELECT 1;
SET TEMPORARY OPTION CLOSE_ON_ENDTRANS = 'ON';
OPEN c_fetch;
FETCH c_fetch;
COMMIT;
FETCH c_fetch;
END;
As well, if the database option "Ansi_close_cursors_on_rollback" was set
to ON, then the server could have crashed in some situations where a rollback
occurred inside a call to a user defined function.
For example, the following would have caused a server crash:
create function DBA.foo()
returns int
begin
grant select on systable to non_existent_user;
return 1
end
create procedure bar()
begin
set temporary option ANSI_CLOSE_CURSORS_ON_ROLLBACK = 'on';
select foo()
end ;
select * from bar()
Both of these problems have now been resolved.
================(Build #1816 - Engineering Case #343380)================
The server running on Windows NT, 2000, XP or 2003 could have hung, with
100% CPU usage, on shutdown. The database file and server name were no longer
in use by the hung server, so another server could have been started with
the same name and on the same database. The hung server had to be shut down
using the Task Manager. On machines where this hang occurred, it would have
happened every time a server was shut down.
This problem has been fixed by adding a two second wait for the shutdown
event. If this two second timeout is reached, the message "The TCP listener
failed to shutdown cleanly within the timeout period" will appear in the
server's output log, likely more than once and with more than one listener
type (for example TCP and UDP).
This problem occurred when the event which normally signals the listener
thread to shutdown was lost, or intercepted by other software running on
the machine. For personal servers using only shared memory connections,
a workaround to this lost event problem is to use the -x none option.
================(Build #1816 - Engineering Case #343401)================
Attempting to query or alter a table with a foreign key that referenced a
table in another dbspace, could have caused the server to crash or fail with
an assertion error. This has now been fixed.
================(Build #1816 - Engineering Case #343445)================
Preparing statements with the WITH <temporary-views> clause and using question
marks '?' as place-holders for bound variables, could have caused the error
"Syntax error near 'host variable'". This has been fixed.
================(Build #1816 - Engineering Case #343618)================
A very long diagnostic message from a communication link could have caused
the client or the server to crash. For this to have occurred, which would
have been rare, the LOGFILE connection parameter or the -z server option
must have been used. This is now fixed.
================(Build #1816 - Engineering Case #343619)================
The error message text returned to the application after a server fatal error
or assertion may have had "???" where it should have had details of what
failed. This was always true on subsequent requests to the server after
the fatal error occurred. This has been fixed so that details of the fatal
error or assertion are now included in the error message text.
================(Build #1816 - Engineering Case #343627)================
The server could have failed with assertion 100904 during recovery on a database
involved in Mobilink synchronization. This is more likely to occur on a
blank padded database. This fix will now allow recovery to complete.
================(Build #1816 - Engineering Case #343683)================
It was possible for the server to fail with a "divide by zero" exception.
The chances of this failure taking place were very small. The problem has
now been resolved.
================(Build #1816 - Engineering Case #343815)================
If the expression for a COMPUTE or DEFAULT clause was very long, it could
have resulted in a server crash. An error message is now given if the expression
is longer than is supported by the server. The maximum expression length
for a COMPUTE or DEFAULT clause is based on the database page size; it is
approximately page_size - 64 bytes.
================(Build #1816 - Engineering Case #343820)================
If a subselect that had no outer references, or a non-deterministic function
that had no non-constant parameters, was used below a work table and in a
prefilter above the work table, as shown in the graphical plan, then the
server could have crashed or returned the wrong answer.
For example, the following query shows the problem on the asademo database
if the JHO join method is selected:
select ( select '12' from dummy D1, dummy D2 ) SubQ1,
( select '13' from dummy D3, dummy D4 where SubQ1 > 10 ) SubQ2
from rowgenerator R1 left outer join employee E on R1.row_num = E.salary
where (R1.row_num between SubQ1 and 255,100)
and SubQ2 = '13'
This problem has now been fixed.
================(Build #1816 - Engineering Case #343952)================
Executing a query with the following conditions, would have crashed the server:
1) it had a constant or subselect in the select list of a derived table
2) the derived table was on the null supplying side of an outer join
3) the derived table had another derived table in its FROM clause
4) the second derived table was a query expression (i.e. UNION, INTERSECT
or EXCEPT)
5) one of the branches of the query expression had a constant or subselect
as the last item of its select list
For example:
select * from employee left outer join
(select 1 from (select emp_id from employee union select 1) dt) dt1(x)
on 1=1
This has been fixed.
================(Build #1816 - Engineering Case #344019)================
Cancelling a CREATE DATABASE or DROP DATABASE statement, could have caused
the server to hang, if the cancel occurred concurrently with the cache shrinking.
This has been fixed.
================(Build #1817 - Engineering Case #343687)================
An attempt to create column statistics on a server running on a multi-CPU
machine, could have caused a server crash if the statistics being generated
were also being used by other concurrently running queries. The chance of
the crash occurring was proportional to the amount of concurrent access to
column statistics taking place. This problem has been resolved.
================(Build #1817 - Engineering Case #343705)================
The Histogram utility dbhist would have generated only "00,000" labels in
the Excel Sheet if the Windows decimal separator under Regional Options was
a comma. This has been fixed.
================(Build #1817 - Engineering Case #344063)================
Fully-enumerated plans that included sorts, materialization, etc., may have
had incorrect costs reported by the optimization logger. The effect was that
plans that were marked as picked by the optimizer, may have had a higher
reported cost than plans that had been rejected. This has been fixed.
================(Build #1817 - Engineering Case #344159)================
A hash join with an IF or a CASE expression with a predicate used in a join
condition could have caused a request to fail with the error:
"Run time SQL error -- *** ERROR *** Assertion failed: 102501 (10.0.0.1267)
Work table: NULL value inserted into not-NULL column (SQLCODE: -300; SQLSTATE:
40000)"
This also affected INTERSECT implemented with hash join. This problem has
been fixed.
================(Build #1817 - Engineering Case #344211)================
Subquery flattening is now disallowed in some cases when a procedure call
appears in the FROM clause. Specifically, it will not be used when a rowid
for a procedure call would be required for a distinct on rowid as in the
following example:
CREATE PROCEDURE product_proc()
BEGIN
DECLARE varname integer;
set varname = 1;
SELECT * from product
END;
SELECT description
FROM product_proc() p
WHERE EXISTS
(SELECT *
FROM sales_order_items s
WHERE s.prod_id >= p.id
AND s.id = 2001)
group by description
The symptom was unwanted duplicate rows in the result.
================(Build #1817 - Engineering Case #344257)================
The ordered distinct operator may have returned duplicate rows. For this
to have occurred, there must have been a sort immediately below the distinct,
the distinct must have been beneath a group-by operator, and the distinct
must have included rowid expressions (usually due to the presence of a subquery).
Because a hash-based distinct is usually selected over an ordered distinct
with a sort, the occurrence of this problem will have been rare. This has
now been fixed.
================(Build #1817 - Engineering Case #344347)================
Creating a view with a deeply nested structure (e.g. a large number of UNIONs)
could have caused the server to crash. These types of queries will now return
the error -890 "Statement size or complexity exceeds server limits".
A work-around is to increase the size of the server's stack using the -gs
or -gss commandline options.
================(Build #1817 - Engineering Case #344354)================
Version 9.0.0 made changes to the rules for when an index can be used to
satisfy a search argument with values of different domains. These changes,
required for correctness, prevented an index from being considered for comparisons
of the form:
numeric_col <comparison operator> double_value
where numeric_col is a column of type NUMERIC (or DECIMAL), <comparison
operator> is one of ( <, <=, =, >=, > ), and double_value is a value of type
FLOAT, REAL, or DOUBLE. The server does such comparisons in the DOUBLE domain.
The value of numeric_col is converted to a DOUBLE and compared to the double_value
(promoted to type DOUBLE if necessary).
Since DOUBLE is an approximate data type, there are NUMERIC values that
can not be precisely represented as a DOUBLE. For example, consider the following:
CREATE TABLE T( id int, n NUMERIC(30,6) );
INSERT INTO T VALUES( 1, 9007199254740992 );
INSERT INTO T VALUES( 2, 9007199254740993 );
CREATE VARIABLE double_value DOUBLE;
SET double_value = 9007199254740992;
SELECT * FROM T WHERE n = double_value;
The correct answer to this query is both rows of the table, because both
9007199254740992 and 9007199254740993 convert to the same DOUBLE value, and
therefore compare equal to double_value when compared in the DOUBLE domain.
If, on the other hand, the server was to have used an index scan, it was
equivalent to the following query:
select * from T where n = CAST( double_value as NUMERIC )
This latter query returned no rows because the value of CAST( double_value
as NUMERIC ) is 9007199254740994. When compared as numerics, neither of the
rows of T match this value.
The change to avoid selecting an index for this case guaranteed correct
results, possibly at the cost of performance. An enhancement has now been
implemented for the rules for determining whether an index can be selected,
so that an index can be selected for some cases which are guaranteed to give
the correct results. After this change, an index can be used if the following
conditions are met:
1) the numeric column has a precision that is 15 or lower
2) the comparison operator is equality (=), or the value double_value is
known at query open time and can be converted to a NUMERIC value without
loss of precision.
That is, the double can be converted to a NUMERIC with precision and scale
limited by the connection options PRECISION and SCALE such that CAST( CAST(
double_value AS NUMERIC ) AS DOUBLE ) = double_value.
The value 15 is the DBL_DIG quantity for normalized floating point numbers,
and represents the number of decimal digits that can be represented by a
double without loss of precision. If a column is declared as NUMERIC with
a precision higher than 15, then the column can contain values that can not
be represented exactly by a DOUBLE type. Therefore, it is possible for an
index scan to return the wrong answer in this case, and the server does not
consider using an index for these cases.
================(Build #1817 - Engineering Case #344415)================
The URL supplied to an HTTP service created with URL ON would have contained
HTTP-encoded characters. For example, if there was a space in the URL, the
URL supplied to the service would have contained "%20" instead. This has
been fixed by removing the HTTP escapes.
================(Build #1817 - Engineering Case #344417)================
If a successful database connection is terminated without first doing a database
disconnect, a "connection terminated abnormally" warning message is displayed
in the server console. Common reasons for this condition to occur include
the application was terminated, the application crashed, or the application
ended without correctly closing the connection. If an ESQL app did a db_string_connect
or EXEC SQL CONNECT but did not do a db_string_disconnect or an ESQL SQL
DISCONNECT, this warning would have occurred. Similarly if an ODBC application
did a SQLConnect or SQLDriverConnect without doing a SQLDisconnect this warning
would have occurred.
If the connection was a TCP/IP or an SPX connection and the client end of
the connection was closed without first disconnecting, and -z wasn't used,
the server would have displayed the message "Connection terminated abnormally;
SERVER socket shut down" in the console. This has been fixed so that it
displays "Connection terminated abnormally; client socket shut down."
================(Build #1817 - Engineering Case #344440)================
If multiple concurrent connections were made to a database that had just
been started (e.g., multiple connections autostart the same database), then
it was possible for the server to crash, although the probability of the
crash taking place was extremely low. The problem has now been resolved.
================(Build #1817 - Engineering Case #344503)================
CUBE, ROLLUP, and grouping set queries would have had grouping expressions,
that appeared in the select list, incorrectly described as not nullable.
This has been fixed. (Note that only ROLLUP was available in 9.0.0).
================(Build #1818 - Engineering Case #343567)================
A recurring scheduled event could have failed to fire at each of the times
specified by its schedule. The interval between scheduled executions needed
to exceed one hour for this problem to appear. For servers prior to 8.0.2
build 4335 or 9.0.0 build 1232, the interval needed to exceed one day for
the problem to occur. The problem was also dependent upon whether or not
the server was restarted between event executions. This has now been fixed.
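For illustration, a recurring event such as the following (the event, schedule
and handler shown are hypothetical) would have been susceptible, since its
interval exceeds one hour:
create event nightly_purge
schedule purge_sched
start time '1:00AM' every 2 hours
handler
begin
message 'purge started'
end;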
================(Build #1818 - Engineering Case #343573)================
A query that contained an EXISTS subquery with a large number of outer references,
could have caused the server to fail with assertion 101505 - "Memory allocation
size too large"
Now, such queries fail with the error message "Statement size or complexity
exceeds server limits" (SQLCODE -890). This error can be avoided by reducing
the number of outer references from the subquery or by increasing the server
page size.
================(Build #1818 - Engineering Case #343689)================
If a checkpoint occurred concurrently with the sending of blob data to an
application, the server could deadlock. This is now fixed.
================(Build #1818 - Engineering Case #343935)================
The server could have crashed when an attempt was made to calibrate the database
server using the ALTER DATABASE statement. The crash was most likely to occur
when the temporary dbspace had not yet been written out to disk. This has
been corrected.
================(Build #1818 - Engineering Case #344313)================
A query would have failed with the error "column ... not found" under the
following conditions:
1) it contained an outer join
2) the null-supplying table was a grouped derived table
3) one of the select list items of the derived table was a constant or null-tolerant
function
4) one of the tables in the FROM clause of the derived table was a view,
that could have been flattened
This has been fixed.
================(Build #1818 - Engineering Case #344618)================
Executing a query where the optimizer had chosen a hash join, could have
caused the server to fail with a fatal error, "dynamic memory exhausted".
This has been fixed.
================(Build #1818 - Engineering Case #344643)================
Executing a DROP STATISTICS statement with the database option Blocking =
'OFF', that failed because the table being modified was under use by another
connection, could have caused the server to crash. This problem has been
fixed.
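For illustration, the failing sequence resembled the following (the table
name is hypothetical), with another connection holding a lock on the table:
set temporary option Blocking = 'OFF';
drop statistics on my_table;  -- failed immediately because the table was
-- in use, and could have crashed the server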
================(Build #1818 - Engineering Case #344653)================
When performing an update of a cursor using host variables, or the ODBC SQLSetPos(
..., SQL_UPDATE, ... ) function, the server could have incorrectly returned
the error:
-121 "Permission denied: you do not have permission to update ... "
if the select list referenced columns of an updatable view. This problem
has been fixed.
================(Build #1818 - Engineering Case #344703)================
Queries using predicates with subqueries containing outer joins might have
returned incorrect result sets, when the following conditions were true:
(1) the subquery qualified for being flattened
(2) the subquery contained an outer join
(3) the preserved side of the outer join didn't have a join predicate with
the null-supplying side besides the ON condition of the outer join
(4) the preserved side of the outer join had a semijoin (i.e., JE or JHE)
in the access plan for the query
For example:
If the following query had the plan: T<seq> JE R<seq> JNLO S<seq> incorrect
result set might have been returned for certain instances of the tables T,
R, and S.
select *
from T
where exists( select 1
from R left outer join S ON (R.x = S.x)
where S.y is NULL or S.y = T.y )
This has now been fixed.
================(Build #1818 - Engineering Case #344705)================
The system function db_property( 'name' ) could have returned garbled data
if the database character set was not equal to the OS character set, but
only if the database was created on the command line or autostarted. Databases
started by the 'START DATABASE' statement were unaffected.
There were numerous other instances where the character set of various strings
was not tracked, converted, or maintained correctly. The following properties
were also corrected:
db_property( 'alias' )
db_property( 'file' )
db_property( 'logname' )
db_property( 'logmirrorname' )
db_property( 'tempfilename' )
property( 'name' )
Also, cursor names were not being converted to database charset, but were
left in the application's charset. The database name sent for a dbstop request
was also left in the application's charset. The database name for a STOP
DATABASE statement was left in the database charset, rather than the required
OS charset. When autostarting a database, the database filename (DBF connection
parameter) was converted to the database charset, rather than OS charset.
These problems have now been corrected.
================(Build #1818 - Engineering Case #345450)================
An unintended side-effect of the changes for issue 332134 to prevent a server
crash, was to disallow correlated subselects in COMPUTE and CHECK expressions.
This restriction has now been removed.
================(Build #1819 - Engineering Case #343581)================
When a row was inserted into a table with a COMPUTE or CHECK clause that
contained a correlated subselect, where the outer references was to a column
of the base table, the server may have crashed. This has been fixed.
In the example below, the outer reference T.a is used in a subselect of
the COMPUTE clause for the column T.b:
create table T(
a int,
b char(10) not null COMPUTE (left( '0001', (select
max(row_num) from rowgenerator where row_num = T.a )) ))
insert into T values (1, '1')
================(Build #1819 - Engineering Case #344205)================
For a query with a predicate of the form 'EXISTS( subquery)' subquery flattening
was always done if the subquery was correlated. However, when the subquery
was correlated with the rest of the query only by a Cartesian product, the
execution time of the rewritten query could have been much longer than the
execution time for the original query (if the EXISTS subquery had not been
flattened). The optimizer now tries to determine if flattening the subquery
is beneficial for finding a better plan for the main query block (for example,
if the subquery contains equijoins with the tables from the main query block,
or it contains sargable or local predicates on the tables from the main query
block). The flattening of EXISTS subqueries is now done only if the subquery
is correlated and the optimizer can determine that flattening the subquery
will result in finding a better access plan for the main query block.
Example of an EXISTS subquery which is not flattened after this fix:
select *
from T
where EXISTS( select 1 from R where R.X LIKE T.X + '%')
================(Build #1819 - Engineering Case #344414)================
Executing a SQL statement containing a string that was larger than the page
size of the database, could have caused the server to crash. This problem
has now been corrected.
================(Build #1819 - Engineering Case #344958)================
This is an addendum to the original fix for issue 344313.
Description from issue 344313:
A query would have failed with an incorrect error message, under the following
conditions:
1) it contained an outer join
2) the null-supplying table was a grouped derived table
3) one of the select list items of the derived table was a constant or null-tolerant
function
4) one of the tables in the FROM clause of the derived table was a view,
that could have been flattened
================(Build #1819 - Engineering Case #345174)================
Complex expressions used in DISTINCT and ORDER BY clauses were not correctly
matched, resulting in the syntax error -152 "Invalid order by specification".
This has now been fixed.
As a general rule, for query blocks using both ORDER BY and DISTINCT clauses,
the ORDER BY expressions must reference only the expressions from the select
list with DISTINCT clause.
For example:
select distinct emp_lname, isnull (city, ' ') as address
from employee
order by upper(address)
================(Build #1819 - Engineering Case #345185)================
If the system procedure sa_validate was called in unchained mode (i.e. with
the database option CHAINED='off'), then no information was returned. This
problem was most noticeable when using a jConnect or Open Client application.
This was because the local temporary table declared in sa_validate to hold
the results did not have the ON COMMIT PRESERVE ROWS clause. This problem
has now been fixed for newly created databases. To correct it for existing
databases, modify the sa_validate procedure so that the temporary table result_msgs
has the clause ON COMMIT PRESERVE ROWS.
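For example, the declaration inside sa_validate would be changed along these
lines (the column definition shown here is illustrative, not the actual one):
declare local temporary table result_msgs(
msg long varchar
) on commit preserve rows;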
================(Build #1819 - Engineering Case #345223)================
If a procedure, trigger or view was created with redundant quotes around
the object name and the next token started immediately after the trailing
quote, the saved source for the object would have contained no separation
between the object name and the next token.
For example:
create procedure "p"as begin return end
would have been saved as:
create procedure pas begin return end
This has been fixed.
================(Build #1819 - Engineering Case #345284)================
If the database used a multi-byte character set (e.g. UTF8), and a different
character set was requested by an HTTP client, the HTTP interface could have
returned some characters in the database character set. This has been fixed.
================(Build #1819 - Engineering Case #345571)================
In very rare situations, the server could have hung when stopping a database.
This has now been fixed.
================(Build #1819 - Engineering Case #345642)================
The HTML generated for a server generated HTTP error was partially in the
US-ASCII character set and partially in the character set requested by the
client. This has been fixed. Now the entire body of the reply will be in
the requested character set.
================(Build #1819 - Engineering Case #345785)================
If a procedure executed by an HTTP connection called sa_set_http_header(
'CharsetConversion', 'OFF' ), the Content-Type header would still have contained
the requested character set, even though the data was not converted. This
has been fixed, the Content-Type header will now contain the correct character
set.
================(Build #1819 - Engineering Case #348620)================
A query where a predicate refers to a windowing function in a Union Derived
table or view, may have returned incorrect results, or caused a server crash.
For example:
select * from
(select rank() over (order by emp_id) r from employee
union all
select rank() over (order by emp_id) r from employee) dt
where r > 10
This has been fixed.
================(Build #1820 - Engineering Case #342308)================
On Windows NT, 2000, XP and 2003, when the server is run as a service, or
if a fatal error or assertion occurs, one or more messages are logged to
the Application Event Log.
The Event source could have been "ASA", the server name, or the service
name. The error or information message would have been something like:
The description for Event ID ( 1 ) in Source ( ASA ) cannot be found. The
local computer may not have the necessary registry information or message
DLL files to display messages from a remote computer. You may be able to
use the /AUXSOURCE= flag to retrieve this description; see Help and Support
for details. The following information is part of the event:
This message is now better formatted. The Event source is "ASA 9.0", and
the message may be prefixed by the server name, or service name. The previous
text will no longer be displayed.
Note, when deploying ASA servers to Windows NT, 2000, XP or 2003, the following
registry key should be added so that Event Log messages are formatted correctly:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\ASA
9.0
and within this key, the REG_SZ value name EventMessageFile and value data <path>\dblgen9.dll.
(dblgen9.dll can be used regardless of the language being used).
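For example, a registry script to set up this key could look like the following
(the installation path shown is an assumption and should be adjusted to the
actual deployment location):
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application\ASA 9.0]
"EventMessageFile"="C:\\Program Files\\Sybase\\SQL Anywhere 9\\win32\\dblgen9.dll"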
================(Build #1820 - Engineering Case #345558)================
If an UPDATE statement included an assignment to a local variable and the
value being assigned was not of the same type as the variable, a server crash
could have resulted. This has been fixed.
================(Build #1820 - Engineering Case #345927)================
Queries containing subqueries with DISTINCT aggregates, may have returned
incorrect result sets, when the following conditions existed:
- subquery was used in a conjunct of the form 'expr theta (subquery)'
- subquery was a grouped query with DISTINCT aggregates
- subquery referenced only one base or derived table, or view T
- the same object T was used in the main query block
- the access plan chosen for this query used WINDOW operator
This has been fixed.
For example:
Select *
from T, R
where T.z = R.z and T.Y = (select sum(distinct T.X) from T)
The access plan must use WINDOW operator: R<seq> JH* Window [ T<seq>]
================(Build #1820 - Engineering Case #346037)================
If a RAISERROR statement was executed in a procedure and a subsequent statement
in the procedure caused a trigger to fire, an error would have been generated
when the trigger completed. This would have prevented the statement which
fired the trigger from completing successfully, and might have prevented
the remainder of the procedure from executing. This has been fixed.
Note that a RAISERROR executed inside a trigger will still cause the triggering
statement to fail.
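For illustration, a procedure of the following shape was affected (the table,
column and error number are hypothetical): the UPDATE fires a trigger, and
when the trigger completed, a spurious error was previously generated because
of the earlier RAISERROR.
create procedure p_raise_then_update()
begin
raiserror 17001 'informational error';
update T1 set v = v + 1   -- fires an update trigger on T1; previously an
-- error was generated when the trigger completed
end;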
================(Build #1820 - Engineering Case #346145)================
The server allowed, in rare circumstances, a database user with limited permissions
to exceed those permissions. This is now fixed.
================(Build #1821 - Engineering Case #344946)================
Large or complicated statements could have caused the server to crash with
a stack overflow error, or to return a fatal error "memory exhausted". Both
situations would have led to the server failing to respond to further requests.
For example, a statement of the following form could have caused the problem:
select 1
+ (1+1+...+1) -- 10,000 '1' literals in the expression
+ ...
+ (1+1+...+1) -- above parenthesized expression repeated 11 or more times,
depending on cache size
Requests could also fail with error -890 - "Statement size or complexity
exceeds server limits", if the main heap grew to be near <maximum cache size>/<number
of workers>.
These problems have now been fixed.
================(Build #1821 - Engineering Case #346364)================
If the MYIP tcpip parameter was specified on Unix servers, client applications,
(such as the system utility dblocate), may not have been able to find the
server. This has been fixed.
================(Build #1822 - Engineering Case #345734)================
If a predicate qualified to be pushed into a view, then the process of inferring
new predicates in the view query block, might not have used this pushed predicate.
This may have resulted in less than optimal access plans, because useful
sargable predicates were not inferred. This has been fixed.
The following conditions must have been met for a query to have exhibited
this problem:
(1) the main query block was a grouped query on a view V1
(2) the main query block contained a local view predicate on V1 (e.g., "V1.col1
= constant")
(3) the view V1 contained other predicates that, with the help of the pushed
predicate, could have been used to infer sargable predicates on base tables
(e.g., "col1 = T.x")
(4) the view V1 was grouped as well
Example:
select V1.col2, count(*)
from V1
where V1.col1 = c
group by V1.col2
V1: select V2.col3, count(*)
from V2, T
where V2.col1 = T.x
group by V2.col3
V2: select *
from R1
UNION ALL
select *
from R2
================(Build #1822 - Engineering Case #345855)================
A message statement with a comma-separated list of expressions could have
caused a server crash (ASA 8.0.x) or a syntax error (ASA 9.0.x), if the first
expression did not contain a table reference and a subsequent expression
did contain a table reference.
For example:
message 'Version: ',string(( select @@version ))
This has been fixed.
================(Build #1822 - Engineering Case #346443)================
The server was evaluating the expression <empty_string> LIKE <empty_string>
as FALSE, when it should, in fact, have been TRUE. The expression is now
evaluated correctly.
================(Build #1822 - Engineering Case #346497)================
The LDAP timestamp would not have been updated by a server if either the
server had no connections, or the server had only remote TCP or SPX connections
all of which had liveness disabled. This has been fixed.
================(Build #1822 - Engineering Case #346507)================
In databases using 1253ELL (Greek) collation, identifiers containing Greek
letters required double-quotes because the Greek letters were not properly
identified as alphabetic. This has been corrected, so that Greek letters
can now be used without quotes.
================(Build #1822 - Engineering Case #346508)================
If an EXECUTE IMMEDIATE statement in a stored procedure was used to execute
a query involving a UNION, the results of the query would not have been returned
as the result set from the procedure. If the query returned a single row,
no error would have been reported, nor would a result set have been generated.
If the query returned more than one row, the error "SELECT returns more than
one row" would have been reported. This has been fixed. A workaround is to
define the query such that the UNION is contained within a derived table.
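For example, the workaround wraps the UNION inside a derived table (the
procedure and table names here are hypothetical):
create procedure union_results()
begin
execute immediate with result set on
'select * from ( select id from T1 union select id from T2 ) dt'
end;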
================(Build #1822 - Engineering Case #346511)================
If a tautology was discovered on a null-supplying column of an outer join,
the query may have returned incorrect results. This has been fixed.
The following conditions must have occurred for a query to exhibit this
problem:
1. the tautology was in the WHERE clause of the main query block, and was
not in a conjunction
2. the column referenced in the tautology was declared not NULL
3. the column referenced in the tautology belonged to a NULL supplying table.
For example:
select count( *)
from product p1 left outer join product p2 on (p2.quantity < 0 )
where (p2.id < 10 or p2.id > 5 or p1.quantity > 100)
where "p2.id < 10 or p2.id > 5" is a tautology, p2.id is a column of the
null-supplying table "product p2" and p2.id is declared NOT NULL.
================(Build #1822 - Engineering Case #346517)================
When generating values for AUTOINCREMENT columns, if the next available value
was out of range, the server's behavior varied based on the datatype of the
column. For SMALLINT columns, a "value out of range for destination" was
reported. For INT and BIGINT columns, a negative value was generated. For
UNSIGNED INT, the value would wrap to 0. For UNSIGNED BIGINT values, the
value could wrap past the maximum signed bigint.
The server's behaviour is now consistent: it generates a NULL value for
an AUTOINCREMENT column if the next available value is out of range for the
column.
================(Build #1823 - Engineering Case #345195)================
Queries with predicates of the form "constant NOT IN (uncorrelated subquery)"
may have taken longer to evaluate in 9.0, compared to earlier versions. This
has been fixed.
Example:
select *
from R
where '123' NOT IN (select T.x
from T)
================(Build #1823 - Engineering Case #346632)================
If a statement-level trigger was defined for multiple trigger events, which
included DELETE, the "inserted" temporary table available within the trigger
would have been populated when a DELETE on the table was performed. Now,
the "inserted" temporary table is empty in this case.
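A sketch of the scenario (table and trigger names are hypothetical), using a Transact-SQL statement-level trigger defined for both INSERT and DELETE:

```sql
CREATE TRIGGER trg_audit ON t1
FOR INSERT, DELETE
AS
BEGIN
    -- When this trigger fires for a DELETE, the "inserted" table
    -- is now empty rather than containing the deleted rows.
    INSERT INTO audit_log( row_count ) SELECT COUNT(*) FROM inserted
END
```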
================(Build #1823 - Engineering Case #346715)================
If a Watcom-SQL stored procedure or trigger contained a statement like:
execute var;
a syntax error should be reported, but was not. Instead, when the procedure
or trigger was executed, the message "Invalid prepared statement type" was
reported. A syntax error will now be given when the procedure or trigger
is created. If the intent of the original statement was to treat the contents
of the string variable "var" as a statement to be executed, EXECUTE IMMEDIATE
must be used instead.
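A sketch of the corrected usage (procedure, variable, and table names are hypothetical):

```sql
CREATE PROCEDURE run_dynamic_stmt()
BEGIN
    DECLARE var LONG VARCHAR;
    SET var = 'UPDATE t1 SET x = x + 1';
    -- "EXECUTE var;" now causes a syntax error when the procedure is
    -- created; EXECUTE IMMEDIATE runs the statement held in the variable.
    EXECUTE IMMEDIATE var;
END
```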
================(Build #1823 - Engineering Case #346721)================
Queries with duplicate equijoin predicates may have returned an incorrect
result set. The duplicate predicates could have come from the original query
or they could have been inferred. The query must have contained a derived
table or view, which is flattened. This has been fixed.
================(Build #1823 - Engineering Case #346730)================
If a procedure called by an HTTP service set the Content-Type header to
NULL (i.e. telling the server not to send this header) by calling sa_set_http_header,
the server would still have sent it. This has now been fixed so that the
header will not be sent.
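For example (the service procedure name is hypothetical), a web-service procedure can suppress the header like this:

```sql
CREATE PROCEDURE my_service_proc()
RESULT( body LONG VARCHAR )
BEGIN
    -- A NULL value tells the server not to send the Content-Type
    -- header; before this fix the header was sent anyway.
    CALL sa_set_http_header( 'Content-Type', NULL );
    SELECT 'hello';
END
```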
================(Build #1823 - Engineering Case #346753)================
Resetting the value of the Min_table_size_for_histogram option to its default,
by using "set option public.min_table_size_for_histogram =", would have
reset it to 1000, rather than the default value of 100. This has been fixed
so that the default value of 100 is now set.
================(Build #1823 - Engineering Case #346758)================
The Index Consultant would generally have not recommended extra indexes,
to take advantage of pipelined access plans, when the option Optimization_goal
was set to 'First-row'. Although the plans recommended would have had a
lower overall cost than plans using indexes, they would not usually have
been pipelined, and so the first row could not be returned early. This has
been fixed.
================(Build #1823 - Engineering Case #346881)================
The PREFETCH database option now has values OFF, CONDITIONAL and ALWAYS.
ON is still accepted and is equivalent to CONDITIONAL.
OFF means no prefetching is done.
CONDITIONAL (the default) means prefetching is done unless either the cursor
type is sensitive, or the query includes a proxy table. For example, prefetching
is done over a forward only cursor, but prefetching is not done over a sensitive
cursor or a proxy table.
ALWAYS means prefetching is done even for sensitive cursor types or cursors
involving a proxy table. Great care must be taken when using this setting.
Using prefetch on a sensitive cursor changes the semantics of the cursor
to asensitive (old values may be fetched if the value was updated between
the prefetch and application's fetch). Also using prefetch on a cursor using
a proxy table could cause the error -668 "Cursor is restricted to FETCH NEXT
operations" to occur when the application attempts to re-fetch prefetched
rows. The application will re-fetch prefetched rows in a number of cases,
including after a rollback, on a fetch relative 0, if a fetch column is re-bound
or bound for the first time after the first fetch, or in some cases when
GET DATA is used.
If the DisableMultiRowFetch connection parameter is set to YES, the PREFETCH
database option is ignored and no prefetching is done.
Note: prefetching is not used by OpenClient or jConnect connections.
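For example, the option can be set for all connections as follows:

```sql
-- Disable prefetching entirely
SET OPTION PUBLIC.PREFETCH = 'OFF';

-- Restore the default behaviour
SET OPTION PUBLIC.PREFETCH = 'CONDITIONAL';
```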
================(Build #1824 - Engineering Case #346245)================
Selecting blob data from a variable could have caused the server to stop
with an assertion failure. This has been resolved.
================(Build #1824 - Engineering Case #346314)================
A query such as the following:
select 1 from sys.systable having 1 = 0
that created a grouping operator with no aggregate functions or group by
columns, would have caused a server crash. This has been fixed.
================(Build #1824 - Engineering Case #346604)================
Calling the system procedure sa_locks, could have caused a deadlock in the
server. This problem was more likely to occur on multiprocessor and Unix
systems. This has been fixed.
================(Build #1824 - Engineering Case #346766)================
The server could have faulted with an integer divide by zero error, when
executing a memory intensive query that causes the cache to grow dynamically
to the maximum allowed. This has been fixed; a workaround is to disable
dynamic cache sizing (i.e. by specifying -ca 0 on the command line).
================(Build #1824 - Engineering Case #347049)================
Sending the HTTP HEAD request to the server should have only returned the
HTTP headers and no body, but the body was also being returned. This has
been fixed.
================(Build #1825 - Engineering Case #346991)================
Assigning the result of a string concatenation to a variable could have caused
the size limit of the variable to be exceeded. This would only have occurred
with the bar ( || ) operator. This has been fixed: the concatenated string
is now truncated to the maximum size.
================(Build #1825 - Engineering Case #347226)================
A zero-length HTTP POST would have caused the server to wait indefinitely,
eventually timing out the connection. This has been fixed.
================(Build #1826 - Engineering Case #328695)================
For ADO applications using client-side cursors, when adding a record to a
table containing a char or varchar column, using an ADO recordset, the value
in the ADO recordset was padded with blank spaces up to the maximum length
of the column. This padding behavior occurred as soon as the assignment took
place (well before the new record was transmitted to the database). This
has been fixed. Previously, the DBCOLUMNFLAGS field of the DBCOLUMNINFO structure
for char and varchar columns included DBCOLUMNFLAGS_ISFIXEDLENGTH. This was
incorrect. This flag is no longer set for char and varchar columns.
================(Build #1826 - Engineering Case #347217)================
A checksum validation operation (for example, the VALIDATE CHECKSUM statement)
did not respond to cancel requests appropriately. This has been fixed.
================(Build #1827 - Engineering Case #347493)================
The server could have crashed on startup, if a large number of tasks were
specified, and they could not all be created due to lack of memory. This
problem was more likely to occur on Windows platforms using AWE, with 8.0.2
build 4076, or later. This has been fixed in the Windows server (a fix for
the NetWare and Unix servers is to follow); it will now fail with an error
indicating excessive memory usage.
================(Build #1827 - Engineering Case #347905)================
A long-running query that contained a recursive union may have crashed the
server if it was cancelled. This has been fixed.
================(Build #1828 - Engineering Case #346886)================
If a query contained an expression in the select list using the built-in
function NUMBER(*) (such as NUMBER(*)+1000) or a non-deterministic function,
then a wrong answer could have been returned if the query also contained
an EXISTS style predicate (or ANY, ALL or IN), where the predicate was re-written
as a join with a DISTINCT operation. The wrong answer could have contained
more rows than expected or an incorrect value for the expression derived
from the NUMBER(*) or a non-deterministic function.
For example, the following query demonstrates the problem, depending on
the plan selected by the query optimizer:
select R1.row_num, rand(), number(*)+100
from rowgenerator R1
where exists ( select * from rowgenerator R2
where R2.row_num <> R1.row_num
and R2.row_num <= 2)
This problem has been fixed.
================(Build #1828 - Engineering Case #347307)================
A new implementation of a sort algorithm that was added in 9.0.1 could have
caused a server crash during query optimization. It was likely that the
crash was only possible for queries with a large number of joins, and in
the presence of multiple indexes with similar definitions on the tables involved.
This has now been fixed.
================(Build #1828 - Engineering Case #347538)================
Running a query at isolation level 1 could have taken significantly longer
than at isolation level 0. This was particularly evident when the plan called
for repeatedly re-reading the same set of rows. This was due to the cost
of obtaining the short term locks for cursor stability and checking the long
term lock table for conflicting locks. The time to acquire the cursor stability
locks has been reduced by optimizing lookups into the long term lock table.
================(Build #1828 - Engineering Case #347591)================
The result of:
select convert(long varchar,current date,13)
would have returned a string based on the Date_format option, rather than
the format represented by the parameter 13. This has been fixed.
================(Build #1828 - Engineering Case #347825)================
If LOAD TABLE was executed on a Global Temporary table, an exclusive lock
was left on the table until a COMMIT was executed. Other connections attempting
to reference the table for the first time would block until the lock was
released. LOAD TABLE will now execute a COMMIT at the end of the statement
in this situation, releasing the lock. If a Global Temporary table was created
with ON COMMIT DELETE ROWS, an error will be given on an attempt to use LOAD
TABLE with that table.
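A sketch of the behaviour (the table name and data file are hypothetical):

```sql
CREATE GLOBAL TEMPORARY TABLE gt1( id INT )
ON COMMIT PRESERVE ROWS;

-- LOAD TABLE now executes a COMMIT at the end of the statement,
-- releasing the exclusive lock so other connections are not blocked.
LOAD TABLE gt1 FROM 'data.txt';
```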
================(Build #1828 - Engineering Case #348104)================
When retrieving data from HTML services, some characters need to be HTML-encoded
(e.g. double quotes should be returned as &quot;). In some cases (depending
on the character set of database and the value of the Accept-charset header
sent by the client), these characters were not being HTML-encoded properly.
This has been fixed.
Note, to properly convert data to UTF-16 AND HTML-encode it, both operations
have to be done essentially at the same time, which is currently not possible.
Because of this, the HTTP server will not send UTF-16 if requested.
================(Build #1829 - Engineering Case #347045)================
Queries containing a full outer join and an IS NULL predicate may have returned
incorrect results. For this to have occurred, all of the following conditions
must have been true:
- the query contained a full outer join above another outer join (potentially
with other joins in between).
- the WHERE clause must have been a conjunction containing a predicate,
or a single predicate, of the form "expr IS NULL" where expr was a simple
expression involving a single base column.
- the affected lower outer join must have been an ON clause of the exact
form column=column, where one of the columns involved in the predicate was
in the IS NULL expression.
For example:
SELECT * FROM TAB1
LEFT OUTER JOIN TAB2 ON TAB1.COL2 = TAB2.COL2
FULL OUTER JOIN TAB3 ON TAB2.COL3 = TAB3.COL3
WHERE TAB1.COL2 IS NULL
This has now been fixed. A workaround is to rewrite the query so that it
does not meet all of the listed conditions.
================(Build #1829 - Engineering Case #348312)================
The server could have crashed on startup when run on NetWare. This has been
fixed.
================(Build #1829 - Engineering Case #348353)================
If an ALTER TABLE statement was used to rename a column and the column was
referenced in the column list of an UPDATE OF trigger, or was part of a foreign
key defined with an ON UPDATE action, the server could have crashed or reported
an assertion failure. The crash or assertion failure could have occurred
after deleting the primary key for the table. Now, an error will be given
when attempting to rename the column.
================(Build #1829 - Engineering Case #348356)================
If a stored procedure contained a statement like:
BEGIN TRANSACTION trans_name
the string "trans_name" would have been missing in the definition stored
in the catalog. If a ROLLBACK TRANSACTION trans_name was executed later in
the procedure, the error "Savepoint 'trans_name' not found" would have been
issued. The name for the transaction is now included in the BEGIN TRANSACTION
statement stored in the procedure's definition in the catalog.
================(Build #1829 - Engineering Case #348469)================
If a request log that contained statements executed by a Java application
was analyzed by the system procedure sa_get_request_times, any host variable
values recorded in the log would have been missing from satmp_request_hostvar.
Also, the last character of the host variable value would have been truncated.
This has been fixed.
================(Build #1829 - Engineering Case #348516)================
Attempting to generate the plan for an UPDATE or DELETE statement that involved
proxy tables, would have caused the server to crash. A proper error message
is now returned.
================(Build #1829 - Engineering Case #348517)================
Dropping a declared temporary table was not permitted if the database was
running in read-only mode. This operation is now allowed.
================(Build #1830 - Engineering Case #348352)================
The first parameter to the system procedure sp_password() is the caller's
password, which is to be verified before modifying the password of the current
user or another user. The caller's password was not being checked before
changing the user's password. Now, this password is checked, and an error
is reported if the password provided does not match the password of the current
user.
Note that the previous behaviour would not have allowed a user without dba
authority to change another user's password. This change to sp_password()
does not prevent someone from changing the current user's password if a DBISQL
or Sybase Central session is left unattended, since that person could simply
enter a GRANT statement to accomplish the change. An example situation where
this change is beneficial is an application which provides a "change my password"
feature using sp_password and prompts for the original password.
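For example (user names and passwords are hypothetical):

```sql
-- Succeeds only if 'old_pwd' matches the caller's current password
CALL sp_password( 'old_pwd', 'new_pwd' );

-- Changing another user's password; the caller's own password
-- is still verified first
CALL sp_password( 'caller_pwd', 'new_pwd', 'other_user' );
```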
================(Build #1831 - Engineering Case #340743)================
If a LOAD TABLE statement was executed on a global temporary table, an exclusive
lock was left on the table until a COMMIT was executed. Other connections
attempting to reference the table for the first time would have been blocked
until the lock was released. Now, executing a LOAD TABLE statement will cause
a COMMIT to be executed at the end of the statement, releasing the lock.
If a global temporary table was created with ON COMMIT DELETE ROWS, an error
will be given on an attempt to use LOAD TABLE with that table.
================(Build #1831 - Engineering Case #348512)================
When connected to the utility database, (utility_db), executing a SET OPTION
statement would have caused the next statement to fail with a "Connection
error". This has been fixed. SET OPTION statements will now return an error,
as they are not supported when connected to the utility_db. Subsequent statements
will work as expected.
================(Build #1831 - Engineering Case #348751)================
If the server had multiple, memory intensive transactions running concurrently,
it may have erroneously failed with an 'out of memory' error. This would
only have occurred on multiprocessor systems. This has been fixed.
================(Build #1831 - Engineering Case #348773)================
The server could have crashed when started with a very long command line.
This has been fixed.
================(Build #1831 - Engineering Case #348902)================
If two or more TLS or HTTPS connections to the same server were initiated
at the same time (from the same or different clients), it was possible for
one or more of the connections to time out during the handshake, or for the
server to crash. This has now been fixed.
================(Build #1831 - Engineering Case #348906)================
The character set GB2312 (aka csGB2312 or GB_2312-80) was not supported by
the server. This has been fixed - any of the above names is now a valid alias
for GB2312.
================(Build #1831 - Engineering Case #348935)================
If a multi-threaded client application attempted to make simultaneous TLS
connections, one or more of the connection attempts may have failed with
a handshake failure or have displayed an error that TLS initialization failed.
This has been corrected.
================(Build #1831 - Engineering Case #348944)================
If a query involving proxy tables was run in full passthrough mode, and it
contained a large number (more than 300) of UNIONs, the server may have
crashed with a stack fault. This problem has now been fixed; regular stack
checks are performed when recursing through the UNIONs.
================(Build #1831 - Engineering Case #348946)================
When evaluating the number of rows in a partial index scan of the form 'idx<T.A
= constant>' (the index 'idx' is defined only on the column T.A) the optimizer
was not using the selectivity estimate of the predicate 'T.A = constant'.
Instead the number of distinct values in the index 'idx' was used to estimate
the number of rows returned by the partial index scan. Often, in the presence
of skewed data, the selectivity estimate of the predicate 'T.A = constant'
is more accurate than using the number of distinct values in an index. This
has been fixed.
================(Build #1831 - Engineering Case #349187)================
If the optimizer chose to use the alternative Nested Loops strategy, inside
a Join Hash execution node, then it was possible for the query to return
incorrect results. For this to have occurred, certain other conditions had
to have been met as well. In particular, the equi-join condition had to involve
comparisons of an indexed column with values of different datatypes, so that
the server needed to convert the values to the datatype of the indexed column.
For example:
CREATE TABLE foo( c1 NUMERIC(6,0), ... );
CREATE TABLE bar( c1 int, ... );
CREATE INDEX idx on bar( c1 );
...
SELECT * from foo, bar where foo.c1 = bar.c1
The server could have returned incorrect results if all of the following
were true:
- The join of foo and bar was done using Join Hash with foo being the build
table and bar the probe table,
- The hash join node contained an alternative Join Nested Loops strategy
that involved lookups using index bar.idx,
- The actual number of rows on the build side was such that the alternative
Join Nested Loops strategy was in fact employed during query execution.
This has been corrected so that the server will return the correct result
set.
================(Build #1831 - Engineering Case #350836)================
If a LOAD TABLE statement was executed on a Global Temporary table, an exclusive
lock was left on the table until a COMMIT was executed. Other connections
attempting to reference the table would have been blocked until the lock
was released. This lock is unnecessary, so executing a LOAD TABLE statement
on a Global Temporary table no longer acquires an exclusive lock.
================(Build #1832 - Engineering Case #349073)================
If any of the statements listed below did not qualify a table name with an
owner, then it was not possible to translate the transaction log and re-execute
the generated statements using a different userid than that used to execute
the original statements.
CREATE/ALTER VIEW -- affected tables referenced in the view's query
CREATE INDEX
ALTER TABLE
TRUNCATE TABLE
GRANT
REVOKE
DROP VIEW
LOAD TABLE
These statements recorded in the transaction log will now have the table
name qualified with its owner name.
================(Build #1832 - Engineering Case #349238)================
If a Remote Data Access query, involving FIRST or TOP n, was executed in
full passthrough mode, then a "non-deterministic result set" warning may
have been incorrectly generated. This problem has now been fixed.
================(Build #1832 - Engineering Case #349655)================
If a database was started and then shut down before the cache warming request
was picked up by one of the server's worker threads, the server could have
crashed, accessing memory that had already been released. This small window
was much more likely to have occurred on a very heavily loaded machine.
Disabling cache warming would eliminate the problem. This has been fixed.
================(Build #1833 - Engineering Case #346750)================
If a Remote Data Access query involved proxy tables on multiple servers and
used host variables, then the query may have failed with a "Not enough values
for host variables" error. This problem has now been fixed.
================(Build #1835 - Engineering Case #326454)================
Creating a proxy table that referenced an ASE remote server, using server
class aseodbc, would have failed if the remote table name contained a '$'
character. If the server class was asejdbc, then creating the proxy table
would have succeeded, but the proxy table would have been unusable. Both
problems have now been fixed.
================(Build #1836 - Engineering Case #345949)================
If a query contained an IF expression with an ANY, ALL, or IN predicate in
the select list, and a keyset-driven (scroll) cursor was opened over the
query, the server could have crashed, or returned incorrect results, when
fetching from the cursor. This has been fixed.
For example, the following query demonstrates the problem:
create table SQScroll(
pk int primary key,
x char(10),
y int
);
insert into SQScroll
select row_num, row_num, row_num from rowgenerator;
SELECT pk, x,
IF x IN ( SELECT ''||row_num FROM rowgenerator R WHERE S1.pk = R.row_num
) THEN '1' ELSE '0' ENDIF b_in,
IF x = ANY ( SELECT ''||row_num FROM rowgenerator R WHERE S1.pk = R.row_num
) THEN '1' ELSE '0' ENDIF b_any,
IF x = ALL ( SELECT ''||row_num FROM rowgenerator R WHERE S1.pk = R.row_num
) THEN '1' ELSE '0' ENDIF b_all
FROM SQScroll S1
ORDER BY 2,3,4,5
================(Build #1836 - Engineering Case #349653)================
When run on Unix platforms, the server could have crashed on startup. This
has been fixed.
================(Build #1836 - Engineering Case #349938)================
If a remote connection has become inactive or is no longer needed, it can
now be closed by executing the following:
ALTER SERVER server CONNECTION CLOSE
where "server" is the name of the remote server.
This new feature is most useful in the case where the remote server has
gone away or has dropped the connection due to a timeout and the server is
not able to detect that the remote connection is no longer useable. Issuing
the "ALTER SERVER server CONNECTION CLOSE" statement in such cases will let
the server know that the remote connection is no longer useable and should
be dropped. Once a remote connection is closed, the server will create a
new connection to the remote server when one is needed.
Note that this statement does not drop all connections to the remote server,
but only the remote connection associated with the local connection. Also,
the user does not require DBA authority to drop a remote connection.
================(Build #1837 - Engineering Case #349359)================
When determining the broadcast address to use, the server was using the IP
address of the host and ignoring the subnet mask, which was resulting in
an incorrect broadcast address. This meant that client applications that
used broadcasts to find servers may have failed to find them, and similarly,
servers may have failed to find existing servers with the same name. This has
now been fixed.
================(Build #1837 - Engineering Case #349450)================
After recovering a database which used no transaction log file, shutting
down the server before modifying the database could have caused assertion
failures 201810 "Checkpoint log: the database is not clean during truncate"
or 201117 "Attempt to close a file marked as dirty". If the server was killed
or the machine or server crashed before the database was modified, then subsequently
checkpointed, the database could have been corrupt. Only databases created
with 8.0.0 or later are affected. This problem has now been corrected.
================(Build #1837 - Engineering Case #349811)================
Queries involving ORDER BY and index hints may have unnecessarily included
a sort of the output. For this to have occurred, the query must have contained
no sargable predicates on the hinted index, and the ORDER BY clause must
have been satisfiable with a backwards scan of the hinted index.
For example, the following query was unnecessarily using a sort on top of
the hinted index scan:
select id from tab1 with( index( tab1 ) ) order by id desc;
but the following two queries were correctly recognizing that a scan of
the index would supply the correct ordering:
select id from tab1 with( index( tab1 ) ) where id > -9999 order by
id desc;
select id from tab1 with( index( tab1 ) ) order by id asc;
This problem has now been fixed. A workaround is to include a sargable
predicate with 100% selectivity on the index (as in example 2 above).
================(Build #1837 - Engineering Case #349913)================
A checkpoint occurring during index creation could have caused a server deadlock.
For the deadlock to have occurred, the index must have been large relative
to the cache size. This has been corrected.
================(Build #1837 - Engineering Case #349915)================
A query, where both join elimination and subquery flattening took place,
may have returned too few rows.
For example:
select emp_id
from employee
where emp_id in (select distinct sales_rep
from sales_order key join sales_order_items)
for read only
This has now been fixed.
Note that this will not happen for updateable queries, since join elimination
cannot take place in that case. Cases where it may happen include, "for
read only" queries and "insert into ... select ..." queries.
A workaround is to prevent join elimination either by specifying "for update",
or by using additional columns from the eliminated table in a way that doesn't
change the result.
For example:
select emp_id + (manager_id-manager_id)
from employee
where emp_id in (select distinct sales_rep
from sales_order key join sales_order_items)
for read only
================(Build #1837 - Engineering Case #349929)================
Calling the function property( 'LicenseType' ), would have incorrectly returned
'cpu-based' if the server was not licensed. This has been fixed so that 'not
licensed' is now returned in this case.
================(Build #1837 - Engineering Case #349954)================
A new connection could have been refused, in very rare timing dependent cases,
when it should have been allowed. In order for this to have occurred the
network server must have been at its maximum number of licensed clients (Note,
each unique client address from a remote machine counts as one license),
and the last connection from a client machine must have been disconnecting
at the same time a new connection was being made at the same client machine.
This has been fixed so that the server calculates licenses used accurately.
================(Build #1838 - Engineering Case #349219)================
A Transact-SQL query of the form "SELECT ... INTO #temptable ..." could have
failed with the error "Table {tablename} not found" (SQLCODE -141), if a
warning occurred when the query was being optimized. The warning most likely
to have caused this error is "The result returned is non-deterministic" (SQLCODE
122).
For example:
select first id into #temptab from tab1
Now the query will proceed, but the original warning will be reported. A
workaround is to test the query without the INTO clause and resolve the warning
(for instance, by adding an ORDER BY clause to resolve the non-deterministic
warning).
================(Build #1838 - Engineering Case #349473)================
When in passthrough mode, all EXECUTE IMMEDIATE statements within stored
procedures, would have failed with "Statement is not allowed in passthrough
mode". This has been fixed.
================(Build #1838 - Engineering Case #349958)================
When shutting down a server running on Windows 95, 98, ME or NetWare, with
a number of active TCP/IP or SPX connections, the server may have unnecessarily
paused for several seconds at 100% CPU usage. The unnecessary delay has now
been removed. Note that it is not uncommon or incorrect for the server to
take several seconds with significant CPU to shutdown.
================(Build #1838 - Engineering Case #350038)================
If the Safari browser for Mac OS X made an HTTP or HTTPS connection to an
ASA server, the server may have failed the second and subsequent requests.
This was due to the Safari browser erroneously using keep-alive, which is
not supported. This has been fixed, now the "Connection: close" header will
be sent to tell the browser that the server is not doing keep-alive.
================(Build #1838 - Engineering Case #350058)================
When run on Windows 95, 98, ME or NetWare, the server could have crashed
when receiving BLOBs over TCP/IP or SPX connections. The probability of this
crash was very slight and timing dependent. This has now been fixed.
================(Build #1838 - Engineering Case #350949)================
This is an enhancement to the new feature added in Engineering Case 349938:
If a remote connection has become inactive or is no longer needed, a user
can now close the remote connection by executing:
ALTER SERVER server CONNECTION CLOSE [CURRENT|ALL|connection_id]
where "server" is the name of the remote server.
ALTER SERVER server CONNECTION CLOSE, and
ALTER SERVER server CONNECTION CLOSE CURRENT
are the same and will drop the remote connection to the server associated
with the local connection. The user does not require DBA authority in this
case. Also, both ODBC and JDBC remote connections can be dropped using this
syntax.
ALTER SERVER server CONNECTION CLOSE connection_id
will drop the remote ODBC connection associated with the local connection
identified by connection_id. If the local connection identified by connection_id
is not the current local connection, then the user does require DBA authority
in this case. Attempting to drop a remote JDBC connection using this syntax
will generate an error. Closing a remote JDBC connection for a local connection
that is not the current connection, would require interacting with another
connection's VM, which is not possible.
ALTER SERVER server CONNECTION CLOSE ALL
will drop all remote ODBC connections to the server. Attempting to drop
all remote JDBC connections using this syntax will generate an error.
This feature is most useful in the case where the remote server has gone
away or has dropped the connection(s) due to a timeout. In these cases, the
server may not detect that the remote connection(s) is/are no longer useable.
Issuing the "ALTER SERVER server CONNECTION CLOSE" statement in such cases
will let the local server know that the connection(s) is/are no longer useable
and should be discarded. Once a remote connection is closed, the server will
create a new connection to the remote server when one is needed.
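For example, assuming a remote server defined with the hypothetical name
remsvr, the new statement might be used as follows:

```sql
-- Drop this connection's remote connection to remsvr
-- (no DBA authority required):
ALTER SERVER remsvr CONNECTION CLOSE CURRENT;

-- Drop the remote ODBC connection belonging to local connection 42
-- (DBA authority required if 42 is not the current connection):
ALTER SERVER remsvr CONNECTION CLOSE 42;

-- Drop all remote ODBC connections to remsvr:
ALTER SERVER remsvr CONNECTION CLOSE ALL;
```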
================(Build #1839 - Engineering Case #349746)================
In some rare situations, the server could have updated column statistics
incorrectly, which could then have resulted in the optimizer choosing poor
query plans. This was more likely to have occurred with a plan that used
the alternate Nested Loop Join strategy in a Hash Join operator. This problem
has now been fixed.
Additionally, for some predicates on string columns with low selectivity,
the server will now make better use of index probes to determine selectivities.
================(Build #1839 - Engineering Case #351110)================
This change fixes three problems of a similar nature to that addressed by
Engineering Case 344946.
Attempts to execute a statement that contained an IN list with too many
elements would have led to the following assertion failure:
104010, "Internal vector size too large"
This was due to an IN list optimization that was failing. The maximum number
of IN-list elements was limited by database page size. Now, IN-list size
is limited by cache size. However, if an IN list is larger than the above
limit, the IN-list optimization will not be used, which may impact performance,
but the statement will not fail.
Attempts to execute a statement that referred to too many tables could have
led to the following assertion failure:
101504, "Memory allocation size too large"
The maximum number of tables is limited by database page size.
Attempts to execute a large statement could have lead to the assertion failure:
101503, "Heap index (%d) and heap header page counts (%d) disagree"
if the size of the associated heap was larger than 2^16 pages.
================(Build #1840 - Engineering Case #350546)================
A query with multiple columns in the select list of an ANY or ALL subquery
is invalid. Such a query would have failed with error -151 - "Subquery allowed
only one select list item", or non-fatal assertion 102602 - "Error building
ALL subquery" if the select list contained a * that expanded to multiple
columns. Now such a query will consistently report error -151.
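For example, a query of the following shape (table and column names are
hypothetical) will now consistently report error -151:

```sql
-- Invalid: the ANY subquery's select list has two items
SELECT *
FROM T1
WHERE T1.a = ANY ( SELECT T2.x, T2.y FROM T2 );
```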
================(Build #1841 - Engineering Case #350348)================
Nested block joins that used parallel index scans (i.e. on a database server
with a RAID array and on which the affected dbspace has been calibrated)
may have returned no rows, if there was a cast() expression on the left hand
side of the join condition. This expression may have been introduced either
explicitly in the text of the query, or automatically by the optimizer. This
has been fixed.
================(Build #1842 - Engineering Case #349048)================
The Database Translation utility dbtran was ignoring the -q (quiet: do not
print messages) command line option. This has now been fixed; no messages
will be displayed when -q is used.
================(Build #1842 - Engineering Case #349467)================
The following message:
unaligned access to x, ip=y
was being displayed in the system log when the server, or one of the database
utilities, was used, where x and y were some hex values.
These messages would not have caused any execution correctness problems,
but may have degraded performance slightly.
This has been fixed.
================(Build #1842 - Engineering Case #349901)================
The server would have crashed during execution of a query that used an index,
if all of the following conditions were true:
- the index was a compressed B-tree index
- the index contained a character or binary key column
- the length of the search value for this key column was almost the page
size, or was longer
- the search value for this key column was not a constant
This has now been fixed.
================(Build #1842 - Engineering Case #350245)================
The server could have crashed while a large volume of data was being fetched
through a TDS connection when the '-z' (display debugging information) command
line option was used. This has now been fixed.
================(Build #1842 - Engineering Case #351009)================
When using a procedure in place of a table in the FROM clause, that consisted
of nothing but a single select statement in the body of the procedure, if
the arguments and result columns were the bigint datatype, the server could
have crashed.
For example:
create procedure p( rstart bigint )
result ( x bigint )
begin
select row_num from dbo.rowgenerator;
end;
select * from p(1);
This has now been fixed. A workaround is to add some code to the procedure
body so that it is not just a single select.
================(Build #1842 - Engineering Case #352035)================
If a procedure was called in a FROM clause, and the procedure body was a
single SELECT statement with a common table expression and nothing else,
then the SELECT statement within the procedure would have failed with the
error 'Table {common table expression name} not found'.
For example, given the procedure:
create procedure p1()
begin
with c1(a) as (select 1) select * from c1
end;
The query 'select * from p1()' would have failed with the error 'Table 'c1'
not found'.
There are two possible workarounds:
1) Add some code like "DECLARE varname integer; set varname = 1;" to the
procedure to prevent inlining.
2) If the procedure takes no arguments, use a view instead.
This problem has now been fixed.
================(Build #1843 - Engineering Case #346649)================
The Sybase Central Plugin installed via the Merge Module could not be registered
with Sybase Central. This has now been fixed.
================(Build #1843 - Engineering Case #350530)================
If a mirror log existed, then running the Database Backup utility dbbackup,
with the -x (delete and restart the transaction log) command line option,
would have failed to delete the renamed mirror log file created during the
backup process. This has now been fixed.
================(Build #1843 - Engineering Case #351368)================
The fix for Engineering Case 350348, might have missed a cast() on multiple
columns. There was a chance that this error would have caused the optimizer
to hang on machines that have RAID arrays and have had their dbspaces calibrated.
This problem has now been corrected.
================(Build #1844 - Engineering Case #351130)================
Calling the trim() function on a string consisting only of whitespace, with
a length of less than approximately 30 characters (the actual length required
was platform dependent), could have caused the server to crash. This has
now been fixed.
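For example, a statement along these lines could have triggered the crash:

```sql
-- A short string consisting only of whitespace
SELECT trim( '        ' );
```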
================(Build #1845 - Engineering Case #346247)================
If a view or derived table that made use of string concatenation using the
'+' operator, was used as the NULL-supplying table in an outer join, the
column containing the concatenation may not have been NULL-supplied in some
cases. This has been fixed.
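For example, in a query of the following shape (table and column names are
hypothetical), DT.full_name may not have been NULL-supplied for rows of T1
with no match in T2:

```sql
SELECT T1.id, DT.full_name
FROM T1
LEFT OUTER JOIN
   ( SELECT T2.id, T2.first_name + ' ' + T2.last_name AS full_name
     FROM T2 ) DT
ON T1.id = DT.id;
```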
================(Build #1845 - Engineering Case #346277)================
If a Full Outer Nested Loops join were used when processing a query, and
one of the inputs to the query was a single-row GroupBy, then the aggregates
computed by the GroupBy could have been improperly NULL-supplied.
For example:
SELECT *
FROM
( select count(*)
from dbo.rowgenerator R1
) T1( c1 )
full JOIN (
select * from
rowgenerator R2
where R2.row_num < 2) R2 on R2.row_num = T1.c1
This has been corrected.
================(Build #1845 - Engineering Case #346362)================
Some statements, containing outer joins and derived tables or views with
constants or complex expressions, could have incorrectly failed with a syntax
error "derived table T has no name for column x".
For example, the following query would have failed this way:
SELECT *
FROM dbo.rowgenerator R1 LEFT JOIN
( select T1.a1
from ( select
T.table_name || 'def'
from dbo.rowgenerator R2, sys.systable T
) as T1(a1)
) T2
ON 1=0
This has now been fixed.
================(Build #1845 - Engineering Case #346416)================
If a GroupByOrdered method was used to compute a distinct aggregate function
with a constant argument, the wrong answer could have been returned. This
has now been fixed.
For example:
SELECT T1.a1,
count( DISTINCT T1.a1 ) sumd0
FROM
( SELECT 1 a1
FROM dbo.rowgenerator ) T1
GROUP BY T1.a1
================(Build #1845 - Engineering Case #350090)================
In version 8.0, histogram entries are made for a single value only when it
has a selectivity less than 1%. This strategy was found to be deficient for
some user databases. Consequently, version 9.0.0 was changed to create entries
for single values for the top N frequencies in the distribution of a column.
No lower bound was placed on the selectivity of frequencies for which single
value entries were made. As a result, the histogram became "noisy", with
a subsequent degradation in quality. An example of such a scenario was a
table that contained a column that was not declared unique, but actually
had only unique values. The server then collected too much information in
the column statistics, which could have caused an inefficient plan to have
been chosen.
Now, a lower bound of 0.01% is enforced on the selectivity of frequencies
that will be stored.
================(Build #1845 - Engineering Case #351643)================
Updating a column over a TDS connection (via ct_send_data) could have failed
with a syntax error. This is now fixed. The problem was caused by the server
reporting the full file path of the database as the dbname portion of the
described column rather than just the database name. For example, the server
would have described the column name as "e:\dir\MyDB.db.MyTable.BlobCol"
rather than "MyDB.MyTable.BlobCol".
================(Build #1845 - Engineering Case #351709)================
Executing queries that needed a sort operation, on servers with very little
available memory, may have crashed the server. This has been fixed.
This error is likely to have occurred only when the cache size was inadequate
for the server's workload. A workaround, which will reduce the chances of
the crash occurring, is to increase the cache size (if practical).
================(Build #1845 - Engineering Case #351739)================
The server could have crashed while recovering a database, if the following
had occurred:
1 - a connection had begun a transaction with an Insert, Update or Delete
2 - a checkpoint subsequently occurred prior to a commit or rollback of
the transaction
3 - another modification was made by the same connection on a table which
had a trigger defined
4 - the modification was written to the redo log
5 - the server crashed or was killed prior to the next checkpoint
This has been fixed.
================(Build #1846 - Engineering Case #351732)================
Executing a procedure, first with a CALL statement, and then with a SELECT,
may have failed if the procedure body was a single select. Any use of grouping,
aliases or subqueries in the single select could potentially have led to
the problem, but the exact conditions cannot be simply described. This has
been fixed.
A workaround is to execute the procedure using only CALL statements or only
SELECT statements. Another workaround is to add some useless code to the
procedure body so that it consists of more than a single SELECT. For example,
the following code could be added:
declare v int;
set v=1;
================(Build #1846 - Engineering Case #351821)================
A complex query involving many predicates and outer joins, may have crashed
the server. This has been fixed.
Note, cases where this would have happened should have been very rare. Rearranging
predicates in the WHERE clause of the query may be a workaround for the problem.
================(Build #1846 - Engineering Case #352048)================
The native methods that supported 'java.lang.reflect.Method.invoke()' did
not consider the case where a private method was called, using Java Reflection,
from a method of that same class. Thus a java class that used reflection
to return and invoke a method on itself, would have thrown an exception when
changing the access of the method being called from public to private. This
has been fixed, by examining the class of the caller, and allowing the private
method to be invoked if the method's class and the caller's class are the
same.
================(Build #1848 - Engineering Case #352129)================
If a Remote Server did not support correlation names, then there was a chance
the Remote Server would have encountered a syntax error when an UPDATE statement
involving remote tables was executed. This problem has now been fixed.
================(Build #1849 - Engineering Case #352147)================
In some circumstances, parallel index scans may have left some index pages
locked when finished. This was more likely to occur with a cold cache.
If this problem occurred, the following three symptoms would have resulted:
- The locked index pages would have remained permanently in cache, decreasing
the amount of available cache.
- The engine would have hung at shutdown.
- Connections making updates, inserts or deletes to column values contained
in the scanned index, such that the locked index pages need to be modified,
would have blocked indefinitely.
This problem has now been fixed.
================(Build #1849 - Engineering Case #352149)================
The hash table scan operator has now been implemented.
================(Build #1849 - Engineering Case #352289)================
The server could have crashed when dynamically growing the cache, if it had
previously been shrunk. This problem was unlikely to have occurred on single
processor systems. It has now been fixed.
================(Build #1850 - Engineering Case #352275)================
If an application exceeded the limit for prepared statements set by the database
option Max_statement_count, but ignored the error returned in this situation,
it could eventually have caused the server to run out of memory. The temporary
file would also have grown for each statement prepared. This has been fixed.
================(Build #1850 - Engineering Case #352471)================
When the server was run on multi-processor Windows platforms, tasks for connections
where the database option Background_priority was set to 'On', would have
been scheduled by the OS such that only one was running at a time, even if
other processors were idle. This has been corrected.
================(Build #1852 - Engineering Case #348610)================
Committing a transaction that deleted rows containing blobs, whose aggregate
size was larger than the current cache size, could have taken a very long
time. The time to do these deletes has been reduced significantly. As well,
blob columns created by servers containing this change, can be deleted even
more efficiently.
================(Build #1852 - Engineering Case #352938)================
If a query referencing proxy tables, used common table expressions (i.e.
the WITH clause), the local server could have crashed or returned incorrect
results. The server will now return an error indicating that such statements
are no longer supported with remote servers.
================(Build #1852 - Engineering Case #352939)================
If a server listening on multiple IP addresses, that had registered with
LDAP, failed to deregister (i.e. if it crashed), another server that tried
to start up with the same name on the same machine within the next 10 minutes
(or the value of search_timeout in asaldap.ini) would have failed with the
error "A database server with that name has already started". This has been
corrected.
================(Build #1852 - Engineering Case #353056)================
The result set of the compatibility procedure dbo.sp_sproc_columns, was not
ordered, contrary to ASE's behavior. The results are now ordered by "colid".
A workaround is to edit the procedure definition (e.g. using Sybase Central)
and add the following ORDER BY clause:
ORDER BY parm_id
================(Build #1853 - Engineering Case #347481)================
For queries with multiple predicates in the WHERE clause, if one predicate
had a very small selectivity, the server could have picked an inefficient
plan by choosing an inferior index. This has now been fixed.
================(Build #1853 - Engineering Case #350675)================
If a procedure was defined to return multiple result sets, debugging the
procedure and selecting "Step into", on a statement that returned one of
the result sets, would have caused the debugger to report the following message:
"The source could not be shown for the procedure because the database filter
is excluding it."
This has now been fixed.
================(Build #1853 - Engineering Case #353025)================
If a statement in a request log contained the ASCII characters 0x1a or 0x1c,
the request log could not be read using sa_get_request_times. This
has been fixed.
================(Build #1853 - Engineering Case #353026)================
Calling the system procedure "sa_get_request_times", could have caused a
server crash. This has now been fixed.
================(Build #1854 - Engineering Case #351603)================
In very rare circumstances, the server could have crashed with ambiguous
symptoms during query execution. The cause of the crash was likely to have
been memory corruption. This problem has now been fixed.
================(Build #1854 - Engineering Case #353148)================
For very expensive queries which returned a large result set, an access plan
having materializing operators was more likely to be used when the option
Optimization_goal was set to 'First-row'. Now, if a plan that doesn't contain
materialization exists, it is more likely to be picked as the best plan when
Optimization_goal is 'First-row'.
================(Build #1854 - Engineering Case #353334)================
Calling the system extended procedures xp_scanf, xp_sprintf or xp_startsmtp
could, in very rare circumstances, have caused the server to crash. These
procedures have now been fixed.
================(Build #1854 - Engineering Case #353430)================
If the transaction log was renamed while one or more connections had open
transactions, the log offset of the oldest incomplete transaction could have
been recorded incorrectly. This might have prevented the MobiLink client
dbmlsync from deleting out-of-date transaction logs. This has been fixed.
================(Build #1855 - Engineering Case #353437)================
The column statistics for string columns with a declared length of up to
7 characters in blank padded databases could be incorrect, resulting in poor
query plans. This problem has now been fixed.
For existing databases where incorrect statistics on some columns are suspect,
the problem can be rectified by recreating the statistics on the suspect
columns. If nothing is done, however, the server will automatically correct
the faulty statistics over time.
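For example, the statistics on a suspect column could be recreated as follows
(the table and column names are hypothetical; this is a sketch of the
approach, not taken from the original text):

```sql
-- Discard the existing histogram for the column, then rebuild it
-- from the current data in the table:
DROP STATISTICS ON Customers ( City );
CREATE STATISTICS Customers ( City );
```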
================(Build #1856 - Engineering Case #349043)================
Selectivity estimates of 0% from histograms, for a predicate of the form
"T.x = constant", may have been ignored when a multi-column index was costed.
For example, the number of rows for a unique index on table T, on columns <T.A,
T.B>, for which the fence posts were built using two equality predicates
- "T.A = constant1" and "T.B = constant2" - was set to 1 row. If the selectivity
of the predicate "T.A = constant1" was known to be 0%, that selectivity was
incorrectly ignored.
This has been fixed.
================(Build #1856 - Engineering Case #352418)================
For a given query, the estimation of the number of rows returned by an index
scan, where the index was over multiple columns, could have had a large degree
of inaccuracy if the data was skewed (i.e. some data values occurred significantly
more often than others), and sargable equality or IN predicates existed for
all columns. This has been fixed.
================(Build #1856 - Engineering Case #353463)================
If the decrypt() function was passed a zero-length string (not NULL), it
would have caused the assertion failure 105001 "Decrypt input is too short".
This has been fixed; a decryption error (-851) will now be returned.
================(Build #1856 - Engineering Case #353712)================
If connections were being made concurrently which required a server to be
autostarted, in rare timing dependent cases, the server could have hung or
crashed, or the client could return one of a number of errors including SQLCODEs
-832, -816, -308, -100, -85, -82. This has been fixed.
Note it is still possible for the client to get errors when concurrent connects
are done which require a server to autostart and different database files
are involved, if the EngineName, ServerName or ENG parameter is not specified.
================(Build #1856 - Engineering Case #353793)================
When parsing a statement of the type "select rewrite( 'some_string' )", the
server may have crashed, if 'some_string' was not a valid SQL statement,
for example 'XXXXXXX'. This has now been fixed.
================(Build #1857 - Engineering Case #353913)================
The second and subsequent connections over SPX would not have used PORT connection
parameters. PORT options were only read when the port was created, and ignored
at all other times, which caused problems with subsequent connections that
wanted to use different options.
For example:
connect using 'con=a;eng=g1;links=spx(dobroadcast=NO;host=host1)'
connect using 'con=b;eng=g2;links=spx(dobroadcast=NO;host=host2)'
If done from the same application, the second connect would not have worked,
as the PORT options would have been ignored, so host2 would not have been
looked at.
This has now been fixed.
================(Build #1857 - Engineering Case #354107)================
A new collation 1255HEB has been added which provides support for the Windows
Hebrew character set cp1255. Ordering is based on byte-by-byte ordering of
the Hebrew characters.
On Hebrew Windows systems, the database server will choose 1255HEB as the
collation for a new database, if no collation is specified. 1255HEB will
now appear in the output from "dbinit -l", and in the list of collations
in Sybase Central.
================(Build #1858 - Engineering Case #353753)================
Running the Validation utility dbvalid to validate a read-only database would
have caused the error:
A write failed with error code: (5), Access is denied.
Fatal error: Unknown device error
This has been fixed.
================(Build #1858 - Engineering Case #354117)================
If the time on the machine running an LDAP server was ahead of the time on
a machine running an ASA server or client application, LDAP entries would
have been considered stale. Using search_timeout=0 is a workaround. This
has been fixed; timestamps in the future are now considered current.
================(Build #1858 - Engineering Case #354138)================
If a server running on a multi-homed machine (i.e. one with more than one IP address)
registered itself through LDAP, ASA clients may not have been able to connect
to it, depending on network topology. The clients would only have attempted
to use the first IP address listed in LDAP. This has now been fixed; all
IP addresses listed will be tried until a successful connection is made.
================(Build #1858 - Engineering Case #354150)================
The optimizer may have chosen a less than optimal plan for simple queries,
if more than one index candidate existed, and at least one was a unique
index. This has been fixed.
For example, for the query:
select * from T
where T.A = c1
and T.B = c2
if there existed a unique index i1 on columns T.A, T.X1, T.X2 and T.X3,
and an index i2 on columns T.A and T.B, the query would have used index i1
instead of i2.
================(Build #1858 - Engineering Case #354151)================
A new Arabic collation 1256ARA has been added, which supports the Arabic
Windows character set cp1256. The collation will now appear in the output
from "dbinit -l", and in the Sybase Central Create Database wizard.
On an Arabic machine, creating a database without specifying a collation
will now default to 1256ARA instead of 1252LATIN1.
The collation is a byte-by-byte ordering which should give reasonable ordering
for Arabic. It does not produce an ordering that follows all of the standard
rules for ordering Arabic, as this is outside the capabilities of existing
ASA collation support.
================(Build #1858 - Engineering Case #354157)================
When starting a database "read-only" using the "-r" command line option,
the server could have failed with assertion 201851, if the database had been
created by ASA 8.0.0 or newer. This has now been fixed.
================(Build #1858 - Engineering Case #354766)================
Normally the BACKUP and RESTORE statements append a line to the backup.syb
file each time they are executed. This is done to record the backup or restore
operation that was performed. To prevent the backup.syb file from being updated,
the HISTORY OFF clause can now be added to the statement.
For example:
BACKUP DATABASE DIRECTORY 'd:\backup' HISTORY OFF
================(Build #1859 - Engineering Case #353893)================
If an event was scheduled to execute only once, and the event completed at
the same time as the database was shut down, a server crash could have resulted.
This has been fixed.
================(Build #1860 - Engineering Case #354300)================
HTTP requests with Message Headers containing empty field-values were rejected
with HTTP status code 400 "Bad Request". Since RFC 2616 section 4.2 states
that a Message Header field-value is optional, such requests are now allowed.
================(Build #1861 - Engineering Case #354599)================
A transaction attempting to checkpoint could have deadlocked with other transactions.
The most likely scenario was deadlocking with a transaction attempting a
rollback. This was more likely to appear on multi-processor and Unix platforms.
This problem has been fixed.
================(Build #1862 - Engineering Case #353803)================
SQL statements in the transaction log containing 128-byte identifiers would
have prevented the database from recovering after a dirty shutdown. Identifiers
shorter than this would not have caused a problem. This has been fixed.
As a side effect of this fix, users can no longer create objects whose names
are zero bytes long.
================(Build #1862 - Engineering Case #354116)================
If two transactions performed concurrent operations that involved scanning
a table sequentially, and the table contained more than 100 pages of data,
there was a chance of database corruption. One of the operations must have
been an insert or update, while the other must have performed a sequential
scan of the table. Indexed access in conjunction with inserts or updates
would not have triggered the problem. This was more likely to occur on
multiprocessor and Unix systems; while still possible, it was unlikely to
occur on single processor Windows systems. This has now been fixed.
================(Build #1862 - Engineering Case #354773)================
If an external procedure attempted to return a string longer than 65535 bytes,
via a single call to the set_value callback function, the string would have
been truncated. This has been fixed. A workaround is to call set_value multiple
times to build up the result in pieces, each being shorter than 65535.
================(Build #1863 - Engineering Case #354096)================
The server would go into an infinite loop, with nearly 100 percent CPU usage,
when executing a query like the following:
select (select systable.first_page from systable where systable.table_id
= 1) as id0,
id0 as id1
from syscolumn
group by id1
The problem occurred under the following conditions:
- the query had a subselect in the select list or in the WHERE clause
- the subquery had an alias name ("id0" in the above query) and the alias
name was aliased by a second alias name ("id0 as id1" see above), so that
both alias names were syntactically identical
- the second alias name was part of a GROUP BY element
This problem has been fixed.
================(Build #1863 - Engineering Case #354381)================
When running on Windows 2003, the server could have crashed on startup, if
the machine had no TCP/IP address, or was unplugged from the network. This
has been fixed.
================(Build #1863 - Engineering Case #355126)================
When running on Unix platforms, the server could have stopped transferring
data over a TCP/IP connection that was using ecc_tls encryption. This would
most likely have happened while transferring large amounts of data, such
as blobs, to slow clients. The server would eventually have disconnected
the client if the default idle time-out was used. This has been fixed.
================(Build #1864 - Engineering Case #353678)================
If either of the -ar or -an command line options was used with Unload utility
DBUNLOAD, column statistics from the unloaded database would not have been
preserved in the new database. This could have resulted in different query
execution plans being chosen when using the new database. This has been fixed.
================(Build #1864 - Engineering Case #355245)================
Attempting to unload a database created prior to SQL Anywhere version 5 would
have resulted in an error that user "dbo" did not exist. If the dbo user
was created, a different error would have been given, since the view dbo.sysusers
would not have existed. This has been fixed. A workaround is to run the Upgrade
utility dbupgrad before unloading the database.
================(Build #1864 - Engineering Case #355299)================
When the server ran databases created with ASA versions 4.x and earlier (or
databases upgraded from ASA versions 4.x and earlier), queries that made
use of index scans over fully hashed indexes could have returned incorrect
results. An optimization for index scans in older databases was incorrect.
This optimization has now been removed, so a drop in performance when using
older databases will likely be noticed. An unload/reload is recommended if
the resulting performance is not acceptable.
================(Build #1865 - Engineering Case #355459)================
When sending or receiving multi-piece strings on a heavily loaded system,
the server could have deadlocked, causing a hang. This has been fixed. A
workaround would be to increase the number of tasks available to service
requests (-gn). Alternatively, a dba user could use a pre-existing connection
with the DEDICATED_TASK option set, to manually break the deadlock by cancelling
one or more executing requests.
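The dedicated connection mentioned above could be prepared ahead of time with a statement such as the following sketch (requires DBA authority):
SET TEMPORARY OPTION DEDICATED_TASK = 'ON';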
================(Build #1865 - Engineering Case #355557)================
In some circumstances, idle HTTP client connections did not get disconnected
from the server after the HTTP Time-out period had expired. This has been
fixed.
================(Build #1866 - Engineering Case #355831)================
Executing an ALTER TABLE statement which attempted to modify a column and
then drop the column in the same statement would have caused the server to
crash. Attempting to modify and drop a column in the same ALTER TABLE statement
will now generate the error "ALTER clause conflict". These changes must be
made with separate ALTER TABLE statements.
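For example, instead of combining the two clauses, the changes could be issued separately (table and column names are illustrative):
ALTER TABLE t1 MODIFY c1 VARCHAR(100);
ALTER TABLE t1 DROP c1;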
================(Build #1866 - Engineering Case #355965)================
When run on SMP systems using processors from Intel's P6 family, (as well
as Pentium 4 and XEON), the server could have hung when receiving multi-piece
strings via shared memory connections. ODBC, OLEDB and Embedded SQL clients
were also affected. It has been fixed.
================(Build #1868 - Engineering Case #355527)================
If a server goes down dirty (e.g. due to a power failure), there can be a
partial operation at the end of the log. If such a log was applied to a database
by using the -a (apply named transaction log file) server command line option,
restarting the server using that database and log file (without -a) could
have caused the server to fail to start with the message "not expecting any
operations in transaction log". The problem would only have occurred if the
incomplete operation was the first operation of a new transaction and there
were no other transactions active after all complete operations had been
applied. The problem has been fixed by removing the partial operation from
the log after the log is applied (or recovery is completed).
================(Build #1868 - Engineering Case #356223)================
If the same outer reference in a subquery appeared in both the GROUP BY list
and the SELECT list then error -150 (Invalid use of an aggregate function)
would have been reported.
For example:
select 1 from employee e1
where 1 = (select e1.emp_id from employee e2 group by e1.emp_id)
This has been fixed.
================(Build #1868 - Engineering Case #356446)================
The evaluation of the LIKE predicate could have returned incorrect results
in some circumstances. For example, the following would have evaluated to
FALSE, when it should be TRUE: " '5554' LIKE '%554%' ". This problem has
been resolved.
================(Build #1870 - Engineering Case #352793)================
If the database option Truncate_date_values was set to OFF before populating
a row containing a DATE column with a value including both date and time,
and the Truncate_date_values option was subsequently set back to its default
of ON, updating the row in any way which caused the row to be moved to another
page, would have resulted in an assertion failure 105400. This has been fixed.
A workaround is to set the option to OFF, manually update any DATE values
to eliminate the time component, then set the option to ON. A statement such
as the following could be used:
update t set datecol = date(datecol)
where datepart(hour,datecol)<>0
or datepart(minute,datecol)<>0
Normally this option should be left at its default setting.
================(Build #1870 - Engineering Case #356795)================
On Windows 95, 98 or ME, if a network server had both TCP/IP and SPX connections,
the server could have hung with 100% CPU usage. This has been fixed.
Note if using a network server, Windows NT, 2000, XP or 2003 are recommended
over Windows 95, 98 or ME, to ensure better performance and reliability.
================(Build #1871 - Engineering Case #351658)================
When executing statements that required sorting large result sets, during
certain phases of the sort, canceling the request would not have been processed
in a timely manner. Extra checks for CANCEL have been added.
================(Build #1872 - Engineering Case #353299)================
The server could have failed with a "file-system full" error, if more temporary
file space was requested than was available on the drive or volume that hosted
the temporary file. This has now been fixed. The server will now check the
amount of temporary file space that a connection needs, and if it is greater
than its allowable quota (see below), it will fail the request. This check
is enabled by a new public option 'Temp_space_limit_check', which defaults
to 'Off'. When set to 'Off', no limit checking occurs. When set to 'On',
if a connection uses more than its quota of temporary file space, then any
requests will fail with an SQLSTATE_TEMP_SPACE_LIMIT error.
A connection's temporary file quota is based on two factors: the maximum
size of the temp file (i.e., the maximum size it can grow to), and the number
of active database connections. The maximum size of the tempfile is calculated
as the sum of its current size and the amount of disk space available on
the partition containing it. When limit checking is enabled, a connection
will be checked for quota violation only after the temp file has grown to
80% or more of its maximum size, AND it requests more temp file space. When
this occurs, any connection that uses more than the maximum temp file space
divided by the number of active connections will fail.
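For example, temporary file space limit checking could be enabled for all connections with:
SET OPTION PUBLIC.Temp_space_limit_check = 'On';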
================(Build #1872 - Engineering Case #353667)================
When the server was running on Windows CE, the encrypt() function may have
returned an invalid string, and the server may also have had problems loading
strongly-encrypted databases. This could also have occurred on Unix and other
Windows platforms, though it was extremely unlikely. This has now been fixed.
Note, as of this change the cryptographic library, (dbaes.dll on Windows,
and libdbaes.so or libdbaes.sl on Unix), is no longer used and can be removed.
================(Build #1872 - Engineering Case #356262)================
The string representation of a double value may have been generated with too
many digits when the server was run on Linux x86 systems.
For example:
select cast( cast( 9.9 as double ) as char(30) )
would have returned '9.90000000000000036' instead of '9.9'. This has been
corrected.
================(Build #1872 - Engineering Case #356595)================
If a RAISERROR or PRINT statement contained a subselect in the format string
or in a PRINT expression, the server may have crashed or returned an error.
This has been fixed.
================(Build #1873 - Engineering Case #346382)================
If a Transact-SQL SELECT INTO statement referenced a view of a base table
that contained at least one CHECK constraint, the SELECT INTO statement could
have caused an erroneous syntax error, particularly if the view was a grouped
view. In that case, the SQLCODE returned would have been -149. This has been
fixed.
As a workaround, the Transact-SQL SELECT INTO statement can be split into
two separate statements. The first declaring a local temporary table and
the second being a non-Transact-SQL SELECT INTO statement.
================(Build #1873 - Engineering Case #355098)================
An ALTER TABLE statement that added, modified or deleted a table's CHECK
constraint, a column's CHECK constraint or renamed a column, had no effect
on INSERT, UPDATE or DELETE statements inside stored procedures and triggers,
if the procedure or trigger was executed at least once prior to the ALTER
TABLE statement. This problem has been fixed.
================(Build #1873 - Engineering Case #356762)================
A non-fatal assertion failure: 105200 "Unexpected error locking row during
fetch" could have been reported when executing an outer join with a temporary
table on the null-supplying side of an outer join. This would have appeared
to an application as error -300 "Run time SQL error" or -853 "Cursor not
in a valid state". This has now been fixed.
================(Build #1873 - Engineering Case #356858)================
Attempting to use the ntile() function would have caused the server to report
the non-fatal assertion 106500, "Error building aggregate", rather than report
the expected error, "olap extensions not supported". This problem has been
fixed.
================(Build #1874 - Engineering Case #351001)================
On CE devices, queries could have failed with the error "Dynamic memory exhausted",
if a Join Hash operator was used in an access plan and the server cache size
was too small. This has been fixed by disabling the Join Hash operator during
optimization on CE devices when the available cache has fewer than 2000 pages,
resulting in access plans that do not contain such joins.
================(Build #1874 - Engineering Case #355012)================
Calling any of the external mail routines (such as xp_sendmail or xp_startmail)
could have caused the server to crash. A problem with passing NULL parameters
has been fixed.
================(Build #1874 - Engineering Case #355292)================
Updating the version of jConnect to a newer version than the one shipped
with ASA (i.e. newer than 5.5) would likely have resulted in positioned
updates failing with an exception. Versions of jConnect newer than 5.5 support
the new status byte that was added to allow distinguishing between a NULL
string and an empty string. When performing positioned updates, jConnect
sends the KEY column values so that the row being updated can be uniquely
identified. This status byte was not supported for KEY columns, but the server
was still expecting it, resulting in a protocol error. This has been fixed.
================(Build #1874 - Engineering Case #356604)================
Compound statements containing Transact-SQL 'SELECT INTO #temptable' syntax
would have incorrectly reported error -141 "Table 'temptable' not found",
if workload capturing was enabled in the Index Consultant. This has been
fixed.
================(Build #1874 - Engineering Case #357653)================
The HTTP web services component of the server could have crashed if it received
a SOAP request with an argument that had a length greater than the database's
pagesize. This has been fixed.
================(Build #1874 - Engineering Case #357689)================
A row limit (ie 'FIRST' or 'TOP nnn') could have been ignored on DELETE and
UPDATE statements. This was most likely to occur on smaller tables and for
very simple statements.
Example:
create table temp1 (nID int not null);
insert into temp1 (nID) values (1);
insert into temp1 (nID) values (1);
commit;
delete first from temp1; //deletes both rows
This problem has been corrected. A workaround is to attach a redundant ORDER
BY clause to the statement.
delete first from temp1 order by 1+1; //deletes just one row
================(Build #1875 - Engineering Case #357842)================
If the server received an HTTPS request where the URI (including parameters)
or post data was longer than 1024 bytes, it may have responded with error
code 408: "Request Time-out". This has been fixed.
================(Build #1875 - Engineering Case #357853)================
Calling the graphical_plan() function on a query where a host variable appeared
in the document argument to the openxml() function that was inside another
graphical_plan() call, would have caused the server to crash.
For example:
select graphical_plan( 'select * from openxml( graphical_plan(?), ''/'')
with (type xml ''.'' ) ')
This is now fixed.
================(Build #1875 - Engineering Case #357981)================
The presence of duplicate indexes increased the optimizer's search space
when looking for an appropriate index to use, thus possibly preventing it
from finding an optimal access plan for some statements. Duplicate indexes are
no longer considered during optimization, unless named in an index hint.
An index idx1 is considered a duplicate of the index idx2 if and only if
the following conditions hold:
- idx1 and idx2 are defined on the same columns, having the same order,
and having the property ASC/DESC exactly the same for each column.
- idx2 is a primary key index, but idx1 is not
OR idx2 is declared unique index and idx1 is not
OR idx2 is a foreign key index and idx1 is not
OR idx2 is declared a clustered index and idx1 is not
OR none of the above, in which case the engine randomly chooses idx1 to
be the duplicate of idx2.
For example, if idx1 is declared as:
create index idx1 on T (T.A ASC, T.B DESC )
and idx2 is declared as :
create unique index idx2 on T (T.A ASC, T.B DESC)
then idx1 is considered a duplicate of idx2.
For base tables, an analysis of the index list is performed to flag the
duplicate indexes when indexes are loaded, or an index is added or dropped.
Note 1:
The property of being a duplicate is used only by the optimizer when a statement
is optimized.
Note 2:
It is strongly recommended to drop useless duplicate indexes whenever possible.
Note 3:
The properties of uniqueness and clustering are carried over from one index
to the other. In the example above, the engine derives that the index idx1
is also unique because idx2 is unique.
================(Build #1875 - Engineering Case #358040)================
In rare low-memory situations, the server could have crashed or quietly ended.
This has been fixed.
================(Build #1875 - Engineering Case #358149)================
Operations in a stored procedure or batch, on string variables that were
exactly 255 bytes in length, could have caused the resultant string to have
become corrupted. This has been fixed.
================(Build #1875 - Engineering Case #358197)================
Executing a statement with a view that referenced remote tables and used
Common Table Expressions, would likely have caused a server crash. The fix
for Engineering Case 352938 was incomplete, as it did not handle views. This
has now been corrected.
================(Build #1876 - Engineering Case #357967)================
Optimizer statistics were also being flushed when procedures were unloaded.
This has now been corrected.
================(Build #1876 - Engineering Case #358312)================
The server's HTTPS web services could eventually have exhausted the memory
available to the server, which could have resulted in a fatal error. This
has been fixed.
================(Build #1877 - Engineering Case #351255)================
When running on Unix platforms, a Remote Data Access connection via shared
memory to another (or the same) ASA server, may have failed if the connection
persisted longer than the value set by the -ut server command line option,
(30 minutes by default), on the server that made the connection. This problem
has been fixed.
Note, both the client libraries and server must be updated.
================(Build #1877 - Engineering Case #356216)================
The creation or execution of a stored procedure may have caused a server
crash if the parameter list contained the special values SQLCODE or SQLSTATE,
and a Transact-SQL variable (i.e. a variable whose name starts with @) was
declared in the procedure body. This has now been fixed.
================(Build #1877 - Engineering Case #358369)================
When connected to a remote server via ODBC and using callable or prepared
statements, getting the update count could have returned a function sequence
error, instead of giving the proper count. This has now been fixed.
================(Build #1877 - Engineering Case #358497)================
If a database with auditing enabled required recovery, the server may have
indicated during recovery that the log file was invalid. If an audit record
in the transaction log was only partially written, the audit record would
have appeared corrupt. This is now ignored if the partial audit record is
at the end of the log.
================(Build #1878 - Engineering Case #357683)================
When an application closed a cursor, the server was not freeing the cursor's
resources before dropping the associated prepared statement or when the connection
ended. This caused problems for applications that open many cursors on the
same prepared statement. These applications would get errors when attempting
to open a cursor, such as "Resource governor for 'cursors' exceeded", if
the option MAX_CURSOR_COUNT was not set, or "Cursor not open". Now the cursor's
resources are freed when a cursor is closed.
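Note that the cursor limit itself is governed by the MAX_CURSOR_COUNT option, which could, for example, be adjusted for all connections with a statement such as:
SET OPTION PUBLIC.MAX_CURSOR_COUNT = 100;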
================(Build #1878 - Engineering Case #358354)================
Insert statements with queries containing non-deterministic functions and
EXISTS, ANY, ALL or IN predicates, that were rewritten as a join with a DISTINCT,
could have produced incorrect results. In these cases, the distinct operation
was applied to the non-deterministic function in addition to the other columns
where distinct elimination was required for correctness. Hence, rows which
would otherwise have been considered identical were made different by the
non-deterministic function, causing the insert to affect more rows than
expected. This has been fixed.
A workaround is to embed the insert in a compound statement, where the query
is run and the inserts are done using a cursor ranging over the query.
This is a followup issue to Engineering Case 346886, which missed this special
case.
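As a sketch of the workaround (table, column and expression names are illustrative), the insert could be driven by a cursor inside a compound statement:
begin
    for lp as curs cursor for
        select rand() as rnd from src do
        insert into t ( c1 ) values ( rnd );
    end for;
end;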
================(Build #1879 - Engineering Case #358785)================
Valid nested queries that utilized aliased outer reference expressions could
have failed with a syntax error (SQLCODE -149). As an example, the following
query (over the asademo database schema) would have failed due to the use
of "Q" in the GROUP BY clause of the subquery:
select line_id as Q, count(*)
from sales_order_items
where quantity != 89999
group by line_id
having count(*) not in (select max(quantity) from product
group by Q, description )
order by 1, 2
Both the outer block and the nested subquery must have contained aggregation
and/or GROUP BY clauses for this problem to have occurred.
It has now been fixed.
================(Build #1879 - Engineering Case #358956)================
If a CREATE DOMAIN statement executed in a database with a multibyte collation,
such as Japanese, defined a default value containing multibyte characters
with \x5c as the second byte, the default value would have been displayed as
a hexadecimal string in Sybase Central. This problem did not exist if the
collation was UTF8. The problem is now fixed.
================(Build #1881 - Engineering Case #357983)================
A correlated subquery used in a predicate of the form "expr = (correlated
subquery)" which, in addition, satisfied the conditions to be executed as
a decorrelated subquery, may have caused a crash of the server during optimization.
This has been fixed.
For example:
select *
from R
where R.X = (select max(T.X)
from T
where T.Y = R.Y)
The above query may be executed by computing the decorrelated subquery,
equivalent to the following:
select R.*
from R, (select max(T.X), T.Y
from T
group by T.Y) as DT( max, Y)
where R.X = DT.max and R.Y = DT.Y
Note 1:
The decision to decorrelate the subquery is made on a cost basis during optimization.
Note 2:
The crash would have occurred, if and only if a Sort Merge Join method was
considered for joining the decorrelated subquery with the rest of the query
block's tables. The Sort Merge Join strategy must have been chosen in the
best plan for the crash to happen.
================(Build #1881 - Engineering Case #358667)================
On busy servers, TLS connections may have timed out with the error -829 "TLS
handshake failure", after 10 seconds. The TLS timeout time has now been changed
to be the same as the connection's liveness timeout time.
Note, this problem affected HTTPS connections as well. The timeout in that
case is now set to the same as the idle timeout (the TIMEOUT parameter to
HTTPS).
This also fixed a server crash that may have occurred if an older dbtls
DLL was used.
================(Build #1881 - Engineering Case #359142)================
Two new collations have been added, 1252SPA Spanish, and 874THAIBIN Thai.
Collation 1252SPA is similar to 1252LATIN1, but causes N and Ñ (N-tilde)
to be sorted as separate characters. On a Windows system configured for Spanish,
dbinit and the CREATE DATABASE statement will now default to this collation,
which will also appear in the Sybase Central Create Database wizard for ASA.
Collation 874THAIBIN does not attempt to provide linguistically-correct
sorting for Thai characters, but sorts the characters in binary order. The
collation does provide character set mappings to cp874 (and TIS-620, which
is compatible with cp874). On a Windows system configured for Thai, dbinit
and the CREATE DATABASE statement will now default to this collation, which
will also appear in the Sybase Central Create Database wizard for ASA.
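For example, a new database using the Thai collation could be created explicitly with either of the following (the file name is illustrative):
dbinit -z 874THAIBIN thai.db
or
CREATE DATABASE 'thai.db' COLLATION '874THAIBIN';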
================(Build #1882 - Engineering Case #359212)================
The debug server log (output generated by the -z command line option) could
have contained extraneous "Communication function <function name> code <number>"
diagnostic messages, for example after stopping a database using dbstop.
Similarly there could have been extraneous instances of this diagnostic message
in the client LogFile.
These extraneous diagnostic messages have been removed. Please note that
this diagnostic message can still validly occur under some circumstances
and may be useful to help Technical Support diagnose other problems.
This change also prevents some spurious "connection ... terminated abnormally"
messages.
================(Build #1882 - Engineering Case #359242)================
The server could have crashed when attempting to get a db_property for a
database that was in the process of being started. When a database is being
started, a "stub" database object is added to the server's list of databases;
this object has NULL pointers for a number of fields, including the statistics
counter array. Calling the db_property() function would have attempted to
get statistics for the stub database. This has been fixed by only getting
properties for active databases.
================(Build #1882 - Engineering Case #359265)================
When run on Sun Solaris systems, the server would have tried to load an incorrect
LDAP shared object name, libldap_r.so. This has been corrected; the server
now loads libldap.so.
================(Build #1884 - Engineering Case #360584)================
The Join Nested Block (JNB) operator will no longer be considered a fully
pipelined join operator while optimizing with the database option Optimization_goal
set to 'first-row'. Although the JNB operator is not a fully materializing
operator, it may block and force the usage of a work table. Now, while optimizing
for the 'first-row' optimization goal, this operator is less likely to appear
in the plan chosen.
================(Build #1887 - Engineering Case #360464)================
If a grouped query had a base table's column T.A in its select list, and
the table T qualified to be eliminated by the join elimination algorithm,
then T.A might have been incorrectly renamed. This has been fixed.
For example:
The query below returned a result set which had the first column renamed
"A2":
create table T1 ( A1 int primary key);
create table T2 ( A2 int,
foreign key (A2 ) references T1(A1) );
select A1
from T1 key join T2
group by A1
Note that the query was rewritten by the join elimination algorithm into:
select A2
from T2
group by A2
Now, the query is rewritten into:
select A2 as A1
from T2
group by A2
A work around for this problem is to alias all the columns referenced in
the SELECT list with their own names.
For example, the query Q' below is not affected by this bug:
Q':
select A1 as A1
from T1 key join T2
group by A1
================(Build #1888 - Engineering Case #357965)================
Under certain conditions, SELECT DISTINCT queries with complex ORDER BY expressions
may have received an erroneous syntax error.
For example, the following query would have failed with SQLCODE -149:
Select distinct e.emp_lname + space(50) + '/' + e.emp_fname
from employee e, employee e2
where e.emp_id = e2.emp_id and e2.dept_id = 100 and (e.city = 'Needham'
or e2.city = 'Burlington' )
order by e.emp_lname + space(50) + '/' + e.emp_fname
This problem has been corrected.
================(Build #1888 - Engineering Case #360455)================
When unloading or rebuilding a database, a non-clustered index may have
been recreated as a clustered index. This would have happened if there was
at least one table with a clustered index, and a subsequently unloaded table
definition had a non-clustered index with the same index id as the clustered
index.
This has now been fixed.
================(Build #1889 - Engineering Case #356739)================
If a trigger on table T referred to a column of T that had been dropped or
renamed, then the server could have crashed when processing a query referring
to T after the server was restarted. For the crash to have occurred, the
referencing query must have been sufficiently complicated to allow predicates
to be inferred. The cause of the crash has been fixed, and other changes
have already made it impossible to rename or drop columns referenced by triggers.
================(Build #1891 - Engineering Case #360311)================
A query with a large number of OR'ed predicates (about 20,000 on Windows
systems) may have caused the server to crash.
For example:
select T2_N.b
from T1, T2, T2_N
where T1.a = T2.b and T2.b = T2_N.b and
( T1.a = 1 or T1.a = 2 or T1.a = 3 or .... or T1.a = 20000)
The number of OR conditions to cause the crash depended on the available
stack size. This problem has been fixed. Now these queries return an error
"Statement size or complexity exceeds server limits".
================(Build #1892 - Engineering Case #358006)================
An estimated cost for using a work table is now added to the total estimated
cost of the plan that needs a work table at the root. When both materializing
plans and non-materializing plans are costed for a query, the extra cost
added when a work table is needed gives a more accurate cost estimate for
the whole query. This will improve the optimizer's choice of the most appropriate
plan for a query.
================(Build #1892 - Engineering Case #359741)================
If a synchronization was performed prior to changing the database options
Truncate_timestamp_values to ON and Default_timestamp_increment to a value
which would disallow timestamp values with extra digits of precision, the
next synchronization would have caused a server crash. The server will now
display an assertion indicating that an attempt to store an invalid timestamp
value in a temporary table was made. The options must be changed before the
first synchronization is performed.
================(Build #1892 - Engineering Case #360936)================
When two instances of the same table are joined on the primary key then,
under certain conditions, it is possible to eliminate one of the instances.
For example, given the following table:
create table T ( A int primary key,
B int);
the statement:
DELETE from T as T0
from T as T0 natural join T as T1
is equivalent to:
DELETE from T as T0
where T0.B = T0.B
This optimization was only done for SELECT statements when the table to
be eliminated was not updatable. Now, this optimization is done for DELETE
and UPDATE statements when the table to be eliminated is not the table to
be modified.
================(Build #1892 - Engineering Case #361184)================
A query with a large WHERE clause containing the conjunction and disjunction
of many literals could have caused the server to hang with 100% cpu usage
and eventually run out of memory and fail with a "Dynamic Memory Exhausted"
message. This has been fixed.
================(Build #1892 - Engineering Case #361188)================
The collation 1250LATIN2 was missing case conversions for "Letter O with
double acute" and "Letter U with double acute". As a result, the functions
UPPER() and LOWER() would have failed to convert these letters to their corresponding
case, and comparisons would also have failed to match these characters with
other O and U characters when the case was different. This has now been fixed,
but existing databases will need to be rebuilt to get the new conversions.
================(Build #1895 - Engineering Case #360237)================
Memory-intensive operations, such as a sort, hash join, hash group-by, or
hash distinct, could have caused the server to fail with a fatal memory exhausted
error, if they were executed in an environment where the operation could
not be completed entirely in the available memory. This issue affected
all platforms, and has now been fixed.
================(Build #1895 - Engineering Case #360694)================
The server could have deadlocked, and appear to be hung, if a transaction
yielded to a checkpoint (by context switching, waiting for a lock or waiting
for network I/O) after rolling back to a savepoint. This has been fixed.
================(Build #1895 - Engineering Case #361965)================
On Unix systems, the server used 100% of the CPU while shutting down if
there were still active TCP/IP connections, although the server did eventually
shut down. This has been fixed.
================(Build #1895 - Engineering Case #361999)================
Index statistics for SYSATTRIBUTE could have become out of date, resulting
in errors being found when running the Database Validation utility dbvalid.
This problem has now been resolved.
================(Build #1895 - Engineering Case #362004)================
When a server registered its IP addresses in LDAP, it included the localhost
address as well (127.0.0.1), which was not useful to clients in finding the
server. This address will now no longer be included when the server registers
with LDAP.
================(Build #1898 - Engineering Case #360885)================
Using variable assignments in a positioned update (e.g. update T1 set id
= @V1, @V1 = @V1 + 1 where current of curs), would have caused a server crash.
Now, variable assignments in a positioned update are supported.
================(Build #1898 - Engineering Case #362312)================
The restructuring of column statistics in the server could have caused memory
corruption, which could result in various symptoms, including server crashes
and assertion failures. The chance of this problem happening was remote: it
could only have occurred if the memory allocator returned the same memory as
that used in a previous invocation of the restructuring of the same histogram.
This problem has now been resolved.
================(Build #1899 - Engineering Case #362585)================
Queries that used a multi-column index could have returned incorrect results.
For this to have occurred, all of the following must have been true:
- The index must have been comparison-based.
- One of the first few columns being indexed must have been a short character
(or binary) column [the column must have been fully hashed]. This column
must not have been the last column being indexed.
- The query must have contained a comparison involving this column [say
with domain char(n)] and a string s with length > n whose n-byte prefix appeared
as a value in the column.
This problem has now been corrected.
================(Build #1899 - Engineering Case #362723)================
When running on Linux systems using the 2.6.x kernel, core files generated
when the server crashed would have contained only the stack trace of the
thread causing the fault. This has been fixed. Core files will now have full
stack trace information for all threads. On Linux 2.4.x kernels though, the
core file will continue to contain only the stack trace of the thread responsible
for the fault.
================(Build #1900 - Engineering Case #359745)================
If a stored procedure declared a temporary table and returned a result set,
opening a cursor on the procedure would have made the temporary table visible
outside the procedure. This has been fixed.
================(Build #1900 - Engineering Case #361512)================
If a stored procedure contained a CREATE VIEW statement, the second execution
of the CREATE VIEW statement may have failed with a "column not found" error. This
has been fixed.
A workaround is to use EXECUTE IMMEDIATE to create the view.
================(Build #1900 - Engineering Case #362220)================
If another transaction attempted to query or modify a table while the fast
form of TRUNCATE TABLE was executing on the same table, the server could
have failed an assertion, and in some cases, possibly corrupted the database.
This was not likely to occur on single processor Windows platforms. This
problem has been corrected.
================(Build #1901 - Engineering Case #363883)================
If a query has sargable predicates of the form "T.col = constant" or "T.col
IS NULL" for at least two leading columns of an index, the estimate of the
number of rows returned is now computed by probing the index.
For example, given the following index:
create index NAME_COLOR on product( name, color);
the estimated number of rows returned by the query:
select count(*) from product
where color = 'white'
and name = 'tee shirt'
with "color = 'white'" and "name = 'tee shirt'" is computed by probing
the index NAME_COLOR with the tuple ('tee shirt', 'white').
Note, if the index does not exist, the estimated number of rows with "color
= 'white'" and "name = 'tee shirt'" is computed as: [ size of table 'product']
X [selectivity of the predicate "color = 'white'"] X [selectivity of the
predicate "name = 'tee shirt'"]. Such computation, unless the data is completely
uncorrelated, can be inaccurate.
================(Build #1901 - Engineering Case #363884)================
If the access plan for an outer join's null-supplying side provided an order
for the columns in the ON clause, and any of the columns in the ON clause
was equated with a constant, the join operator JMO was preceded by an unnecessary
SORT. This has been fixed.
Example:
The query below:
select *
from dbo.rowgenerator R1 left outer join rowgenerator R2 ON ( R2.row_num
= R1.row_num and R2.row_num = 10 )
where R1.row_num = 10
may have had the access plan "R1<RowGenerator> JMO [ SORT [ R2<RowGenerator> ] ]".
The SORT was unnecessary, as all the rows generated by the null-supplying
side "R2<RowGenerator>" had R2.row_num equal to 10, and hence were already
ordered on the R2.row_num column. Now, the plan should be "R1<RowGenerator>
JMO [ R2<RowGenerator> ]".
================(Build #1902 - Engineering Case #362727)================
The server could have evaluated the LIKE predicate incorrectly for some patterns
containing multiple wild card characters. In order for the incorrect answer
to have been computed, the search pattern must have been "simple", containing
% as the only wild card character, but with multiple instances. For example,
the predicate " '1.1.' LIKE '1.%.%.' " would have incorrectly evaluated to
TRUE. The server will now compute the correct answer.
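The corrected evaluation can be illustrated directly. Each % may match zero
or more characters, but the pattern contains three literal dots while the
string contains only two:

```sql
SELECT '1.1.' LIKE '1.%.%.';   -- now FALSE (was incorrectly TRUE)
```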
================(Build #1903 - Engineering Case #357550)================
An UPDATE or DELETE statement attempting to modify a local table, when joined
with a remote table, or a remote table joined with a local table, should
fail with an error indicating that the query could not be executed. However,
a recent change removed this error and the server would have either hung
or crashed. The problem has been resolved and the original error is now given.
Also, attempting to use the PUT statement on a remote table would have caused
a server crash. An error is now given.
================(Build #1903 - Engineering Case #363251)================
When using the FORWARD TO statement in interactive mode (i.e. issuing a "FORWARD
TO <server>" statement first and then issuing individual statements, all
of which are to be executed on the remote server), there was a possibility
that one, or all, of the statements would have been executed locally instead
of remotely. There was also a possibility that the statements would not have
been executed at all. This problem was most likely to have occurred when
connected via jConnect, or if the remote server name had a space in it, or
any other character that required quoting. This problem has now been fixed.
================(Build #1903 - Engineering Case #363394)================
A procedure call that used a WITH clause and a column of type CHAR, VARCHAR,
BINARY, or VARBINARY with no length argument, would have defaulted to the
maximum size for the type, rather than 1.
For example:
select c from p() with (c char)
would have returned a column of type char(32767), rather than the correct
type of char(1). A NUMERIC column, with no precision or scale argument,
would have defaulted to (255,0) rather than the database defaults. In addition,
a server crash may have occurred when attempting to create a procedure containing
a SELECT that had this problem. These problems have now been fixed.
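A build-independent workaround is to give the length explicitly; here p()
stands for the hypothetical procedure from the example above:

```sql
-- An explicit length avoids relying on the defaulting rules.
SELECT c FROM p() WITH ( c CHAR(1) );
```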
================(Build #1903 - Engineering Case #363397)================
The server could possibly have crashed on shutdown, if the Java VM had been
used. This would have occurred if the VM needed to load additional classes
during shutdown. Now, failure to load a new class during shutdown is handled
by Java's exception mechanism, and the VM will still shut down.
================(Build #1904 - Engineering Case #362311)================
When estimating the selectivity of a predicate of the form "column = (correlated
subselect)", the server was treating the predicate as "column = (single
value)". This assumption could have led to overly low selectivity estimates,
resulting in poor plans. In reality, the correlated subselect can cause a
large number of values to be compared with the column, resulting in many
rows.
Now, the server will use a "guess" estimate of 50% for the selectivity of
these predicates.
================(Build #1904 - Engineering Case #363615)================
Attempting to use GROUPING SETS, CUBE or ROLLUP, in the GROUP BY clause of
a remote query that would have been executed in full passthrough mode, would
have resulted in an incorrect GROUP BY clause being sent to the remote server.
It did not contain any GROUPING SETS, CUBE or ROLLUP clauses. This problem
has now been fixed.
================(Build #1904 - Engineering Case #363722)================
The access plan for the cursor in a FOR statement, when defined by a SELECT
query block, was not being cached and reused, when used inside a stored procedure
or a function. This has now been fixed.
In the example below, the SELECT query block will now be considered for
caching:
for for_q as c_q dynamic scroll cursor for
select p.quantity as Q
from product p, sales_order_items s
where s.quantity = p.quantity
order by Q
do
set V = Q
end for;
Please see the following documentation for more information:
ASA SQL User's Guide
Query Optimization and Execution
How the optimizer works
Access plan caching
================(Build #1905 - Engineering Case #351765)================
The REPLACE() function was slow when operating on long strings. The execution
time was O(n^2), where n is the number of characters in the replacement string.
The REPLACE() function has now been rewritten to scan the input string only
once.
================(Build #1905 - Engineering Case #363754)================
On non-x86 Unix platforms, the server could have crashed on startup if one
of the databases was strongly encrypted but an incorrect encryption key was
specified. This has now been fixed.
================(Build #1905 - Engineering Case #363756)================
If a subquery contained an outer reference to a column from a view that was
part of the null-supplying side of an outer join, and the view defined the
column as a constant value, the server could have crashed when trying to
build a hash join. This has been fixed.
================(Build #1906 - Engineering Case #362722)================
The installer for the 9.0.1 1883 EBF made a call to a function that is not
implemented on Windows NT, and as a result generated the error:
"The procedure entry point, Module32Next could not be located in the dynamic
link library KERNEL32.dll".
This error is typically reported when the version of the DLL found does
not match the expected version, or the function has not been exported from
the DLL. In this case, the problem was related to the latter, as "Module32Next"
is not a valid function on Windows NT. The code has been changed to only
call this function if it exists.
================(Build #1906 - Engineering Case #363739)================
If the user DBA was removed (i.e. REVOKE CONNECT FROM DBA), and another user
was assigned DBA authority, the Database Upgrade utility dbupgrad would have
failed with the following error:
Error in file upgrad60.sql at line 2682 with sqlcode -140
SQL error (-140) -- User ID 'DBA' does not exist
The user DBA was unnecessarily being granted select authority on some tables.
These GRANT statements have now been removed.
================(Build #1906 - Engineering Case #363861)================
A query such as the following:
select 'a', 'a' from employee group by 'a', 'a'
where a constant string appears in both the select list and the GROUP BY
clause, could have caused the server to crash. This has been fixed.
================(Build #1907 - Engineering Case #364204)================
If an application attempted to get the metadata of a result set from a remote
query, the remote tables would have had names like vt_1, vt_2, ... instead
of the proxy table name. The metadata will now have the proxy table name.
As a result, ODBC, OLEDB and JDBC applications can now use this proper metadata
information to prepare the correct update/delete statements. For example,
if a query selecting a set of rows from a single proxy table was executed
in DBISQL, the user would not be able to use the table editing feature in
DBISQL because the table name for the result columns would be vt_1 instead
of the proxy table name. With this fix, users can now edit result sets
from a single proxy table using DBISQL.
================(Build #1908 - Engineering Case #363333)================
Using the Database Unload utility dbunload with the -an command line option
to reload a database into a new file, could have failed if the character
set of the OS was different from the character set of the database being
reloaded. Failures could have taken the form of a syntax error during the
execution of a CREATE DATABASE statement, or a mangled filename for the newly
created database. This problem has been corrected.
================(Build #1908 - Engineering Case #364059)================
If the server was shut down in the middle of capturing a workload for
the Index Consultant, it could have hung with 100% CPU usage. This has been
fixed.
Note that a DBA can determine whether or not capturing is in progress by
viewing the server console.
================(Build #1908 - Engineering Case #364246)================
A reference to a column not in the GROUP BY list, when made from an IN condition,
was not being reported as an error.
For example:
select if emp_id in (1) then 0 else 1 endif
from employee
group by state
This problem is now fixed.
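For comparison, the reference becomes legal once emp_id is added to the
GROUP BY list (or wrapped in an aggregate):

```sql
SELECT IF emp_id IN (1) THEN 0 ELSE 1 ENDIF
FROM employee
GROUP BY state, emp_id;
```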
================(Build #1908 - Engineering Case #365508)================
When used with multiple Java threads and synchronized Java methods, Java
stored procedures could have produced unexpected results. This has now been
fixed.
================(Build #1909 - Engineering Case #362895)================
In some cases, the selectivity estimate for a predicate '<column> IS NULL'
could have been set incorrectly to 0. This could have led the query optimizer
to select poor execution plans, for example, selecting an index to satisfy
an IS NULL predicate instead of another, more selective predicate.
For this problem to have occurred, a query must have contained a predicate
of the form:
T.col = <expr>
The expression <expr> must have been an expression whose value was not known
at query open time. For example, <expr> could have been a column of another
table, or it could have been a function or expression that was not evaluated
at optimization time. The predicate must have been the first predicate evaluated
for the associated table scan, and the table T must have been scanned once
with the value of <expr> being NULL. In these circumstances, the selectivity
of 'T.col IS NULL' would be incorrectly set to 0. This has been fixed.
If an application opened a cursor over a query that contained an index scan
with a single column in the index being searched for equality, the selectivity
estimate could have been lowered incorrectly if the application scrolled
the cursor using absolute fetches and did not visit all of the rows of the
result set but ended the scan after the last row of the result set. This
problem would have resulted in selectivity values being stored that were
lower than expected, and could have led the query optimizer to select poor
execution plans by picking an index on this column instead of a better index.
This problem has also been fixed.
================(Build #1909 - Engineering Case #364365)================
When using Interactive SQL dbisql, and connected via the iAnywhere JDBC Driver,
if a "FORWARD TO server {...}" statement was executed, where "server" is
the name of a Remote Data Access server and the information inside the curly
braces "{}" was anything that the remote server understood, it would likely
have failed with one of two errors. If the statement sent to the remote server
returned a result set, then the error "Remote server does not have the ability
to support this statement" would have been given. If the statement sent to
the remote server did not return a result set, then dbisql would have complained
that the result set had errors. This problem has now been fixed.
Note that this problem does not exist if using dbisql connected via jConnect.
Also, if the "FORWARD TO" statement used single quotes instead, then both
jConnect and the iAnywhere JDBC Driver will work fine.
================(Build #1912 - Engineering Case #364680)================
When processing a single-row GROUP BY query, the server could have crashed
if at least one of the aggregate functions had been specified with the DISTINCT
qualifier and the hash group-by method had been selected by the query optimizer.
For example, the following query could generate the crash (depending on
the access plan selected by the optimizer):
select count( distinct dept_name+'a' )
from ( select distinct dept_name
from department
where dept_id < 1 ) T
This problem has been fixed.
================(Build #1912 - Engineering Case #365038)================
If a statement that modified a remote table was executed within a savepoint,
no error was given, even though remote savepoints are not supported. Now,
if an UPDATE, DELETE or INSERT statement attempts to modify a remote table
while inside a savepoint, it will fail with the error "remote savepoints
are not supported". Note that remote procedure calls within a savepoint will
also fail with this error, as there is a chance the remote procedure will
modify tables on the remote database.
================(Build #1915 - Engineering Case #365188)================
If a view was defined with the "WITH CHECK OPTION" clause and had predicates
using subqueries, then opening an updatable cursor or executing an UPDATE
statement might have caused a server crash. This has been fixed.
For example:
create view products
as select p.* from
prod as p
where p.id =
any(select soi.prod_id from sales_order_items soi KEY JOIN sales_order
so
where so.order_date > current date )
with check option
The following INSERT statement would have crashed the server:
insert into products (id) values ( 1001 )
================(Build #1915 - Engineering Case #365498)================
Executing a query that involved a proxy table and the openxml() function,
as in the following example:
select * from proxy_test p, (select PKID, C1 from openxml( @varxml, '//testxml/test')
with(PKID int 'PKID', C1 varchar(255) 'C1')) t where
p.PKID = t.PKID
could have caused a server crash. This problem has been fixed.
================(Build #1915 - Engineering Case #365603)================
Making an RPC call, or executing a FORWARD TO statement, may have failed
to return a result set, even though one was returned by the remote server.
Note that this problem only happened when the Remote Data Access class was
either ASAJDBC or ASEJDBC. This has been corrected.
================(Build #1915 - Engineering Case #365707)================
The JOIN MERGE FULL OUTER join method was not considered by the optimizer
during optimization of queries using FULL OUTER JOINs. This has been fixed.
================(Build #1915 - Engineering Case #365730)================
If a server attempted to start what appeared to be a valid database file,
and the database failed to start for any reason, then unexpected behavior
could have occurred on future requests to the same server. The unexpected
behavior could have included server crashes, assertions, and possibly database
corruption. This has been fixed.
================(Build #1918 - Engineering Case #358350)================
If a table was part of a publication, altering a trigger on that table could
have caused the server to fail with assertion 100905 on a subsequent INSERT,
UPDATE or DELETE. For this to have occurred, the table must have been referenced
in a stored procedure and the procedure must have been called at least once
before the ALTER and once after. This has been fixed.
================(Build #1918 - Engineering Case #365147)================
When using the -m command line option (truncate transaction log after checkpoint),
if a transaction log file was being actively defragmented or virus scanned
at the time a checkpoint occurred, then the server could have failed with
assertion 101201. The operating system will not allow the file to be recreated
until the virus scan or defragmentation has completed. As a result, the server
will now wait and retry the operation several times. A workaround would be
to remove the transaction log file from the list of files that are actively
scanned or defragmented.
================(Build #1918 - Engineering Case #366096)================
A call to the RANK(), DENSE_RANK(), PERCENT_RANK(), or CUME_DIST() functions
now reports error -154 : "Wrong number of parameters to function", when called
with an argument. Previously, the error -134 : "Feature not implemented",
would have been given.
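These are window functions and take no arguments; a valid call supplies the
ordering through an OVER clause instead (table and column names illustrative):

```sql
-- RANK() itself takes no arguments; the window defines the ordering.
SELECT emp_id,
       RANK() OVER ( ORDER BY salary DESC ) AS salary_rank
FROM employee;
```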
================(Build #1918 - Engineering Case #366150)================
The server could have become deadlocked when executing the system procedure
sa_locks. For this to have occurred, two connections must have been issuing
sa_locks calls concurrently, or the user definition for the owning connection
was not in cache, which is not a likely occurrence. This problem has been
fixed.
================(Build #1918 - Engineering Case #366282)================
The CREATE SCHEMA statement was not being logged correctly in the transaction
log if it contained at least one CREATE VIEW statement. The statement logged
was the last CREATE VIEW statement, instead of the entire CREATE SCHEMA statement.
As a consequence, the CREATE SCHEMA statement was not recoverable. Also,
recovery could have failed with a "Failed to redo an operation" assertion,
if the logged CREATE VIEW statement could not be executed, e.g., because
it referred to a table created within the original CREATE SCHEMA statement.
This problem has been resolved.
================(Build #1918 - Engineering Case #366292)================
If a database created with version 6, 7 or 8 software, was upgraded using
the Upgrade utility dbupgrad, or by executing the ALTER DATABASE UPGRADE
statement, then the resulting reload.sql script generated by the Unload utility
dbunload, would have contained a CREATE PROCEDURE statement for sa_proc_debug_detach_from_connection
which would have failed. This has been fixed.
Workarounds include dropping the procedure after the upgrade, or simply
not performing the upgrade (since it should be unnecessary).
================(Build #1918 - Engineering Case #366532)================
Computed columns with Java expressions may have been incorrectly parsed, causing
the error "ASA Error -94: Invalid type or field reference". This problem
occurred if the computed column belonged to a table B, and there existed
another table A, used in the same statement, having a column with the same
name as the Java class name. This has been fixed.
For example:
The following query returned the error "ASA Error -94: Invalid type or field
reference":
select * FROM A WHERE A.ID NOT IN ( SELECT B.ID FROM B );
Table B has the computed column "EntityAddressId" referencing the Java class
"Address", and table A has a base table column named "Address". Note that
the computed column doesn't have to be referenced in the query.
CREATE TABLE A
(
ID int,
"Address" varchar (10)
);
CREATE TABLE B
(
ID int,
"EntityAddressId" numeric(10,0) NULL COMPUTE (Address >> FindAddress(0,
'/', 0))
);
================(Build #1919 - Engineering Case #364540)================
As of version 9.0.0, Java objects in the database are no longer supported.
A query of a computed column that referenced a Java object would have failed
with an "Invalid type or field reference" error. Now, querying a Java computed
column will fail with a "Not Implemented Java Objects" error.
================(Build #1919 - Engineering Case #366167)================
Database recovery could have failed when mixing Transact-SQL and Watcom SQL
dialects for Create/Drop table statements. This has been fixed. The following
example could have caused database recovery to fail if the server crashed
before the next checkpoint.
create global temporary table #test (col1 int, col2 int);
drop table #test;
create global temporary table #test (col1 int, col2 int);
drop table #test;
A workaround is to only use #table_name for creation of local temporary
tables.
================(Build #1919 - Engineering Case #366233)================
If a stored procedure contained a statement that performed a sequential scan
of a global temporary table, executing the statement could have caused the
server to crash. This problem would have occurred if the following conditions
held:
- The statement plan was cached
- The table was declared as "ON COMMIT DELETE ROWS"
- The table had more than 100 pages when the plan was cached
- COMMIT was called before the statement was executed
This problem has been fixed. The crash could be avoided by setting the option
'Max_plans_cached' to 0.
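The workaround mentioned above can be applied server-wide as follows (sketch):

```sql
-- Disable plan caching until the fixed build is installed.
SET OPTION PUBLIC.max_plans_cached = 0;
```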
================(Build #1919 - Engineering Case #366364)================
Database validation, using either the Validation utility dbvalid, or executing
the VALIDATE DATABASE statement, could have failed to detect some corrupted
LONG VARCHAR columns. Assertion 202000 is now generated when a corrupted
LONG VARCHAR is encountered.
================(Build #1919 - Engineering Case #366375)================
When evaluating the use of an index to satisfy a range predicate on a table
that was occupying a large fraction of the server cache, the query optimizer
could have failed to make use of the index, resulting in poor query access
plans. This problem has been resolved.
================(Build #1919 - Engineering Case #366422)================
If a procedure had a subquery that involved a remote table, and if that subquery
generated a warning, then the server would have incorrectly given an error
rather than returned the result of the subquery.
For example:
SET @c = (SELECT count(*) FROM t)
where t is a remote table. In this case, if the table t was empty, then
a NOT FOUND warning would have been generated when the subquery was evaluated
and instead of setting the variable @c to 0, the server would have returned
the error "ASA Error -823: OMNI cannot handle expressions involving remote
tables inside stored procedures ".
This problem has now been fixed.
================(Build #1919 - Engineering Case #366562)================
If the subsume_row_locks option is on and a table T is locked exclusively,
the server should not obtain row locks for the individual rows in T when
executing an UPDATE. This was not the case if T was updated through a join
(or if T had triggers, computed columns, etc.), or if T was modified via
a keyset cursor. Now, no locks are acquired in this situation.
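A sketch of the scenario covered by the fix, with hypothetical tables T and S:

```sql
SET TEMPORARY OPTION subsume_row_locks = 'On';
LOCK TABLE T IN EXCLUSIVE MODE;
-- The exclusive table lock now subsumes the row locks, even for a
-- joined update.
UPDATE T
SET T.val = S.val
FROM T JOIN S ON S.id = T.id;
```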
================(Build #1919 - Engineering Case #366705)================
A user-defined data type can now be renamed using:
ALTER {DOMAIN | DATATYPE} usertype RENAME newname
The name of the user type is updated in SYSUSERTYPE.
Note that any procedures, triggers, views or events which refer to the user
data type must be recreated, as they will continue to refer to the old name.
Renaming of Java data types is not permitted.
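For example (type names illustrative):

```sql
CREATE DOMAIN street_address VARCHAR(80);
-- Rename it; SYSUSERTYPE is updated, but dependent procedures,
-- triggers, views and events must be recreated.
ALTER DOMAIN street_address RENAME mailing_address;
```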
================(Build #1920 - Engineering Case #366574)================
Queries specifying the "TOP n" clause, that were optimized with the Optimization_goal
set to "all-rows" may have had a less than optimal access plan. Such queries
are now costed based on the estimated cost to produce first n rows instead
of using the estimated total cost.
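For example, a query of the following shape is now costed on producing its
first 10 rows when the goal is 'all-rows' (table and column names illustrative):

```sql
SET TEMPORARY OPTION optimization_goal = 'all-rows';
SELECT TOP 10 *
FROM sales_order
ORDER BY order_date DESC;
```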
================(Build #1921 - Engineering Case #365453)================
Calling a wrapper procedure for a Java class which returned a result set
would have leaked memory and could have crashed the server. This has now
been fixed.
================(Build #1921 - Engineering Case #366267)================
When the database option Ansi_close_cursors_on_rollback was set to 'ON',
the Validation utility dbvalid would have failed to validate all the tables
in the database. The error 'cursor not open' would have been displayed. This
has been fixed.
================(Build #1921 - Engineering Case #366552)================
When making a remote procedure call to a remote server whose class was either
ASAJDBC or ASEJDBC, if the remote procedure was a Transact-SQL procedure,
with either an INOUT or OUT argument that returned a result set, then it
was likely that the rows in the result set would not have been returned. The
INOUT or OUT parameters were incorrectly being fetched first, prior to fetching
the result set. In JDBC, fetching the value of an OUT or INOUT parameter
will close all result sets. Now the values of OUT or INOUT parameters are
fetched only when the procedure has completed execution.
================(Build #1922 - Engineering Case #365953)================
If a user-defined function contained a COMMIT, calling the function in a
SELECT statement within a batch or procedure would have caused the cursor
for the batch or procedure to be closed if the cursor was not declared WITH
HOLD. This may have resulted in unexpected error messages like "Column '@var'
not found". Now these cursors will not be closed.
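On earlier builds, the workaround was to open the cursor WITH HOLD so that
the COMMIT inside the function could not close it (sketch; the function f
and table t are hypothetical):

```sql
BEGIN
    DECLARE cur CURSOR FOR SELECT f( t.x ) FROM t;
    -- f() contains a COMMIT; opening WITH HOLD keeps the cursor open
    -- across it on builds without this fix.
    OPEN cur WITH HOLD;
END
```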
================(Build #1922 - Engineering Case #366920)================
Calling the DATEPART() function, with the date-part CalWeekOfYear, would
have returned the wrong week number if the year started with a Friday, Saturday
or Sunday, and the day of the date-expression passed was a Sunday, but not
the very first one. For example, DATEPART( cwk, '2005/01/09' ) would have
incorrectly returned 2 instead of 1. This has now been fixed.
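The fixed behaviour, restated as a query (2005 began on a Saturday, and
2005/01/09 is a Sunday falling in calendar week 1):

```sql
SELECT DATEPART( cwk, '2005/01/09' );   -- now returns 1 (previously 2)
```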
================(Build #1922 - Engineering Case #367116)================
Executing a LOCK TABLE ... IN EXCLUSIVE MODE statement on a table did not
prevent other transactions from subsequently obtaining exclusive locks on
rows in the table when executing INSERT ... ON EXISTING UPDATE statements,
although it would have prevented explicit UPDATE statements from subsequently
updating rows. This could have resulted in applications deadlocking unexpectedly.
This has been fixed.
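A sketch of the now-blocked sequence, with a hypothetical table t(id, val):

```sql
-- Connection A:
LOCK TABLE t IN EXCLUSIVE MODE;

-- Connection B now blocks until A commits, instead of acquiring an
-- exclusive row lock on t:
INSERT INTO t ( id, val )
ON EXISTING UPDATE
VALUES ( 1, 'x' );
```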
================(Build #1922 - Engineering Case #367221)================
When selecting from a single proxy table, the rowcount value returned by
the server was incorrectly always 1, instead of -1. As a result, applications
which rely on the rowcount information would have assumed that the query
only had a single row in the result set. Returning a rowcount of -1 tells
the application that the rowcount information is an estimate only and should
not be relied upon. Note that after this fix, the rowcount for remote queries
will always be -1 even if the ROW_COUNTS option is set.
================(Build #1922 - Engineering Case #367222)================
LDAP functionality had been inadvertently removed from both the server and
client software. As a result, the LDAP TCPIP parameter would not have been
recognized, and connection attempts would not have tried LDAP. This functionality
has now been restored.
================(Build #1922 - Engineering Case #367252)================
If the get_identity() function was used to allocate an identity value for
a table, but the table itself was not modified by the current connection,
or any other connection, then the value of the SYSCOLUMN.max_identity column
was not updated at the next checkpoint. If the database was shutdown and
restarted, get_identity() would then have re-used values previously generated.
This has been fixed.
Note that the use of an empty table having an autoincrement column, together
with get_identity(), may still have resulted in values being re-used if the
database was not shut down cleanly and values were allocated since the last
checkpoint. Depending on how the values were used, it may have been possible
to correct the starting value in a DatabaseStart event by calling sa_reset_identity()
with the next value to use. For example:
declare maxval unsigned bigint;
set maxval = (select max(othercol) from othertab);
call sa_reset_identity('IDGenTab', 'DBA', maxval);
================(Build #1923 - Engineering Case #367337)================
Attempting to get the application information of a jConnect or Open Client
application, using SELECT CONNECTION_PROPERTY( 'APPINFO', ... ), would have
always returned NULL for APPINFO. Now, the server will attempt to display
the application name, the application host and the application PID, if that
information has previously been provided by the application at connect time.
It should be noted that in some cases, the application information provided
by the client is not completely accurate; however, the server will still
display the inaccurate information.
================(Build #1923 - Engineering Case #367342)================
If the first byte of the DELIMITED BY string for a LOAD TABLE statement was
greater than or equal to 0x80, the LOAD TABLE statement would not have recognized
any delimiters in the input file. This is now fixed.
================(Build #1923 - Engineering Case #367366)================
If a Remote Data Access server was created in a database to connect back
to the same database, then creating proxy tables would have hung the server.
This problem was originally addressed in ASA 7.0.0, but subsequent changes
necessitated a different fix. Now, the error "Unable to connect, server definition
is circular", (SQLCODE -657), will be generated, but only for ODBC connections.
JDBC connections will still have problems.
================(Build #1923 - Engineering Case #367378)================
The server could have become hung in an endless loop while performing a sort.
When in this loop, the server would not have responded to the cancel request.
For the problem to have occurred, a sort must have been performed with a
small amount of memory available to the connection, followed by a decrease
in the amount of available memory. The problem could have occurred even with
large cache sizes, if a child of the sort consumed most, but not all, of
the connection's allotted memory. Extra checks have now been added for out-of-memory
situations when sorting.
================(Build #1924 - Engineering Case #360460)================
An UPDATE statement may have failed unexpectedly with the error "No primary
key value for foreign key". For this problem to have occurred, multiple rows
had to have been updated, the table had to have a trigger that contained
a SET TEMPORARY OPTION statement, and the table had to have at least one
foreign key with a referential action defined. This problem has been fixed;
the error no longer occurs in this situation. The workaround was to remove
the SET TEMPORARY OPTION statements.
================(Build #1924 - Engineering Case #363767)================
Deleting or updating a large number of rows could have taken longer than
a comparable operation done with a server from version 8.0.1 or earlier.
This would only have been observed when using a database created with version
8.0.0 or later. This has been corrected.
================(Build #1924 - Engineering Case #367451)================
If a Remote Data Access server was created in a database to connect back
to the same database, then creating proxy tables would have hung the server.
This is a followup to Engineering Case 367366, which resolved the problem
with circular ODBC connections. Now, the error "Unable to connect, server
definition is circular", (SQLCODE -657), will be generated for circular JDBC
connections as well.
================(Build #1924 - Engineering Case #367533)================
The server could have crashed when requested to generate an access plan for
a remote query, either by the dbisql plan feature, or an explicit call to
the explanation() function, when that query would have been executed in full
passthrough mode. An example of such a query is 'INSERT INTO T VALUES(2)'
where T is a proxy table. The server will now return no plans for such queries.
================(Build #1925 - Engineering Case #366401)================
Rebuilding databases on Unix systems, using the Unload utility dbunload with
the -ar or -an command line options, would have failed during the rebuild
operation, if the source database had table or row constraints that specified
stored procedures. This has been fixed.
================(Build #1925 - Engineering Case #366510)================
The quality of access plans chosen for queries with joins to tables with
few rows, that can make use of indexes, has been improved.
================(Build #1925 - Engineering Case #367456)================
If a web client made an HTTP or HTTPS connection to the database server,
it could have caused other connections to hang until the HTTP or HTTPS connection
either completed or was killed. This has now been fixed.
================(Build #1925 - Engineering Case #367661)================
Executing an INSERT ... ON EXISTING UPDATE statement could have caused a
deadlock in the server, if another transaction was updating the table that
was being modified, and a checkpoint (or DDL statement) was pending. This
has been fixed.
================(Build #1925 - Engineering Case #367663)================
The server could have failed to drop a temporary table on a database opened
read-only. This would only have occurred if the temporary table was declared
using Transact-SQL syntax (i.e. "#table_name"). This has been fixed.
================(Build #1925 - Engineering Case #367716)================
The userid 'dbo' was unable to use EXECUTE IMMEDIATE to execute a string
containing a multi-statement batch. This restriction has now been removed.
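For illustration, a batch of this shape executed by dbo (the batch body here is hypothetical) would previously have been rejected:

```sql
EXECUTE IMMEDIATE
    'begin
        declare cnt int;
        select count(*) into cnt from systable;
        message string( ''table count: '', cnt );
     end';
```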
================(Build #1925 - Engineering Case #367743)================
Attempting to use a view that referenced a proxy table, and contained a subselect
which used an aggregate function, would have failed with the error "invalid
use of an aggregate function". This has now been fixed.
================(Build #1926 - Engineering Case #367688)================
Support for textual options to the Transact-SQL statement SET TRANSACTION
ISOLATION LEVEL, have been added for compatibility with Sybase ASE and Microsoft
SQL Server. Applications can now issue the following variants:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
which correspond to setting the isolation level of the connection to 0,
1, 2, or 3 respectively.
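For comparison, each variant is equivalent to setting the existing ISOLATION_LEVEL option directly; for example, the REPEATABLE READ form corresponds to:

```sql
SET TEMPORARY OPTION ISOLATION_LEVEL = 2;   -- REPEATABLE READ
```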
================(Build #1926 - Engineering Case #367913)================
On systems running Windows 95, 98, or ME, with the LDAP feature in use, the
database server would have registered the invalid IP address "0.0.0.0" in
LDAP, as well as the real machine's address. The list of IP addresses from
these versions of Windows includes 0 (i.e. 0.0.0.0). This address is now
ignored when creating the list for LDAP.
================(Build #1926 - Engineering Case #367935)================
When run on Unix systems, the server could have crashed when a non-DBA user
was connected, if auditing was enabled. This has been fixed.
================(Build #1926 - Engineering Case #367936)================
Predicates of the form "constant IS NULL" may have been incorrectly evaluated
as FALSE, for special constants such as "CURRENT REMOTE USER" or "CURRENT
PUBLISHER". This has been fixed.
================(Build #1927 - Engineering Case #368127)================
If a remote server was created using one of the Remote Data Access ODBC classes,
opening a cursor on a proxy table from that server would have leaked about
8 bytes of memory for each remote column. Memory allocated at cursor open
time, to hold the indicator for each column, is now freed when the cursor
is closed.
================(Build #1928 - Engineering Case #368168)================
Error messages returned to applications connected to the server via TDS (
e.g. connections using jConnect ), would have been mangled when the OS character
set and the client character set were different. The problem was caused by
the header of the TDS error message having been mistakenly created in the
OS character set, while the main body of the error message was in the client
character set. The header is now also created in the client character set.
================(Build #1928 - Engineering Case #368274)================
It was possible for the server to drop TLS connections, when under heavy
load. Although rare, it was more likely to occur on a multi-processor machine.
If the -z command line option, ("display debugging information"), was used
on the server, a message indicating that a bad packet was received would
have been displayed on the server console. This has been fixed.
================(Build #1929 - Engineering Case #364372)================
Sargable predicates using subqueries were not used for partial index scans
in some cases. The sargable predicates having this problem were only the
ones referencing at least two different tables from the main query block.
An example of such a predicate is "T1.col1 = (select max(R.col2) from R where
R.col3 = T2.col4)" where T1 and T2 are two tables of the main query block.
This has been fixed.
An example:
The sargable predicate " t1.t1_id = (select MIN(t3.t1_id) from t3 where
t3.t2_id=t2.t2_id)" can now be used to access the table t1 through a partial
index scan on the primary key index "t1". Hence, the query may have the
access plan
" t2<seq> JNL t1<t1> : GrByS[ t3<t3> ]" .
create table t1 ( t1_id int primary key);
create table t2 ( t2_id int );
create table t3 ( t1_id int, t2_id int primary key );
select * from t1 join t2 on ( t2.t2_id = t1.t1_id +1 )
where t1.t1_id = (select MIN(t3.t1_id) from t3 where t3.t2_id=t2.t2_id)
================(Build #1929 - Engineering Case #365837)================
After completing an Index Consultant analysis, if the 'Requests' pane of
the analysis was selected and a particular statement was viewed, the server
would have appeared to be hung at 100% CPU usage. This only occurred when
there were a large number (many thousands) of captured SQL statements. It
was also more likely to occur when the captured SQL statement strings were
very similar to each other, or were very long. The server was not actually
hung, and would have eventually displayed the details for the request, although
this would have taken an inordinate amount of time. This has been corrected.
================(Build #1929 - Engineering Case #368231)================
Executing an ALTER VIEW statement with a select statement that would have
returned the warning "The result returned is non-deterministic." would have
crashed the server. This has been fixed.
================(Build #1929 - Engineering Case #368551)================
The server could have crashed when executing Java code. This has been fixed.
================(Build #1930 - Engineering Case #361210)================
A SELECT INTO #temp statement, using a recursive view, may have caused the
server to crash if used inside a stored procedure. This has been fixed.
For example:
create procedure P1( arg int )
begin
with recursive v( a, b ) as (
select T1.a,T1.b from T1 where T1.a = arg
union all
select T1.a,T1.b from T1 join v on(T1.a = v.b)
)
select * into #temptab from v;
end
================(Build #1930 - Engineering Case #365072)================
Installing ASA 9.0.1 for HP-UX on Itanium from the CD-ROM results in the
error:
/usr/lib/hpux32/dld.so: Unable to find library 'libstdc++.so.4'.
This has been fixed and a new CD-ROM is available.
There is also a workaround: create a symbolic link from libstdc++.so.5 to
libstdc++.so.4:
ln -s libstdc++.so.5 libstdc++.so.4
This symlink should be removed once the installation has been completed.
================(Build #1930 - Engineering Case #367863)================
If the EXECUTE IMMEDIATE statement was used to perform a CALL containing
variables as procedure arguments, the parameter values would have failed
to be passed to the called procedure. If the procedure contained OUTPUT parameters,
the output variables would not be set when the procedure returned, or a "variable
not found" error would have been reported. This has been fixed.
================(Build #1930 - Engineering Case #368167)================
When computing string functions that result in strings with lengths greater
than the maximum allowed size of 2GB, the server could have wasted resources.
As an example, the following code caused the server to compute a 24GB string
before discarding the extra 22GB.
begin
declare foo long varchar;
set foo = 'ABCDabcd';
set foo = repeat( foo, 49152);
set foo = repeat( foo, 65536);
end;
The server will now stop the computation after the maximum allowed size
of 2GB has been reached.
================(Build #1930 - Engineering Case #368251)================
The server would have failed to return the result set under certain circumstances.
One such situation was when the option Row_counts was set to 'ON' and the
query access plan had an indexed sort node at the top. This problem has now
been fixed.
================(Build #1932 - Engineering Case #360238)================
The system extended stored procedure xp_cmdshell will now accept command
lines up to 8000 bytes.
================(Build #1932 - Engineering Case #369054)================
Under some conditions when the results of a sort did not fit entirely in
memory, the sort could have returned rows that were only partially ordered.
This has been fixed.
================(Build #1932 - Engineering Case #369122)================
The server may have exhibited poor performance if many connections tried to
concurrently truncate a global temporary table. This was due to each connection
attempting to acquire an exclusive lock on the global temporary table definition.
Since each connection already has a pointer to the table definition, acquiring
an exclusive lock is no longer done.
================(Build #1933 - Engineering Case #368236)================
In rare circumstances, if a database which required recovery was autostarted,
the server could hang with the server window still minimized. One situation
where this could have occurred was when the database had a "disconnect" event.
A workaround is to start the database manually first to allow the database
to recover, and then shut down this engine.
This issue has been fixed.
================(Build #1933 - Engineering Case #368995)================
If a procedure contained a query that called the OPENXML() function, and the
xpath expression was passed as a variable argument, the error "Feature 'OPENXML
with non-constant query' not implemented" would have been reported on the
11th call. This has now been fixed.
================(Build #1933 - Engineering Case #369275)================
Executing a query that used two or more derived tables which each called
the OPENXML function, one of which contained an illegal XPATH expression,
could have caused the server to crash. This has been fixed.
================(Build #1933 - Engineering Case #369410)================
If a stored procedure was dropped and then another stored procedure with
the same name was immediately created, users who had permission to access
the first procedure, and had already called it, would still have been able
to access the second procedure, even if they had not explicitly been given
permission, until the next time the database was stopped. This has been fixed.
================(Build #1935 - Engineering Case #368249)================
In complex queries, if the optimizer found an expression (prefilter) that
evaluated to FALSE, the optimization time may have been long. This has been
fixed.
================(Build #1935 - Engineering Case #369676)================
The server allowed system datatypes, such as MONEY and UNIQUEIDENTIFIERSTR,
to be dropped using the "DROP DATATYPE" statement. An attempt to drop these
datatypes will now be rejected with a "permission denied" error.
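For example, an attempt such as the following is now rejected rather than silently removing the built-in type:

```sql
DROP DATATYPE MONEY;   -- now fails with a "permission denied" error
```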
================(Build #1936 - Engineering Case #368459)================
A database used as the Replication Server stable queue database could not
have been upgraded or unloaded. The RepServer system procedures were no longer
executable by rs_systabgroup. This has been fixed.
================(Build #1937 - Engineering Case #370045)================
Insert performance on systems with three or more processors would have been
much poorer than on single processor systems. The drop in performance would
have been more noticeable as the number of processors increased (and was likely
even more noticeable on Unix systems). This problem has now been corrected.
================(Build #1937 - Engineering Case #370082)================
Queries using a procedure call in a common table expression would have failed
with the error: "-921 - Invalid recursive query". This is now allowed.
================(Build #1938 - Engineering Case #352482)================
If the Log Translation utility dbtran was run with the -m ('transaction logs
directory') command line option, and any of the log files in the directory
were created while auditing was enabled on the database, dbtran would have
crashed. This has been fixed.
================(Build #1938 - Engineering Case #368116)================
If a stored procedure which returned a result set was called from within
an atomic compound statement (i.e. BEGIN ATOMIC ... END), an error was correctly
given, however, an assertion would result when the database was stopped,
(Assertion 104301 - Attempt to free a user descriptor with non-zero reference
count). This is fixed; the database will now shut down correctly.
================(Build #1938 - Engineering Case #370312)================
A query in a procedure or batch, with an expression that involved remote
tables and a unary minus or simple cast operator, would have failed with
the error:
ASA Error -823: OMNI cannot handle expressions involving remote tables
inside stored procedures.
This problem has now been fixed so that these operators do now work in expressions
involving remote tables.
================(Build #1938 - Engineering Case #370339)================
Correlated subqueries used in computed columns may have caused grouped queries
to fail with the error " Function or column reference to ... must also appear
in a GROUP BY". This has been fixed.
An example:
CREATE TABLE R (
X integer not null,
Z integer not null,
Y integer NULL COMPUTE ((select count(*) from R as old where old.X = R.X))
)
select R.Z from R group by R.Z
================(Build #1939 - Engineering Case #370301)================
The ISNUMERIC() function could have returned TRUE for values which used the
letter 'd' or 'D' as the exponent separator (e.g. '1d2') on Windows platforms,
or for values such as 'NAN', '0x12', 'INF', or 'INFINITY' on UNIX platforms.
The function no longer returns TRUE for these values.
================(Build #1941 - Engineering Case #370456)================
Executing a VALIDATE TABLE statement, and using the WITH EXPRESS clause,
(or dbvalid -fx), would have failed with the error "Not enough memory to
start", if the currently available cache space was not large enough. If cache
resizing is possible, the server will now try to increase the cache size
to the amount required.
================(Build #1941 - Engineering Case #370585)================
If an HTTP or HTTPS response was more than a few kilobytes in size, and the
server was on Unix, NetWare or Windows 95, 98 or Me, the response could have
been truncated, contain unexpected data and/or include fatal error text.
This was more likely to occur if the server machine was lightly loaded and
was faster than the client machine, or the client and server machine were
separated by a slow network link. This has been fixed.
================(Build #1941 - Engineering Case #370861)================
If a request log file was generated using:
call sa_server_option('Request_level_logging','sql+plan+hostvars');
and a plan string in the request log exceeded approximately 325 bytes, then
calling sa_get_request_times to process the resulting file may have resulted
in the error:
Primary key for table 'satmp_request_time' is not unique
Whether or not the error was given depended on what other requests were
active at the time the statement with the long plan was executed. The request
log file will now be output correctly.
================(Build #1941 - Engineering Case #371180)================
The server could have failed with assertion 104000 when attempting to execute
a query with a large IN list. An IN list of size roughly (1/32)*(page-size)^2
could have generated the assertion failure on 32-bit platforms, and a value
half that size would have on 64-bit platforms. This has now been fixed, and IN
lists are supported up to the available cache memory.
Now, for IN lists bigger than the above limit, the "IN list optimization"
is not used, and a virtual table is not introduced. This may result in a
sudden performance difference when the IN list size crosses this size threshold.
It is not recommended to use very large IN lists.
================(Build #1941 - Engineering Case #374122)================
Any user with DBA authority could have connected to a web service, regardless
of the restrictions placed on that web service by the USER clause. Now the
USER clause restrictions are respected.
For example, if the following SQL was executed:
grant connect to hurz identified by 'sql';
create service test type 'html' user hurz as select * from systable;
grant connect to newdba identified by 'sql';
grant DBA to newdba;
then both hurz and newdba (as well as any other user with DBA authority)
could have connected to the service test.
================(Build #1942 - Engineering Case #370071)================
When the BACKUP DATABASE TO statement failed and returned an error, (for
example if the location for the archive file was not writable), then subsequent
BACKUP DATABASE TO statements that failed would have caused the server to
fail with assertion 104400 (a stack overflow) on Solaris 8 or 9 systems.
This has been fixed.
================(Build #1944 - Engineering Case #370421)================
If the ROUND() function rounded a numeric value, the resulting value may
not have fit into the original NUMERIC data type's precision. For example:
The constant 9.995 is of type NUMERIC(4,3). The result of ROUND(9.995,1)
is 10.000 which does not fit into numeric(4,3). As a result the numeric value
generated by the ROUND() function could have been invalid and a conversion
of this numeric value to a string could have returned '?'.
This problem has been fixed. If the numeric value passed to ROUND() is a
constant, the resulting data type's precision is increased by one (numeric(5,3)
in the above example). If it is not a constant and the resulting value does
not fit, then a SQLE_OVERFLOW_ERROR is generated if the option Conversion_error
is set, otherwise NULL is returned.
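To illustrate with the example above:

```sql
-- 9.995 has type NUMERIC(4,3); with the fix, the result type for a
-- constant argument is widened to NUMERIC(5,3), so 10.000 fits.
SELECT ROUND( 9.995, 1 );
```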
================(Build #1944 - Engineering Case #371032)================
If a query referenced both proxy and local tables, or proxy tables from different
servers, then it would have been executed in 'no passthru' or 'partial passthru'
mode. If such a query also contained column references of the form 'user.table.column',
then the query would have failed with error -845 "Owner '<owner name>' used
in qualified column reference does not match correlation name '<table name>'".
This problem has now been fixed.
================(Build #1944 - Engineering Case #371202)================
The system procedure sa_get_request_times, would have stored a conn_id of
0 in the table satmp_request_time for any connections established before
request-level logging was enabled, for the file being processed. If sa_get_request_times
was called with a conn_id parameter to limit the information collected from
the log, it would not have stored any information for a connection started
before logging was enabled. Now, the connection id will match the connection
handle value for this situation.
================(Build #1945 - Engineering Case #372086)================
If a query referenced a procedure in the FROM clause, and the procedure in
turn had a query that referenced a remote server, but could not be executed
in full passthrough mode, and the query also referenced a procedure variable,
then a syntax error could have resulted. This has been corrected.
================(Build #1946 - Engineering Case #371203)================
A warning message that the server was not licensed for the appropriate number
of CPUs on the machine, was being given at each checkpoint. The warning message
is now only given at startup.
================(Build #1948 - Engineering Case #355123)================
The server could have performed poorly relative to 7.x servers when doing
a long sequence of database inserts, updates or deletes. The server was spending
longer than necessary cleaning up the cache in preparation for a checkpoint.
This time has now been reduced. Also, current servers now estimate the recovery
time better. Thus the Recovery_time database option may need to be set to
a larger value in order to have the server more closely match the value the
7.x server would have used.
================(Build #1949 - Engineering Case #372122)================
Engineering Case 304975 added support for handling UUID/GUID columns in proxy
tables to remote servers. Unfortunately, that change had the side effect
of disallowing creation of existing proxy tables with smalldatetime columns.
The problem with the smalldatetime column has now been fixed.
================(Build #1950 - Engineering Case #372074)================
If an INSERT .... ON EXISTING statement used DEFAULT in the VALUES clause
for any primary key column of the table, the server would have crashed. This
has been corrected.
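A sketch of the statement shape that triggered the crash (table and column names are hypothetical):

```sql
CREATE TABLE t ( pk INT DEFAULT AUTOINCREMENT PRIMARY KEY, val INT );

-- DEFAULT in the primary key position of the VALUES clause previously
-- crashed the server:
INSERT INTO t ( pk, val ) ON EXISTING UPDATE VALUES ( DEFAULT, 42 );
```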
================(Build #1950 - Engineering Case #372231)================
The server could have deadlocked while running simultaneous queries containing
joins of the same tables.
For example, the following might have caused such a deadlock.
select * from a,b where a.a2 = b.b2; //connection 1
select * from a,b where b.b2 = a.a2; //connection 2
This would only have occurred if there was no index or key on the columns
a2 and b2. This would most likely have occurred on a multi-CPU machine. This
has now been fixed.
================(Build #1950 - Engineering Case #372469)================
If EXECUTE IMMEDIATE WITH RESULT SET ON was used to execute a string representing
a multi-statement batch, it would have failed with the error:
Result set not permitted in '<batch statement>'
The batch will now be executed correctly and its result set returned.
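A hypothetical example of the pattern that now works:

```sql
EXECUTE IMMEDIATE WITH RESULT SET ON
    'begin
        select table_name from systable;
     end';
```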
================(Build #1950 - Engineering Case #372481)================
The estimate of the rate at which pages are being dirtied has been made less
pessimistic (a smoothed version of the estimates used in 7.x). Also, on
32 bit Windows systems, the server now measures the random write times rather
than using the cost model estimates, as this caused the estimates to be off
by a factor of 50 in some cases.
This is a further performance improvement to the issue originally addressed
by Engineering Case 355123.
================(Build #1951 - Engineering Case #370899)================
Several race conditions in the server while starting and stopping databases
have now been fixed:
- The server could have autostopped a database when it should not have,
or not autostopped a database when it should have.
- In rare timing dependent cases, the server could have deadlocked, asserted
or possibly crashed, when a database was starting up or shutting down.
Also in rare timing dependent cases, the server could have asserted or possibly
crashed if HTTP or HTTPS requests were made to a database that was starting
up or shutting down.
================(Build #1951 - Engineering Case #372605)================
If an application, connected to a server via jConnect, cancelled a request
or closed a JDBC statement, the cancel or close could have failed and/or
dropped the connection entirely. This problem has been fixed.
================(Build #1953 - Engineering Case #373028)================
Stopping a server while a database was in the process of either starting
or stopping, could have caused incorrect behaviour, such as the database
requiring recovery the next time it is started, or the server asserting,
crashing or hanging. Now, server shutdown waits for databases which are not
active, to finish starting or stopping before shutting down, and ensures
that a database is not stopped twice.
================(Build #1953 - Engineering Case #373039)================
An attempt to create two user-defined types, whose names were the same except
for case, in a case sensitive database was permitted. This now results in
an error, since these names should always be case insensitive.
Also, dropping a user-defined type required the name to have matching case,
in a case sensitive database. This is no longer required.
================(Build #1954 - Engineering Case #371549)================
If a query performed a join between a local table and a remote table with
an ON condition, then there was a very good chance that the query would have
been processed in 'partial passthru' mode and returned an incorrect result.
This problem has now been fixed.
================(Build #1955 - Engineering Case #372196)================
When running on Unix systems, the server could have crashed while shutting
down with active TCP/IP connections. This would likely have been very rare,
and has now been fixed.
================(Build #1956 - Engineering Case #373299)================
Using the Unload utility dbunload, with the command line options -an (create
new database and reload) or -ar (rebuild and replace database), against a
server which was not using shared memory, would have failed attempting to
connect to the new database. The generated connection strings used by
-an and -ar to connect to the new database did not include the LINKS parameter.
Now, they include all the parameters specified for the connection to the source
database.
Note, the server used with dbunload -ar must be on the same machine where
dbunload is run, but dbunload -an can now be used against a remote server.
================(Build #1956 - Engineering Case #373382)================
If a query contained a join between a remote table and a lateral derived
table with an ON clause, the chances were very good that the server would
have crashed. This problem has been fixed.
================(Build #1957 - Engineering Case #373462)================
If a CREATE TABLE statement failed, for example because of duplicate column
names, and no commit or rollback was executed so far, the next attempt to
execute a CREATE TABLE statement, on any connection, would have crashed the
server or caused assertion failure 102801. This has now been fixed.
================(Build #1957 - Engineering Case #373531)================
The fix for Engineering Case 367366 introduced a problem where attempting
to establish a connection to a remote ASA database with a long database name,
or a long engine name, could have crashed the server. This problem has been
fixed.
================(Build #1957 - Engineering Case #373607)================
If the first executable statement of a stored procedure was a SELECT ...
INTO, then using this procedure in the FROM clause of a SELECT statement
would have caused the server to crash. This has been fixed.
For example:
create procedure P1 ()
begin
declare var varchar(128);
select first table_name into var from systable;
end
then
select * from P1()
would have crashed the server.
================(Build #1957 - Engineering Case #373613)================
An obsolete Java class could have caused the error "-110 - 'Item ... already
exists'" when attempting to install a new version of a Java class previously
removed. This has been fixed.
================(Build #1959 - Engineering Case #374123)================
The function http_variable() would have returned a non-obvious ordering for
variables with multiple values.
For example, for the following request:
http://localhost/foo?id=1&id=2&id=3&id=4&id=5
http_variable() would return the values for id in the following order:
1, 5, 4, 3, 2
The function was adding variables (and their values), either after the first
occurrence of the variable, or at the end of the list. Now, variables and
their values are added after the last occurrence of the variable, or at the
end of the list.
================(Build #1961 - Engineering Case #347314)================
If a query contained a LIKE predicate with a literal constant pattern, that
did not contain a wildcard, the query could have incorrectly returned rows
where the expression or column contained additional trailing blanks when
run against a database that does blank padding of strings. For example, given
the value 'abc' and the LIKE pattern 'abc ' (with a trailing blank), the
server would have incorrectly matched the value and caused its row to be
returned. This has been fixed.
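Using the example above, on a blank-padded database:

```sql
-- The pattern has a trailing blank and no wildcard; with the fix this
-- comparison is FALSE (previously it incorrectly matched).
SELECT 'abc' LIKE 'abc ';
```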
================(Build #1961 - Engineering Case #374452)================
If a proxy table to a DB2 table with a CLOB column, was used in a query,
then selecting that CLOB column would have failed with an unsupported datatype
error. This problem has been fixed.
================(Build #1963 - Engineering Case #374308)================
Extremely complex queries with many equality predicates, for which the optimizer
underestimated the number of rows in the result set due to highly correlated
predicates, may have had inefficient query plans. This fix introduces a new
approach for dealing with such queries. Now, for very complex and inexpensive
queries, the optimizer evaluates some static qualities of the enumerated plans
that do not depend on cost or number of rows.
================(Build #1963 - Engineering Case #374752)================
If a server was listening on more than one HTTPS port, it was possible that
HTTP requests for services created with SECURE ON would have been redirected
to an HTTPS port that could not handle the request. This has been fixed.
For example, given a server started with the following command line options:
-xs http,https(port=443;dbn=db1),https(port=444;dbn=db2) db1.db db2.db
If "sec" was a secure service in db2, then HTTP requests for "/db2/sec"
would have been redirected (via the 301 status code) to port 443 rather than
444. In most cases, this would have resulted in a "404 Not Found" status, but
could possibly have executed the wrong service; in this example, that could
have happened if db1 had a service called "db2/sec".
================(Build #1963 - Engineering Case #374822)================
Recovery of transactions requiring the Wait_for_commit option to be ON could
have failed assertion 100904 "Failed to redo a database operation". For this
to have occurred, the following must have been true:
- The WAIT_FOR_COMMIT option must have been on as a temporary option (not
a user-level option)
- A connection must have started a transaction
- A subsequent checkpoint must have occurred while the transaction started
was still active (ie. had not been committed or rolled back).
- Additional operations on the transaction must have been performed that
*required* the WAIT_FOR_COMMIT option and must have been written to disk
(likely due to a commit)
- The server must have gone down dirty before any other checkpoint occurred.
This has been fixed, the Wait_for_commit option is now always on during
recovery.
================(Build #1964 - Engineering Case #374976)================
The debugger would have shown all connections on the server, instead of only
showing those connections to the database that the debugger was connected
to. This has been fixed.
================(Build #1964 - Engineering Case #374988)================
An INSERT statement using the ON EXISTING clause to insert the result set
of a query involving a remote table into a local table would have failed
with a syntax error. The server will now execute these statements correctly.
For example, instead of generating a syntax error, the following will cause
table 'bar' to contain one row:
CREATE SERVER asademo CLASS 'asaodbc' USING 'driver=Adaptive Server Anywhere
9.0;dbn=asademo';
CREATE TABLE foo(c1 int) AT 'asademo...';
create table bar( c1 int primary key );
insert into foo values(1);
insert into foo values(1);
insert into foo values(1);
commit;
insert into bar on existing skip select * from foo;
select * from bar
================(Build #1965 - Engineering Case #375084)================
If a request-level log contained host variable information for a TDS connection,
the system procedure sa_get_request_times would not have recorded the host
variable information in the satmp_request_hostvar table. This has been fixed.
================(Build #1965 - Engineering Case #375097)================
When running on NetWare 5.1 with service pack 8, the ASA server would not
start up if TCPIP or HTTP was used. When the -z (display debugging information)
command line option was used, the message "TCP/IP link, function bind, error
code 10038" was displayed on the
console. This has been fixed.
Note that at the time of this fix, NetWare 5.1 SP 8 was still in beta.
================(Build #1965 - Engineering Case #375102)================
In rare cases, the optimizer may have produced plans that omitted a sort.
For example, the following query requires 2 sorts to properly calculate the
3 window functions.
select
sum(emp_id) over (order by emp_fname, emp_lname),
sum(emp_id) over (partition by emp_lname),
sum(emp_id) over (partition by emp_lname, emp_fname)
from employee
Previously, it would only have used one. This has been fixed.
================(Build #1965 - Engineering Case #375197)================
A problem with GROUPING SETS, that could have caused server crashes, has
been fixed.
================(Build #1965 - Engineering Case #375236)================
Under rare situations, calls to functions that took string parameters, could
have crashed the server. This was only a problem on Unix systems, and has
now been fixed.
================(Build #1966 - Engineering Case #374846)================
Issuing a CREATE VIRTUAL INDEX statement on a proxy table would have caused
the server to crash. A crash could also have occurred if the Index Consultant
was run against a workload containing queries over proxy tables. This has
been fixed. Note that although virtual index creation is allowed on proxy
tables, such indexes are not meaningful and are not considered by the optimizer.
================(Build #1966 - Engineering Case #375327)================
If the -o switch was used on the server to specify a message log output file,
and the file could not be opened, a message was displayed in the server console
window, and the server started anyway. This has been fixed, now if the output
file cannot be opened, the server will report an error and will not start.
================(Build #1969 - Engineering Case #375757)================
If a BACKUP or RESTORE statement was executed from the Open Client Isql utility,
while the backup.syb file was marked as read-only, the server could have
crashed. This has been fixed.
================(Build #1971 - Engineering Case #368540)================
When used in Java Stored Procedures, cursors and prepared statements were
left open until the connection disconnected. If called repeatedly, they could
accumulate until a "resource governor exceeded error" occurred. This has been
fixed.
================(Build #1971 - Engineering Case #375325)================
On non-English platforms, db_property('PlatformVer') and db_property('CompactPlatformVer')
could have returned mangled strings. This problem also affected graphical
plans, which include these properties in their output. The problem could
have caused Interactive SQL to fail to display the results of a query containing
these functions. This has been fixed. The descriptive string returned from
the OS was not being converted to the server's character set; now it is.
This was only likely to cause problems when connected to a database using
jConnect, but it may possibly have affected the ODBC and ESQL drivers as
well, and was most likely to affect servers running on Chinese and Japanese
non-Windows OS versions.
================(Build #1971 - Engineering Case #376444)================
If a view V1 caused a warning to be given when referenced (e.g. the result
returned is non-deterministic), and another view V2 referenced V1, and the
definition of V2 was output into the reload.sql script by DBUNLOAD before
the definition of V1, then V2 may not have appeared in the database after
the reload.sql script has run. This has been fixed. ALTER VIEW ... RECOMPILE
did not handle a warning being set while building a cursor for the view.
The warning is now cleared before making catalog changes and then is restored.
================(Build #1972 - Engineering Case #376606)================
Creating a COMMENT on a local temporary table would have caused the server
to fail with assertion 201501 - "Page for requested record not a table page
or record not present on page".
Example:
declare local temporary table temp1(c1 int);
comment on table temp1 is 'my comment';
Now an error is returned when attempting to add a comment to a local temporary
table.
================(Build #1972 - Engineering Case #376608)================
If an Open Client application opened a cursor which caused the warning: "cursor
options changed", the application would have failed to open the cursor. This
problem has now been fixed. There are situations where Open Client applications
are not expecting warnings, so certain warnings that are known to not be
handled are suppressed, while other warnings are sent as the client actually
expects them. The "cursor options changed" warning has been added to this
list of warnings not to be returned to Open Client applications.
================(Build #1972 - Engineering Case #376617)================
HTTP connections to a database initialized with the collation 874THAIBIN,
would have incorrectly returned a charset value of 'none', rather than the
correct value of 'TIS-620'. This has been fixed.
================(Build #1972 - Engineering Case #376699)================
On Windows 95, 98 or ME, the build number of the operating system was displayed
incorrectly.
For example:
"Running on Win98 build 67766222"
The correct OS build number is now displayed.
================(Build #1972 - Engineering Case #376742)================
If the sybase_sql_ASAUtils_retrieveClassDescription() function was called
with a very long class name, a server crash could have occurred. This has
been fixed.
================(Build #1973 - Engineering Case #376721)================
When running on NetWare 5.1 with Service Pack 8, the ASA server would have
hung on shutdown after displaying the message "Database server stopped at <date> <time>".
Some of the NLM unloading code changed in SP 8. This has been fixed, the
server now supports these changes.
================(Build #1973 - Engineering Case #376977)================
If an application's operating system used a multibyte character set, which
was different from the character set of the database being unload by the
Unload utility dbunload, then dbunload could have generated an invalid reload.sql
script, and dbunload -an could have failed with a syntax error. Note that
dbunload -an turns off character set translation so the character set used
by dbunload in that case is the same as the database character set. For example,
running dbunload -an on a Chinese (cp936) machine to unload and recreate
a UTF8 database could have failed with a syntax error, or could have crashed.
This has been fixed.
================(Build #1974 - Engineering Case #377155)================
Creating an index on a function and then inserting, updating, or deleting
data into the table on which that index was created, could have created an
invalid transaction log entry. If the index was added before any other operations
are performed on the table or the server was shutdown after the index was
created, and before any other operations were performed on the table, the
log would have been valid. This has been fixed.
================(Build #1975 - Engineering Case #377450)================
If one, or both, of the tables in a full outer join was a proxy table, executing
the query would have caused the server to either crash, or give a syntax
error. Remote data access did not support full outer joins. This has been
fixed; full outer join support has now been added.
================(Build #1976 - Engineering Case #373770)================
Attempts to use the builtin XML functions on proxy tables would have failed
with the error "No name for argument". This problem has been fixed.
================(Build #1976 - Engineering Case #376210)================
The server could have crashed while performing Index Consultant analysis
on a complex query. This was only likely to happen in queries with numerous
equality predicates, and either an ORDER BY, GROUP BY, or SELECT DISTINCT
clause. This has been fixed.
================(Build #1976 - Engineering Case #377755)================
If a connection updated rows in a table and subsequently left a cursor open
past a commit or rollback, other connections would not have been able to
lock the entire table in share mode (ie LOCK TABLE ... IN SHARE MODE) until
the updating connection closed the cursors and executed either a commit or
rollback. If a cursor is left open past a commit or rollback, the schema
locks persist until the cursor is closed, but other locks are now released.
================(Build #1978 - Engineering Case #377433)================
If a jConnect or Open Client application made several requests to the server
using many host variables, but didn't open any cursors, and then attempted
to use a jConnect or Open Client cursor with host variables, then the server
would likely have crashed. This problem has been fixed.
================(Build #1978 - Engineering Case #378026)================
Executing a query involving a column for which the server estimate for NULL
selectivity had become invalid (ie greater than 100%), could have caused
the server to crash. The server will now deal with this situation without
crashing. The problem can be rectified by recreating the affected column
statistics using the CREATE STATISTICS statement.
================(Build #1978 - Engineering Case #378242)================
If a batch containing a call to an external procedure was executed, and the
external procedure was subsequently canceled, the batch would have continued
execution, instead of being canceled as well. This problem has been fixed.
================(Build #1978 - Engineering Case #378243)================
Repeatedly calling a stored procedure that performed an INSERT, UPDATE or
DELETE into a proxy table, would likely have caused a server crash. This
problem has been fixed.
================(Build #1979 - Engineering Case #378034)================
Executing a procedure that called a function in an external library could
have caused the server to crash if the called routine was using the old-style
API. The problem occurred when the function was declared in SQL with fewer
arguments than it was actually written to handle. Consider the following
example of a user-defined function accepting one output argument.
CREATE FUNCTION "DBA"."MachineName" (out @iMachine char(255))
returns integer external name
'Win32MachineName@C:\\path\\mytools.dll'
begin
declare @iMachine char(255);
call dba.MachineName(@iMachine);
select @iMachine
end
If the actual function was written to accept more than one argument (e.g.,
int Win32MachineName( char *p1, char *p2 ) ) and the function employed the
"callee pops the arguments" protocol, then the stack pointer will be improperly
aligned upon return to the caller (the server in this case). It will point
to a memory location 4 bytes higher than where it should.
After the user function was called, the server checked, and corrected, the
stack misalignment. There was, however, a small window of a few instructions
where the stack pointer was misaligned and the server made some other function
calls. This resulted in the top of the stack being corrupted. In previous
versions of the server, this did not result in any permanent harm. In more
recent versions of the server, an important pointer was corrupted, resulting
in a crash.
Now, the server pushes and pops extra guard words on the stack so that these
unused stack positions will be corrupted instead. The server also now checks
for a serious stack underflow and will issue an assertion (103701 "Function
parameter list mismatch") and stop rather than crash.
================(Build #1981 - Engineering Case #378230)================
Queries containing predicates with subqueries may have crashed the server
during execution. This may have occurred if one of the following conditions
was met:
1. There were at least two predicates with subqueries referencing the same
table T, and T was also referenced in the main query block. In the example
below, t3 is referenced in both subqueries and in the main query block.
2. There was an index on the outer reference columns used in the subquery.
In the example below, t3 has an index on the column t2_id which is the outer
reference column in both subqueries.
Example:
select * from t1 join t3 as t2 on t2.t2_id = t1.t1_id +1
where t1.t1_id between (select MIN(t3.t1_id) from t3 where t3.t2_id=t2.t2_id)
and (select MAX(t4.t1_id) from t3 as t4 where t4.t2_id=t2.t2_id)
================(Build #1981 - Engineering Case #378936)================
After applying the 9.0.1 1965 EBF for Solaris, the server would no longer
have run on machines with pre-SparcV9 CPUs. A SparcV9 instruction (CAS) was
added to the server without properly detecting pre-SparcV9 chips. This has
been fixed, the server now reverts to an emulated version of CAS on SparcV8
and older CPUs.
================(Build #1982 - Engineering Case #378387)================
If membership in group SYS was revoked from PUBLIC, rebuilding a database
using "DBUNLOAD -an" would have failed with the error: "Table 'SYSOPTIONS'
not found"
This has been fixed.
A workaround is to unload without the -an option.
================(Build #1982 - Engineering Case #378835)================
The INSERT ... ON EXISTING UPDATE statement updates an existing row with
the new column values. If a column list had been specified, then in addition
to modifying the specified columns, the statement also modified columns with
their default values. Now, the server will no longer update default columns
unless explicitly asked to.
The following describes the new server behaviour:
1. When the row does not exist, the new row is inserted as per the
usual rules of the INSERT statement.
2. If the row exists, and ON EXISTING SKIP is specified, no changes
are made to the row.
3. If the row exists, and ON EXISTING UPDATE is specified, the row is
updated as per the following rules:
(a) All columns that have been explicitly specified in the
INSERT statement are updated with the specified values.
(b) Columns with defaults that are meant to be changed on an
UPDATE are modified accordingly. These special defaults include
"DEFAULT TIMESTAMP", "DEFAULT UTC TIMESTAMP", and
"DEFAULT LAST USER".
(c) Columns with other defaults are not modified unless some
of these are explicitly mentioned with a non-default
value in the INSERT statement, in which case these columns
are modified as per rule 3(a) above.
(d) Computed columns are re-evaluated and modified using the new row.
(e) Any other columns are left unchanged.
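The rules above can be sketched as a small merge function in Python. This is a hypothetical model of the fixed behaviour; the column names and the update_defaults mapping are illustrative:

```python
def merge_on_existing_update(existing, specified, update_defaults):
    """existing: the current row; specified: columns named explicitly in the
    INSERT; update_defaults: columns such as DEFAULT TIMESTAMP whose values
    are refreshed on every UPDATE."""
    row = dict(existing)
    row.update(specified)                  # rule 3(a): explicit values win
    for col, fresh in update_defaults.items():
        if col not in specified:           # rule 3(b): refresh special defaults
            row[col] = fresh
    return row                             # rule 3(e): everything else unchanged

row = merge_on_existing_update(
    existing={'id': 1, 'name': 'old', 'modified': '09:00'},
    specified={'name': 'new'},
    update_defaults={'modified': '09:05'})
print(row)  # {'id': 1, 'name': 'new', 'modified': '09:05'}
```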
================(Build #1982 - Engineering Case #379071)================
If the server encountered unexpected file errors while starting, it could
have displayed the error: "Unknown error (xxxxx)" (where xxxxx was meaningless).
For example, if asatest.db existed, running:
dbeng9 asatest.db" (the trailing quote is intentional)
would generate the error: "Unknown error (xxxxx)".
This problem has now been fixed so that unexpected file errors now display
the error: "Could not open/read database file: <file name>"
================(Build #1983 - Engineering Case #379190)================
The Dateadd() function would have produced incorrect results when the value
to be added was close to the maximum or minimum 32-bit signed integer values
and the time unit was seconds. For example:
select dateadd(ss,2147483647,'2005-02-03 11:45:37.027')
would have resulted in:
1937-01-16 08:32:**.***
This has been fixed, now, the result is:
2073-02-21 14:59:44.027
The Datediff() function also produced incorrect results when the difference
was close to the maximum or minimum 32-bit signed integer values and the
time unit was second. For example:
select datediff(ss,'2073-02-21 14:59:45.027','2005-02-03 11:45:37.027')
would have resulted in a range error. This has also been fixed, the result
is now:
-2147483648
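The corrected values can be confirmed with ordinary datetime arithmetic; for example, in Python:

```python
from datetime import datetime, timedelta

# DATEADD(ss, 2147483647, '2005-02-03 11:45:37.027')
start = datetime(2005, 2, 3, 11, 45, 37, 27000)
result = start + timedelta(seconds=2147483647)
print(result)  # 2073-02-21 14:59:44.027000

# DATEDIFF(ss, '2073-02-21 14:59:45.027', '2005-02-03 11:45:37.027')
diff = datetime(2005, 2, 3, 11, 45, 37, 27000) \
     - datetime(2073, 2, 21, 14, 59, 45, 27000)
print(int(diff.total_seconds()))  # -2147483648
```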
================(Build #1983 - Engineering Case #379414)================
The Dateadd() function would have produced incorrect results when the value
to be added was close to the maximum or minimum 32-bit signed integer values
and the time unit was milliseconds. For example:
select dateadd(ms,2147483647,'2005-02-03 21:45:37.027')
would have resulted in:
2005-01-10 01:15:**.***
This has been fixed, now, the result is:
2005-02-28 18:17:00.674
The Datediff() function also produced incorrect results when the difference
was close to the maximum or minimum 32-bit signed integer values and the
time unit was milliseconds. For example:
select datediff(ms,'2005-02-28 18:17:00.675','2005-02-03 21:45:37.027')
would have resulted in a range error. This has also been fixed, the result
is now:
-2147483648
================(Build #1984 - Engineering Case #378516)================
Attempting to run an ODBC application with a third-party Driver Manager would
have resulted in a hang, or a crash, if the ASA ODBC driver stub (libdbodbc9.so)
was present, but the actual drivers (libdbodbc9_n.so and libdbodbc9_r.so)
were missing. This situation could occur if, when creating a custom ASA installation,
the driver libraries were deleted, or were not copied. This has been fixed.
================(Build #1984 - Engineering Case #379688)================
If an application was using Java in the database, or was using Remote Data
Access with a JDBC class, then there was a possibility that the server may
have lost a SQL error. This was most likely to occur if the SQL error was
set, but the SQL error did not get reported to the client prior to the VM
Garbage Collector running. Due to the asynchronous nature of the VM Garbage
Collector, this problem was very difficult to reproduce. The problem has been fixed.
================(Build #1985 - Engineering Case #379925)================
When run on Unix systems, the server could have hung or crashed, while processing
HTTP requests. This was more likely on slower machines, or when processing
large requests or responses, and has been fixed.
================(Build #1985 - Engineering Case #379940)================
If an HTTP connection was cancelled by the client while the server was still
processing the request, the server may not have closed the socket or released
it back to the OS. On Unix systems, this could have caused the HTTP listener
thread to eventually run out of sockets and fail after a period of time.
Clients then attempting to connect to the HTTP server would hang. On Windows
systems, this problem was more likely to exhibit itself as a memory leak.
This has been fixed.
================(Build #1986 - Engineering Case #379371)================
If a CREATE TABLE statement was executed, and the table had a multi-column
primary key, the statistics of the first primary key column would not have
been used until the database had been restarted. This has been fixed.
================(Build #1986 - Engineering Case #379740)================
A LIKE condition would have incorrectly evaluated to False, if the pattern
string started with an underscore "_" or percent sign "%", and ended with
at least two non-wildcard characters, (e.g. '%00'); or the string expression
ended with at least two occurrences of a non-wildcard character sequence that
overlapped (e.g. '1000' LIKE '%00'). This has been fixed.
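Correct LIKE evaluation can be modelled by translating the pattern to a regular expression. This Python sketch ignores escape characters and collation-specific comparison rules:

```python
import re

def sql_like(value, pattern):
    # '%' matches any run of characters, '_' matches exactly one
    # character, and everything else is literal.
    regex = ''.join('.*' if ch == '%' else '.' if ch == '_' else re.escape(ch)
                    for ch in pattern)
    return re.fullmatch(regex, value, re.DOTALL) is not None

print(sql_like('1000', '%00'))   # True: the overlapping-suffix case above
print(sql_like('abc00', '_00'))  # False: '_' matches exactly one character
```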
================(Build #1986 - Engineering Case #379916)================
Attempting to insert into a local table, selected rows from a proxy table
when the remote server was down or not available, would likely have caused
the server to crash. Note that the crash would only have occurred if this
INSERT-SELECT performed the first connection attempt to the remote server.
This problem has been fixed, and a proper error message is now displayed.
================(Build #1988 - Engineering Case #380056)================
It was possible, although extremely rare, for the server or client to crash,
or drop connections, when using the TCPIP link. This could also happen to
the server running on Windows for SPX connections. This has been fixed.
================(Build #1988 - Engineering Case #380104)================
On 64 bit Windows systems, dynamic cache sizing did not work correctly. The
server was unable to query the OS to determine its own working set size,
so the cache growth algorithm reverted back to the old (version 6.0), very
conservative growth algorithm. This has been corrected.
================(Build #1989 - Engineering Case #380084)================
The Dateadd() function could have produced incorrect results when the time
unit was milliseconds, minutes, hours, days, or weeks.
For example:
select dateadd(hour,365*24*7923,'0001-01-01 21:45:37.027'),dateadd(hour,69399300,'0001-01-01
21:45:37.027')
would have resulted in:
'****-09-18 00:00:37.027' '9998-07-01 00:00:37.027'
This has been fixed. Now, the results are:
'7918-09-29 00:00:37.027' '7918-01-15 00:00:37.027'
Similarly, the Datediff() function produced incorrect results when the
time unit was milliseconds, minutes, hours, days, or weeks.
For example,
select datediff(minute,'2005-02-03 21:45:37.027','6088-02-26 23:52:37.027')
resulted in a range error. This has been fixed. Now, the result is
2147483647
================(Build #1991 - Engineering Case #379788)================
Windows Mobile 2003 Second edition (aka Pocket PC 2003 SE, running Windows
CE 4.21) is now supported. This Windows version supports both Portrait and
Landscape mode screens, square screens, and also VGA screens (640X480 instead
of 320X240). They can switch from portrait to landscape mode at the push
of a button.
Software installed on this platform would previously have failed to install
with the message: "The program you have installed may not display properly
because it was designed for a previous version of Windows Mobile Software".
The install has been updated to suppress this warning. Any software which
supports a version of Windows CE earlier than 4.21 will still issue this
error. The server was also updated to support screen resizing.
================(Build #1991 - Engineering Case #380378)================
Malformed GRANT statements could have resulted in incorrect behaviour. This
has now been fixed.
================(Build #1993 - Engineering Case #380351)================
The server will no longer display command-line options in the console window
when any part of the command line is derived from an obfuscated file. It
will now display a command-line syntax error message with asterisks, instead
of the text that caused the syntax error when obfuscation is used, (ie Error
in command near "***" )
================(Build #1993 - Engineering Case #380491)================
When adding days to a date using the Dateadd() function, it may have overflowed
without an error, returning an incorrect date value.
For example:
select dateadd(day, 2250318, '2005-02-16') would have returned '0000-01-00'.
This has been fixed, an error message will now be returned.
================(Build #1993 - Engineering Case #380970)================
A CREATE PROCEDURE statement that did not qualify the procedure name with
a userid, would have failed with the error "Item 'procedurename' already
exists", even if the user did not own a procedure with the same name, but
there existed a procedure with the same name in the user's namespace (ie
owned by a group of which the user was a member). This has been corrected.
================(Build #1994 - Engineering Case #379951)================
On some new versions of the Windows CE operating system, it was possible
to get a 'Fatal Error: No such file or directory' message from the server,
when bringing the device out of STANDBY mode. This has been fixed.
================(Build #1994 - Engineering Case #381112)================
The Dateadd() and Datediff() functions would sometimes have produced incorrect
results when the time unit was Hours, Days, Weeks, Months or Years.
For example,
select dateadd(month, 119987, '0001/1/1'), dateadd(week,521722,'0001-01-01'),
datediff(month,'0001-01-01','9999-12-31')
produced the result:
****-04-01 00:00:00.000 1833-11-10 19:44:00.000 -11085
This has been fixed. Now, the result will be:
9999-12-01 00:00:00.000 9999-12-27 00:00:00.000 119987
================(Build #1995 - Engineering Case #381217)================
If a BACKUP DATABASE statement specified a long directory name for the target
directory and also included TRANSACTION LOG RENAME MATCH, the server could
have subsequently crashed. This has been fixed. A workaround is to use a
shorter directory name in the BACKUP statement.
================(Build #1996 - Engineering Case #381465)================
The SKIP n clause of the LOAD TABLE statement did not skip the initial n
lines of the input file. The SKIP clause was effectively being ignored. This
has been fixed so that the SKIP n clause does skip the first n lines in the
input file.
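The fixed behaviour amounts to discarding the first n input lines before loading any rows; a Python sketch of the semantics:

```python
from io import StringIO

def load_rows(stream, skip=0):
    """Yield input lines after discarding the first `skip` lines,
    mirroring LOAD TABLE ... SKIP n."""
    for i, line in enumerate(stream):
        if i < skip:
            continue
        yield line.rstrip('\n')

rows = list(load_rows(StringIO('header\n1,a\n2,b\n'), skip=1))
print(rows)  # ['1,a', '2,b']
```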
================(Build #1996 - Engineering Case #381727)================
The server could have unexpectedly closed HTTP connections. This condition
was more likely to occur as other active HTTP connections were terminating,
or within a short period of time (approx. 5 to 10 seconds) of their terminating.
This has been fixed.
================(Build #1997 - Engineering Case #381736)================
Calling the system procedure sa_proc_debug_get_connection_name() could have
caused a server crash if given a connection id for a connection that was
attempting to access a database that was currently starting or shutting down.
This has been fixed.
================(Build #1998 - Engineering Case #381724)================
If more than the maximum number of licensed HTTP connections are attempting
to use the server, it will report '503 Service Temporarily Unavailable'.
It was possible that the server could have reported this error, even when
the client had not exceeded the maximum number of connections. This was more
likely to occur when the clients repeatedly made connections to the server,
and the server was heavily loaded. This has been fixed.
================(Build #1998 - Engineering Case #382237)================
The sample program (samples\asa\externalprocedures\tests.sql), was using
the get_piece() function for all chunks rather than just for the 2nd and
subsequent chunks. If the code was modified such that the string in the 7th
argument was prefixed with '=abcd'
set rslt = xp_all_types( null, 127, -12345, null,
'abcdefghijkl', 12.34e15, '=abcd'+repeat('=',300) );
then xp_all_types would have reported that all characters in the 7th argument
match. This is not true, since they are different. This has been fixed; the
code in tests.sql has been corrected.
================(Build #1999 - Engineering Case #382026)================
An ALTER PROCEDURE executed via EXECUTE IMMEDIATE within another procedure,
would have failed to define the result set for the outer procedure correctly
if the outer procedure returned a result set without specifying an explicit
RESULT clause and it declared a variable by the same name as one declared
in the inner procedure. The result of executing an ALTER PROCEDURE within
another procedure would have been different than if the statement had been
executed by itself. This has been fixed. A workaround is to use unique variable
names in the procedure which executes the ALTER.
================(Build #1999 - Engineering Case #382345)================
Attempting to autostart a database with an invalid DatabaseSwitches connection
parameter could have caused the server to hang. If the server was also being
autostarted, the connection attempt could have hung. If the server was already
running, the connection attempt would not hang, but the server may have hung
when shutting down. These problems have now been fixed.
================(Build #1999 - Engineering Case #382499)================
If the server attempted to use an SSL certificate that had expired, the non-intuitive
error message "Error parsing certificate file, error code -6" would have
been displayed. The message has been improved, it is now "Certificate '<filename>'
has expired."
================(Build #2001 - Engineering Case #377911)================
Using AWE on Windows 2003 would very likely have caused some or all of the
following:
1) a blue screen error 76 with text "Process has locked pages",
2) event log messages indicating that a "driver is leaking locked pages",
3) ASA fatal errors indicating the reads or writes were failing with the
error code 1453 (ERROR_WORKING_SET_QUOTA), and/or
4) other generic fatal read/write errors
Microsoft has fixed this problem in Service Pack 1 of Windows 2003. It is
our understanding that no fix will be made by Microsoft prior to the release
of Service Pack 1.
In order to prevent these serious consequences the database server can no
longer be started on Windows 2003 pre-SP1 while using AWE. Any attempt to
do so will result in a startup error "Windows 2003 does not properly support
AWE caching before Service Pack 1".
At the time this description was written there was no existing Microsoft
Knowledge Base (KB) article describing this issue.
================(Build #2001 - Engineering Case #383178)================
The server disables floating point exceptions by default. If an external
function DLL written in Delphi was used, floating point exceptions could
have been enabled. This could have led to a subsequent server crash due
to a floating point exception. This has been fixed.
================(Build #2001 - Engineering Case #383414)================
If an older database was upgraded, the system table SYSSQLSERVERTYPE was
not being repopulated with new values. This table is now rebuilt during an
upgrade.
================(Build #2002 - Engineering Case #382022)================
A memory leak in the Java Heap would have caused poor performance for Java
stored procedures, that used internal JDBC to access the database, if it
was called repeatedly. When a procedure like this was called repeatedly without
disconnection, the Java Heap would slowly grow until it reached its maximum,
at which time the Java Garbage Collector would run every time a memory allocation
request was made, causing poor performance. The memory leak has been fixed.
================(Build #2003 - Engineering Case #383786)================
A remote database connection back to the same ASA server, using the shared
memory link, would have hung on Solaris. A workaround is to use TCP/IP. This
has been fixed.
================(Build #2003 - Engineering Case #390008)================
Attempting to unload the ASA server via the NetWare console (i.e. "unload
dbsrv9") would have caused the NetWare machine to hang. This has been fixed.
================(Build #2004 - Engineering Case #383345)================
An incorrect result may have been returned for queries with EXISTS subqueries
with correlated HAVING clauses. For this problem to have occurred, the following
conditions must all have been met:
1. There must have been at least one correlated equality predicate in the
WHERE clause of an EXISTS subquery.
2. There must have been at least one correlated predicate in the HAVING
clause of the subquery which referenced an aggregate function.
3. The EXISTS subquery was used in the WHERE clause of the main query block.
This has been fixed.
Example:
select A.C from A
where exists (
    select 1
    from A H
    where H.b = A.b
    group by H.b
    having MAX(H.c) = A.c )
================(Build #2005 - Engineering Case #384742)================
The global variable @@procid would have always returned zero on big-endian
platforms (such as Sun Solaris). This has been fixed.
================(Build #2005 - Engineering Case #385645)================
When the server was run on big-endian platforms (such as Sun Solaris), the
global variable @@procid would always have returned zero. This has been fixed.
================(Build #2006 - Engineering Case #381771)================
When performing a backup to a tape device on Windows systems, the server
would have asked for a tape switch after 1.5 GB of data had been backed up,
even if the tape capacity was larger. This problem has been fixed. The server
will now use the entire capacity remaining on the tape.
As a workaround, the desired capacity can be specified using the "capacity="
option in the device string.
For example:
BACKUP DATABASE TO '\\.\tape0;capacity=20064829' ATTENDED ON etc.
The value specified is in K and is calculated by dividing 20,546,384,896
(which is the capacity in bytes reported by Windows for a 20 GB tape) by
1024.
================(Build #2006 - Engineering Case #384795)================
Attempting to create a procedure or an event containing a LOAD TABLE statement,
which used a variable name to represent the filename, would have resulted
in a syntax error. This has been fixed.
================(Build #2007 - Engineering Case #385158)================
When the schema of a database object was modified by a DDL statement, any
existing views that referred to the modified object could potentially have
become invalid. However, the server did not detect any problems until the
view was subsequently referenced. In order to avoid such problems from happening,
it was necessary to recompile the affected views via the "ALTER VIEW ...
RECOMPILE" statement. If such recompilation was not done after dropping a
column that is referenced by a view for example, then the server could have
crashed in certain situations when the affected view was referenced. This
has been fixed; the server will now generate an error without crashing.
================(Build #2008 - Engineering Case #385315)================
The order of columns in an index as returned by the view SYS.SYSINDEXES,
in the column colnames, may have been incorrect. This has been fixed by adding
an ORDER BY clause in the list() expression used to generate the column list.
================(Build #2008 - Engineering Case #385494)================
The IsNumeric() function would have returned an error when the parameter
was too long. It now returns FALSE, since the parameter can't be numeric.
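A minimal illustration of the new behaviour (the argument value is hypothetical):

```sql
-- Previously reported an error; now returns 0 (FALSE), since a
-- 300-digit string is too long to be a valid number
SELECT ISNUMERIC( REPEAT( '9', 300 ) );
```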
================(Build #2008 - Engineering Case #385530)================
If an ALTER VIEW statement was executed by the creator of the view, but that
user no longer had RESOURCE authority, a "permission denied" error would
have been reported. The view definition in SYSTABLE.view_def would still
have been updated, but the preserved source for the view in SYSTABLE.source
would not have been updated. This has been fixed, now no error will be reported
in this situation, and the preserved source will be updated.
================(Build #2010 - Engineering Case #384959)================
Changes made for Engineering Case #363767 could have caused the database
file to grow unnecessarily. A page that was in use as of the last checkpoint
is allowed to be reused before the next checkpoint, provided its preimage
has been saved in the checkpoint log. Prior to the changes for case 363767,
the preimage for a freed page was forced to disk and the page was allowed
to be reused immediately. After the changes for case 363767, the freed page
was not allowed to be reused until after the next checkpoint, because the
server no longer forced the preimage to disk for performance reasons. If
an application freed and reused pages frequently (for example, repeatedly
deleting all rows from a table then inserting rows back into the table),
the server would not have allowed many of the free pages to be used until
after the next checkpoint. The problem has been fixed by keeping track of
the set of free pages that would normally be allowed to be reused if only
the preimages were committed to disk.
Note that this growth was not unbounded and was not a 'leak', as the pages
are freed as of the next checkpoint. This problem only affected databases
created with 8.0.0 or later.
================(Build #2010 - Engineering Case #386047)================
When using an AWE cache, the server could have failed with a "memory exhausted"
error (not to be confused with the "dynamic memory exhausted" fatal error),
other fatal errors, or other operational problems, such as failures loading
DLLs for Remote Data Access or external stored procedures. The
problem was that the server would allocate space for use by the cache up
to the available address space less 64MB; effectively leaving only 64MB of
address space for other purposes, which was frequently insufficient. Note
that the amount of address space allocated never exceeded the amount of physical
memory allocated for the AWE cache. Now, the database server allocates up
to the available address space less 512MB.
An undocumented command line option "-cm <size>" has always existed to control
the amount of address space allocated for an AWE cache. As a work-around,
this option can be added to the server's command line to leave more address
space for purposes other than the cache.
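As an illustration, a hypothetical command line (assuming '-cm' accepts the
usual cache-size suffixes) that dedicates 6GB of physical memory to an AWE
cache while reserving only 1GB of address space for it:

```shell
dbsrv9 -cw -c 6G -cm 1G mydb.db
```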
Each process on 32-bit Windows is given 2GB of address space in total (that's
an OS and 32-bit architecture limitation) except on Windows 32-bit Advanced
Server, Enterprise Server and Datacenter Server where they are given 3GB
of address space in total, provided "/3GB /PAE" is in the boot.ini and there
is no more than 16GB of RAM installed. Again, these are OS and 32-bit architecture
limitations. 32-bit programs running on Windows x64 Edition are given the
full 4GB of address space, although running native 64-bit programs on Windows
x64 Edition is preferable (they are not limited to 4GB of address space).
A description of "address space" is probably helpful at this point. Address
space essentially refers to the amount of memory that can be referenced by a
program at any given time. You can think of this as the number of unique
1-byte memory addresses that can be referenced by a pointer value. On a 32-bit
architecture, pointers are limited to 32-bits, or 4GB, of addressability.
On Windows, the OS uses 1GB or 2GB of each process's address space for its
own purposes leaving 3GB or 2GB for the process to use. Usually, every piece
of memory allocated has address space associated with it. That includes memory
containing code (the database server code & all DLLs loaded) or data. On
Windows, "virtual memory" allows the actual physical contents of a page of
memory to be written to disk to allow the physical RAM to be used temporarily
for another purpose but the address space remains allocated. When the memory
is paged back in, the contents could be in a different physical piece of
memory but its address (pointer value) will be the same.
With an AWE cache, physical memory and address space are allocated separately.
The database server may use, for example, 1GB of address space and 7GB of
physical memory for the cache. Physical memory allocated for an AWE cache
is not virtualized -- it is never swapped out to disk. Because the database
server has only 1GB of address space for the cache, it can only access 1GB
worth of the cache at any given time. If a cached page exists in physical
memory but has not been assigned some address space, the server must pick
a piece of the address space (in 4K chunks) that was allocated for the AWE
cache and ask the OS to change the process's page table so that the given
address space now refers to a different piece of physical RAM. The old piece
of physical RAM is no longer visible to the database server.
Changing the AWE memory mappings is very fast and definitely much faster
than doing an IO; however, it is not absolutely free. For optimal performance,
you want your address space to be as large as possible to minimize the number
of AWE mapping changes that are performed but small enough that you don't
run out of address space.
Since some operations such as loading DLLs can occur after the database
server cache has been created and the database server cannot predict the
address space that will be needed by those operations, the minimum amount
of address space reserved for purposes other than the AWE cache has been
increased to 512MB instead of 64MB (which should be sufficient in most cases)
and can be controlled manually in exceptional cases with the "-cm <size>"
switch.
================(Build #2011 - Engineering Case #371941)================
The server command line options -c, -cl, and -ch allow cache sizes and limits
to be specified in terms of "percentage of total physical RAM installed in
the system". On systems where there was more physical RAM installed than
there was address space available to the server process, using percentage
notation could have caused the server to attempt to allocate a cache larger
than it could possibly allocate. For example, -c75p on a system with 8GB
of RAM installed would attempt to create a 6GB cache. Percentage notation
for these options is now defined as a percentage of available address space
or total physical RAM, whichever is less.
On all 32-bit systems other than Windows, available address space is defined
as 2GB less 256MB. On Windows, available address space is computed accurately
on startup. Note
that each process on 32-bit Windows is given a total of 2GB of address space
(that's an OS and 32-bit architecture limitation) except on Windows 32-bit
Advanced Server, Enterprise Server and Datacenter Server, where they are
given 3GB of address space in total provided "/3GB /PAE" is in the boot.ini
and there is no more than 16GB of RAM installed. Again, these are OS and
32-bit architecture limitations. 32-bit programs running on Windows x64 Edition
are given the full 4GB of address space. Available address space is then
defined as total address space less address space in use by the database
server at startup.
For AWE caches ('-cw' is on the command line), the definition of percentage
notation has not changed and remains as a percentage of total physical memory
on the system.
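For example, consider a hypothetical 32-bit Windows machine with 8GB of RAM
and about 1.8GB of address space available to the server process:

```shell
dbsrv9 -c 75p mydb.db
```

Previously this requested a 6GB cache (75% of physical RAM); it now requests
roughly 1.35GB (75% of available address space, the smaller of the two quantities).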
================(Build #2011 - Engineering Case #383970)================
The database server would have erroneously reported a licensing violation
on hyperthreaded or multicore CPUs. A message such as the following would
have been displayed:
"Warning: The server is limited to use xxx processor(s), but it is executing
on yyy processor(s)."
The server was using the number of CPUs reported by the operating system,
and was not accounting for the fact that they may be hyperthreads (or, soon,
just one of multiple cores in a single physical processor). Also, the server
would only create enough threads of execution to match the CPU license limit;
however, on a hyperthreaded or multicore processor, the server should create
enough threads of execution to run on all hyperthreads and cores on all of
the physical processors for which the server is licensed.
The problem has been corrected as follows:
1. The server adjusts its affinity mask to restrict the server to
run on only the licensed number of physical processors
2. The server creates enough OS threads to be able to execute on all
cores & hyperthreads of all processors for which the server is licensed.
To address #2 above, the default number of OS threads (changeable with the
'-gx' switch) is now 1 plus the number of hyperthreads and cores on all licensed
processors. The previous default number of OS threads was the minimum of
the licensed number of CPUs and 1 plus the number of CPUs as reported by the
OS.
The server now also correctly adjusts its affinity mask to restrict the
server to the licensed number of physical processors. If the server is licensed
for 'n' processors, the server will by default run on all hyperthreads and
cores of 'n' physical processors.
A new server command line option has been added ('-gtc') to control the
maximum "processor concurrency" that the server will allow. For example, by
default on a single-processor hyperthreaded system, or on any hyperthreaded
system with a server licensed for just 1 CPU, the server will by default
allow two threads to run concurrently on one physical processor. Adding a
'-gtc 1' switch, for example, allows the user to force the engine to run
on just one core of one processor.
The following will help to clarify the interaction of the switches that
control licensing and concurrency. The following symbols will be used so
that the values can be easily related to the server options that control
them:
gt : the number of physical processors to use
gtc : the maximum processor-level concurrency
gn : the number of concurrent server requests
gx : the number of OS threads to create
The default value for gt is the minimum of the number of physical processors
licensed and the number of physical processors in the machine. The user cannot
set gt to exceed the default value.
The default value for gtc is the minimum of the total number of cores and
hyperthreads on the licensed number of physical processors and the total
number of cores and hyperthreads in the physical machine. The user cannot
set gtc to exceed the default value.
The default value for gn is 20 for the network server. For the personal
server on platforms which support Java in the database, gn defaults to 2
times the connection limit (which is also 20 because the connection limit
is 10) and the user cannot specify a gn value less than 2 times the connection
limit or greater than 3 times the connection limit. For the personal server
on platforms that do not support Java in the database, gn defaults to the
connection limit, and the user cannot specify a gn value larger than the
connection limit.
Once the server has chosen the gt, gtc, and gn values based on command line
values, license restrictions, physical processors in the machine, and the
connection limit, the server will choose a gx value. The default value for
gx is the minimum of gtc plus 1 and gn. The user cannot specify a gx value
where gx > gn.
An example will also help to demonstrate how the server selects CPUs based
on gt and gtc. For the purposes of the examples below, assume we have a 4-processor
system with 2 cores on each processor. We label physical processors with
letters and the cores with numbers. This 4-processor system therefore has
processing units A0, A1, B0, B1, C0, C1, D0 and D1.
Case 1: A 1-CPU license or "-gt 1" specified on the command line
The network server will use gt=1, gtc=2, gn=20, gx=21
Threads will be allowed to execute on A0 and A1
Case 2: An unlimited server with "-gtc 5" on the command line
The network server will use gt=4, gtc=5, gn=20, gx=21
Threads will be allowed to execute on A0, A1, B0, C0, and D0
Case 3: A server with a 3-CPU license and "-gtc 5" on the command line
The network server will use gt=3, gtc=5, gn=20, gx=21
Threads will be allowed to execute on A0, A1, B0, B1, and C0
================(Build #2012 - Engineering Case #386569)================
The index corruption caused as a result of Engineering Case 383145 could
not have been detected by database validation. Validation of an index with
this type of corruption will now generate an assertion failure, (such as
100305, 102300, or 201601). Dropping and recreating the index will solve
the problem.
================(Build #2013 - Engineering Case #385641)================
If a trigger or procedure executed a conditional statement (e.g. an IF statement)
that used a subselect on a procedure, and the procedure call in the subselect
took at least one parameter, a server crash would have resulted. This has
been fixed.
================(Build #2014 - Engineering Case #386918)================
When an error occurred in a subselect that was part of a procedural statement
(for example, SET, MESSAGE, IF/WHILE conditions, etc.), the server would
have failed to release part of the cache that was used by that subselect.
Subselects that are part of queries that return results sets, explicitly
opened cursors, insert...select, or select...into statements, are not affected.
This would not cause any immediate problems, however if a large number of
calls were made to such procedures, an increasing portion of the database
server cache would have become unavailable to the server for normal use.
This would then have caused the cache to grow larger than necessary, and
eventually, given enough such calls, have failed with a 'Dynamic Memory Exhausted'
error. This may also have shown up as steadily decreasing server performance.
This problem was more likely to appear if stored procedures were written
with exception handlers or ON EXCEPTION RESUME. This has now been fixed.
A workaround is to restart the server whenever performance drops below an
acceptable level, or at intervals shorter than the time after which the memory
exhaustion error is reported.
================(Build #2014 - Engineering Case #387061)================
The locked_heap_pages and main_heap_pages statistics were being reported
incorrectly if the performance monitor was left running during server restarts.
Further, these counters were not being reported as accurately as they could
have been. These problems have been corrected.
================(Build #2014 - Engineering Case #387156)================
Attempting to grant a large number of column permissions, (the actual number
depended on the cache page size), for one table/grantor/grantee combination,
would have caused assertion failure 101506 - "Allocation size too large when
re-allocating memory". This has now been fixed.
================(Build #2014 - Engineering Case #387180)================
The server may have stopped with an 'out of memory' fatal error when it did
not need to do so. For this to have occurred, there must be very few reusable
pages left in the cache.
While this specific problem has been fixed, the server may still fail later
on with an 'out of memory' error.
================(Build #2014 - Engineering Case #387224)================
If a query involved the ROLLUP or CUBE operator over more than 255 columns,
memory would have been corrupted, which could have led to unexpected behaviour.
This was very unlikely to occur in practice, and an error would have been
returned for ROLLUPs or CUBEs on 65 to 255 columns. An error is now returned
for ROLLUP or CUBE operations on more than 64 columns.
================(Build #2017 - Engineering Case #387625)================
In some cases compressing a database that was using checksums with the Compression
utility, could have corrupted it. This has been fixed.
================(Build #2017 - Engineering Case #388068)================
If a connection that had the Wait_for_commit option set to 'On', issued a
request which violated referential integrity, and the next commit was done
as a result of a SET OPTION statement that set a permanent option for a
different user, the server could have failed assertion 104301 "Attempt to
free a user descriptor with non-zero reference count", when the server was
stopped (e.g. by autostop option), or a subsequent connection revoked CONNECT
permission from the user the option was set for. This has been fixed.
================(Build #2017 - Engineering Case #388338)================
The number of rows returned from a Nested Block Join (JNB) could have been
underreported in a Graphical Plan with Statistics. This was a display problem
only, and did not affect the actual execution of the query. It was also not
a problem in a Graphical Plans with estimates only. This has been fixed.
================(Build #2017 - Engineering Case #388488)================
The server could have failed with "fatal error: unknown device error" when
performing a query over a temp table. The query had to return blob values,
and make use of the SortTopN data flow operator. SortTopN most frequently
appears in plans for queries with an ORDER BY clause and a row limit (i.e.
TOP n or FIRST).
For Example:
create procedure test()
result( val char(256) )
begin
    declare local temporary table val_table(
        val char(256) null );
    insert into val_table values( repeat('a',256) );
    select * from val_table;
end;

select first val from test() order by val;
This problem has now been fixed.
================(Build #2019 - Engineering Case #388072)================
The INPUT...PROMPT statement would never have stopped prompting for column
values when Interactive SQL was run in console mode. This problem did not
occur when run in windowed mode. This bug has been fixed.
================(Build #2019 - Engineering Case #388752)================
If a column's COMPUTE clause contained a string constant, or a string constant
expression, the server would have crashed each time the compute expression
was evaluated. This has been fixed.
================(Build #2020 - Engineering Case #388838)================
If a proxy table was created with a column that contained a DEFAULT clause,
then an insert into that table would have failed if the insert explicitly
specified the column, but with a different case for the column name. The returned
error would have been "Duplicate insert column". For example:
create table T1 ( col1 int default 10, col2 int ) at '....';
insert into T1 ( COL1, col2 ) values ( 1, 1 );
This has been fixed.
================(Build #2020 - Engineering Case #389230)================
While running multiple connections that fetched from different proxy tables,
and different remote servers using ASEJDBC, if one connection was killed
then the server's Java Virtual Machine could no longer be started. This has
now been fixed.
================(Build #2023 - Engineering Case #382839)================
When using the Microsoft SQL Server "Import and Export Data" tool to move
tables from a Microsoft SQL Server database to an ASA database, and the connection
to the ASA server used the OLEDB provider, column data was truncated to 200
bytes. This has now been fixed.
================(Build #2023 - Engineering Case #389598)================
If an Open Client application used unsigned datatypes in a Remote Procedure
Call, there was a good chance the application would hang. This problem has
now been fixed.
================(Build #2024 - Engineering Case #389590)================
If a server was started with the -o <file name> server command line option
(filename for copy of message window), then stopped and immediately started
again with the same -o <file name>, the server could fail to start with the
error "Invalid database server command line" or "Can't open Message window
log file: <file name>". This failure was rare and timing dependent. This
has been fixed so the second server will successfully start.
================(Build #2024 - Engineering Case #389796)================
When performing an UPDATE (or possibly a DELETE) using a keyset cursor, over
a table that was joined to itself (i.e. appears multiple times in the UPDATE
or FROM clause), the server could have failed to obtain write locks on rows
it modified. This could have resulted in lost updates, or a corrupted index,
if an update was made to an indexed column. This has been fixed.
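A sketch of the kind of statement that was affected (table and column names
hypothetical): the table appears twice in the statement, and an indexed column
is modified through a keyset cursor.

```sql
-- t joined to itself; before this fix, updating the indexed column val
-- through a keyset cursor could have missed write locks on modified rows
UPDATE t
SET t.val = t2.val
FROM t JOIN t AS t2 ON t.parent_id = t2.id;
```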
================(Build #2025 - Engineering Case #387997)================
A database file may have grown, even when free pages existed in the dbspace,
if the free space was fragmented such that no 8-page cluster aligned on an
8-page boundary existed within the dbspace, and pages for a large table (one
with bitmaps) were being allocated. When growing a large table prior to this
change, the server always allocated table pages in clusters of eight pages
so that group-reads could be performed on a sequential scan. If no clusters
were found, the dbspace was grown to create one. Now, if no free cluster
is found, the server will attempt to allocate pages from a cluster that has
both free pages as well as pages allocated to the table that is being grown.
If no free pages are found by this method, the server will use any free page
in the dbspace. So now the dbspace will not grow until there are no free pages
left in the dbspace.
As a work-around, periodically running REORGANIZE TABLE on all tables will
generally avoid the problem.
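The workaround can be run per table; a minimal sketch with a hypothetical
table name:

```sql
-- Defragments the table's pages, keeping aligned free clusters available
REORGANIZE TABLE my_large_table;
```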
================(Build #2026 - Engineering Case #388906)================
When run on Linux platforms, the server could have crashed on shutdown if
a custom backup library was used to perform backups. This has been fixed.
================(Build #2026 - Engineering Case #389583)================
If a database with auditing enabled was loaded in read-only mode, the database
could have been used, but audit records would not have been created. This
has been fixed so that now databases with auditing enabled cannot be started
in read-only mode.
================(Build #2026 - Engineering Case #390025)================
The server could have crashed while performing a string comparison using
the LIKE operator. This has been fixed.
================(Build #2026 - Engineering Case #390063)================
Assert failures 200602 and 200603 indicate that when truncating a table,
a mismatch was detected between the number of pages in the table and the
counts recorded in the table's in-memory data structures. These assertions
would have brought down the server, even if the table concerned was a temporary
table. In the case of a temporary table, this likely does not warrant bringing
the server down. This behaviour has been changed so that now the assertion
text is written to the console log, and the server is allowed to continue
running. Also, the text of these assertions has been changed to indicate
the table in question.
================(Build #2026 - Engineering Case #390133)================
If only the second member of a UNION, INTERSECT, or EXCEPT query contained
multiple Transact-SQL variable assignments, then the server would have crashed.
This has been fixed. Now the engine will return a syntax error if any member
other than the first contains Transact-SQL variable assignments, or an "into <tablename>"
clause.
================(Build #2026 - Engineering Case #390765)================
If an Open Client dblib application attempted to fetch a tinyint value using
the dbdata() function, instead of dbbind(), then the application would always
get the value 0, instead of the actual tinyint value. Note that this problem
only occurred if the tinyint value was nullable. This problem has now been
fixed.
================(Build #2027 - Engineering Case #391182)================
If the text of a stored procedure was made unreadable with the HIDDEN clause,
and it contained a call to itself with at least one parameter, then any call
to this procedure would have failed with the error "Wrong number of parameters
to function 'proc_name'". The error would have disappeared after restarting
the database or reloading the procedures due to the execution of a DDL statement.
This has now been fixed.
================(Build #2027 - Engineering Case #391357)================
If the text of a stored procedure, view or trigger was made unreadable using
the HIDDEN keyword, and the definition string was invalid, the server may
have crashed. This has been fixed.
================(Build #2027 - Engineering Case #391751)================
Numerous changes have been made to the system procedures used by jConnect
when connected to ASA servers. Newly created databases will have these changes,
but to update existing databases run the script jcatalog.sql, which is in
the scripts directory.
================(Build #2028 - Engineering Case #392029)================
The server would have returned either -9 or a conversion error from the following
query, depending on the platform:
select hextoint('fffffffffffffff7');
This problem has been fixed. The server will now consistently return -9
for the result.
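The value -9 follows from truncation to the low-order 32 bits: 0xFFFFFFF7
interpreted as a signed (two's complement) 32-bit integer is -9. A shorter
literal illustrates the same interpretation (a sketch of the behaviour described
above):

```sql
-- 0xFFFFFFF7 as a signed 32-bit integer is -9
SELECT HEXTOINT('fffffff7');
```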
In furthering compatibility with Adaptive Server Enterprise, the following
example now returns an error, since it contains invalid hexadecimal characters:
select hextoint('ffffJackf7')
================(Build #2028 - Engineering Case #392068)================
When backing up a database to multiple tapes, after the first tape had been
written, the request for the next tape would have failed. This problem has
been fixed.
================(Build #2032 - Engineering Case #392216)================
If a proxy table contained a column declared with DEFAULT AUTOINCREMENT,
and an insert into that table did not contain a value for that column, the
server may have crashed. For this to have happened, one of the column values
in the insert statement had to be an expression or function call that needed
to be evaluated. This has been fixed.
================(Build #2032 - Engineering Case #392500)================
When the server was run on Unix systems, if the public.string_rtruncation
option was ON when a database started, and the value returned by property(
'CompactPlatformVer' ) was more than 40 characters long, user options may
not have been set correctly, and other incorrect behaviour could have occurred.
This has been corrected.
Windows platforms were not affected, since the length returned for 'CompactPlatformVer'
is much less than 40 characters.
================(Build #2032 - Engineering Case #392502)================
If a Java application closed an SAConnection object, and then subsequently
called the 'isClosed' method on the same object, an exception would have
been thrown erroneously. This has been fixed.
================(Build #2032 - Engineering Case #392668)================
If an application used the Remote Data Access feature to perform an INSERT
from SELECT in 'no passthru' mode, and the insert received an error, it was
possible for the server to have crashed. This problem has now been fixed.
================(Build #2032 - Engineering Case #393434)================
On Windows platforms, when the Transaction Log, Log Translation or Backup
utilities are executed on a database with auditing enabled, a file called
"<db filename>.alg" is created or updated. A record is added containing the
date and time of execution, the Windows user name, and the name of the utility
executed. If this file already existed, but the new record could not be written
because the disk was full, the utility would have ignored the error and continued.
This has been fixed: now, if the audit record cannot be created, the utility
fails, and a message is displayed to indicate that the audit record could
not be created.
================(Build #2033 - Engineering Case #393746)================
If a space did not follow the method name in the EXTERNAL NAME clause of
the wrapper function to a Java method, calls to the function would have resulted
in a procedure not found error.
For example:
a wrapper function definition of
CREATE FUNCTION MyMeth (IN arg1 INT, IN arg2 varchar(255),IN arg3 INT )
RETURNS Int
EXTERNAL NAME 'TestClass.MyMethod(ILjava/lang/String;I)I'
LANGUAGE JAVA;
would have resulted in the error
Procedure 'TestClass.MyMethod(ILjava/lang/stringI)I' not found.
whereas
EXTERNAL NAME 'TestClass.MyMethod (ILjava/lang/String;I)I'
would have worked. This has been fixed so that a space is no longer required
between the method name and the left parenthesis.
================(Build #2033 - Engineering Case #393923)================
If a procedure or function called a Java method, and the Java code used the
current connection to run a SQL statement that called another procedure or
function that also called a Java method, the server would have crashed if
the two procedures or functions both had a return statement. This has now
been fixed.
================(Build #2034 - Engineering Case #393022)================
If an expression contained a reference to a proxy table, the server would
have crashed if the expression was used:
- in a MESSAGE or PRINT statement
- in a RETURN statement of a function or procedure
- in a time/delay expression in a WAITFOR statement
- in an offset expression of a FETCH statement
This has been fixed so that the server now correctly returns the error "OMNI
cannot handle expressions involving remote tables inside stored procedures"
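For example, a function of the following shape (the proxy table name is
hypothetical) now reports the OMNI error instead of crashing the server:

```sql
-- remote_t is a hypothetical proxy table mapped to a remote server.
CREATE FUNCTION remote_row_count() RETURNS INTEGER
BEGIN
    -- An expression referencing a proxy table in a RETURN statement:
    -- previously this could crash the server; it now returns the
    -- "OMNI cannot handle expressions ..." error instead.
    RETURN ( SELECT COUNT(*) FROM remote_t );
END;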
================(Build #2037 - Engineering Case #393776)================
The reload of a database using the reload.sql file generated by the Unload
utility would have failed when executing a LOAD STATISTICS statement. This
would have occurred if the column specified was of type char, varchar or
binary, its size had been changed from less than 8 bytes to 8 bytes or more,
and column statistics had been generated before the column's size was altered.
This has been fixed. The server now drops column statistics if the column
size changes.
================(Build #2037 - Engineering Case #394668)================
The ON EXISTING UPDATE clause of the INSERT statement can be used to update
rows that already exist in the database. By default, columns with default
values in existing rows should be left unmodified unless their values are
explicitly changed by the INSERT statement. Under some circumstances, the
server could have modified these columns incorrectly. This problem has been
resolved.
As a simplified example consider the following:
drop table a;
CREATE TABLE a (
a1 INTEGER NOT NULL,
a2 INTEGER NOT NULL,
a3 INTEGER NOT NULL DEFAULT AUTOINCREMENT,
a4 INTEGER NOT NULL,
PRIMARY KEY ( a1, a2 ) );
INSERT INTO a VALUES( 1, 1, 1, 1);
INSERT INTO a VALUES( 2, 1, 2, 2);
commit;
INSERT a ON EXISTING UPDATE WITH AUTO NAME
SELECT 1 AS a1, 99 AS a2, 11 AS a4
union all
SELECT 2 AS a1, 1 AS a2, 88 AS a4;
The INSERT statement should:
1. Insert a new row into table a with PKEY <1,99>, and
2. Update the value of a.a4 to 88 in the row with PKEY <2,1>. The default
column a.a3 in this row should remain unchanged.
================(Build #2038 - Engineering Case #393745)================
When running the reload.sql generated by the Unload utility, executing LOAD
STATISTICS statements may have failed. This would have occurred if the column
was of type binary or long binary, and the source database and the target
database had different collations (e.g. one had a single-byte collation and
the other a multi-byte collation).
This has been fixed so that the statistics of binary columns are now only
loaded if both databases have the same collation.
================(Build #2041 - Engineering Case #395054)================
If the database option Wait_for_commit was set to ON while executing a LOAD
TABLE statement, and it failed with a referential integrity error, then the
database could have been left in an inconsistent or corrupt state. This has
been fixed.
Some of the errors that might be characteristic of this problem are:
- Assertion 200602 - Incorrect page count after deleting pages from table
'table_name' in database 'database_name' - could occur during TRUNCATE TABLE
or DROP TABLE.
- Database validation could report that an index has inaccurate leaf page
count statistics.
- Database validation could report that a foreign key is invalid and that
some primary key values are missing.
- Database validation could report that the rowcount in SYSTABLE is incorrect.
- Inconsistent row counts might be observed when querying the table sequentially
versus via an index.
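A minimal sketch of the failing sequence, assuming a hypothetical parent/child
schema with a foreign key from child to parent:

```sql
-- Defer referential integrity checks until commit.
SET OPTION PUBLIC.Wait_for_commit = 'On';

-- If child.txt contains foreign key values with no matching row in
-- parent, the statement fails with a referential integrity error;
-- before this fix, that failure could leave the database in an
-- inconsistent or corrupt state.
LOAD TABLE child FROM 'child.txt';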
================(Build #2041 - Engineering Case #395908)================
If an Open Client or jConnect application described a column that was of
type Varchar(n) or Varbinary(n), the size reported for the column would have
been 32768 instead of n, if n was greater than 255. This problem has now
been fixed.
================(Build #2041 - Engineering Case #396464)================
After having established an HTTPS connection, the server may have lost data
when receiving a large request. Data loss may have been experienced intermittently
when requests were in the range of 10K bytes or greater, and may have occurred
when receiving either POST or GET requests. This has been fixed.
================(Build #2043 - Engineering Case #396058)================
Inserting a string longer than 64K bytes into a column of a proxy table would
have caused the local server to crash. This has been fixed.
================(Build #2044 - Engineering Case #396571)================
Calling the OPENXML() function with an xpath expression that was NULL would
have caused the server to crash. This has been fixed. Note that this did
not happen with xpath expressions in the WITH clause.
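For illustration, a call of the following shape (the XML value here is made
up) previously crashed the server when the second argument was NULL, and is
now handled safely:

```sql
SELECT *
FROM OPENXML( '<root><row id="1"/></root>', NULL )  -- NULL xpath argument
     WITH ( id INT '@id' );  -- xpaths in the WITH clause were unaffected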
================(Build #2044 - Engineering Case #396813)================
Changes for Engineering Case 395908 introduced a bug such that long binary
and long varchar columns were truncated for Open Client and jConnect applications.
This problem has been fixed.
================(Build #1814 - Engineering Case #342899)================
When listing indexes not used by the server during the workload capture step
of the Index Consultant, indexes owned by "SYS" and "dbo" could also have
been listed. Consequently, 'DROP INDEX' statements would have been added
to the generated script for these indexes. The indexes would never actually
have been dropped, as the server would have reported errors when the script
was run. This has been fixed.
A workaround is to delete the 'DROP INDEX' statement in the generated script
for system tables.
================(Build #1816 - Engineering Case #344173)================
If a new procedure was created after the Breakpoint Creation dialog had been
used, it would not have appeared in the drop down list of procedures in the
Breakpoint Creation dialog. This has been fixed.
================(Build #1817 - Engineering Case #344176)================
Sybase Central's Log Viewer dialogs (Filter Events, Find etc) were not closed
when the escape key was pressed. The escape key will now close these dialogs.
================(Build #1817 - Engineering Case #344384)================
Removing the PUBLIC group from the SYS group, by executing the statement
"REVOKE MEMBERSHIP IN GROUP SYS FROM PUBLIC", would have prevented connections
to the database from Sybase Central. The workaround is to add the PUBLIC
group back to the SYS group by executing the statement "GRANT MEMBERSHIP
IN GROUP SYS TO PUBLIC".
When Sybase Central connects, it executes a query that references the "SYSPROCPARM"
table. This query did not qualify the table with its owner "SYS", so the
table was not found and the query failed. This problem was fixed by qualifying
the reference to "SYSPROCPARM" with the owner "SYS".
================(Build #1818 - Engineering Case #344647)================
When using the Sybase Central Performance Monitor, if a statistic was added
to the chart, the chart was displayed, and then another statistic was added
from its property sheet, the chart would not have displayed the second statistic
in its legend list. This has been fixed.
================(Build #1818 - Engineering Case #344976)================
On the last page of the Sybase Central Index Consultant, the button 'Run
Script' was active, regardless of whether there was a script to run or not.
If no indexes had been recommended for creation or dropping, clicking the
button would have caused the server to report a syntax error, but have no
other effect. This has now been fixed so that the button is disabled if
no recommendations have been made, which is consistent with the ISQL Index
Consultant.
================(Build #1818 - Engineering Case #345060)================
When in the Code Details panel for a procedure while in debug mode, if the
connection was disconnected, a NullPointerException would have been reported.
This has been fixed.
================(Build #1819 - Engineering Case #344576)================
It was possible that the SQL displayed for a procedure, function, view or
trigger could have been corrupted in the right-pane or in a separate editor
window. This would only have occurred if the SQL contained comments before
the owner and name, or if the SQL for a
procedure or view did not contain an open parenthesis after the name, which
might have been the case if the procedure or view took no parameters and
was in the Transact-SQL dialect. This has been fixed.
================(Build #1819 - Engineering Case #345202)================
When using Sybase Central to modify a table by adding or removing columns,
or changing the size of one or more existing columns, the maximum row width
displayed on the Miscellaneous page of the table's property sheet would not
have been updated accordingly. This has been fixed.
================(Build #1820 - Engineering Case #346142)================
Attempting to unload a database created with a version prior to 7.0.0, with
the Unload Database wizard, would have caused Sybase Central to crash when
the table selection page was about to be displayed. This has been fixed.
Now, the table selection page is not available when unloading a pre-7.0.0
database, since the lists of tables and users are not available.
================(Build #1821 - Engineering Case #345968)================
Attempting to cut or copy a procedure, trigger or view to the clipboard,
would have caused Sybase Central to appear hung. In actual fact, it was simply
taking a very long time to perform the cut or copy operation. This has been
fixed.
Note that this problem would only have occurred if the actual object was
cut or copied from the Procedures and Functions, Triggers or Views folder,
but would not have occurred if the object's source was simply copied from
the editor in the right-pane.
================(Build #1822 - Engineering Case #346477)================
If a table object was cut or copied to the clipboard, or dragged and dropped
into another application, then any column comments would not have been included
in the SQL. This has been fixed.
================(Build #1824 - Engineering Case #345916)================
Any attempt to create a procedure, function, trigger or event using the wizards,
would have failed if the database was running on a version 7.0.x server.
This was due to the wizards creating code templates with BEGIN...END blocks
containing only a comment. This syntax was not valid on a version 7.0.x server.
Now, if the database is running on a version 7.0.x server, the BEGIN...END
block will contain a PRINT statement instead.
================(Build #1824 - Engineering Case #346282)================
If no users were selected in the Filter Objects by Owner dialog, then any
attempt to use a wizard to create a table, proxy table, view, procedure,
function, remote procedure or publication would have resulted in Sybase Central
crashing. This has been fixed.
Note that these objects still cannot be created using a wizard when all users
are filtered out, since only unfiltered users can be chosen as the owner of
a new object.
================(Build #1824 - Engineering Case #347182)================
The Create Function wizard allowed for creating a Transact-SQL function when
connected to a version 8 or earlier server. This would always have failed,
since Transact-SQL functions are only supported on version 9 servers. Now,
this option is no longer available when connected to version 8 or earlier
servers. In such cases, the dialect page is skipped completely, as Watcom-SQL
is the only choice.
In addition, the Translate to Transact-SQL menu item is now disabled for
functions when connected to version 8 or earlier servers.
================(Build #1825 - Engineering Case #347209)================
Column comments would have been ignored when creating a new table in the
table editor, although column comments for existing tables would have been
preserved. This problem has been corrected. Column comments for new tables
are now preserved.
================(Build #1825 - Engineering Case #347269)================
The calendars displayed in the Event Schedule dialog and the Translate Log
File wizard were always using Sunday as the first day of the week. Now, the
current locale is used to determine which day should be the first day of
the week. For example, in French,
German, Chinese and Japanese locales, the first day of the week is Monday.
================(Build #1827 - Engineering Case #347350)================
Attempting to use the View Creation wizard to create a view with a SQL statement
that didn't begin with the keyword SELECT, would have caused the error "You
must specify a valid SELECT statement without an ORDER BY clause". This has
been fixed so that the
wizard can be used to create any valid view.
================(Build #1827 - Engineering Case #347722)================
Database connections made before switching to Debug mode, sometimes would
not have appeared in the connection list of the Debugger details panel.
This has now been fixed.
================(Build #1836 - Engineering Case #349486)================
When text in the editor was printed, if any of the text was selected, the
selected text would have been printed with the same foreground and background
colors as displayed in the editor. Selected text is now printed as if it
was not selected.
================(Build #1839 - Engineering Case #350339)================
Attempting to connect to the utility database, by specifying an ODBC data
source, would have caused the database name to not be displayed in the tree
and generated the error "The information required to display the database
in Sybase Central could not be obtained." This has been fixed.
================(Build #1842 - Engineering Case #350816)================
If the Foreign Key wizard was used to create a clustered foreign key with
the 'Check on commit' checkbox checked, or with an Update and/or Delete
referential action specified, the wizard would have failed to create the
foreign key. Instead, a syntax error dialog would have been displayed. An
invalid ALTER TABLE statement was being generated, with the CLUSTERED keyword
placed before the referential action clauses. Now, the CLUSTERED keyword
is placed at the end of the statement.
================(Build #1843 - Engineering Case #352293)================
Moving the mouse over an object in the right panel may have caused an exception
to be thrown when creating the Tooltip string. This has been fixed.
================(Build #1844 - Engineering Case #351546)================
When in the Code Editor, if Auto indent was set to Default or Smart, pressing
Enter with text selected would have added a new line to the selection, rather
than replacing the selection with a new line. The problem did not occur when
Auto indent was set to None. This problem has now been fixed.
================(Build #1848 - Engineering Case #352170)================
If a column in the right pane contained dates, times or timestamps, then
clicking on the columns header to sort the items, would have sorted them
according to the string representation of the date, time or timestamp, not
according to the actual date, time or timestamp value. For example, if sorting
in ascending order the date "April 1, 2004", it
would have appeared before the date "January 1, 2003". This has been fixed;
dates, times and timestamps are now sorted by their actual value.
================(Build #1848 - Engineering Case #352226)================
If one or more columns in a primary key were renamed, but no other changes
were made to the primary key columns, then the primary key would have been
dropped and re-created unnecessarily. Since dropping a primary key also drops
all foreign keys which reference the primary key, dropping primary keys unnecessarily
is to be avoided. Now, a primary key is only dropped and re-created when
the data type, size or scale of one or more of its columns is changed, or
when columns are added to it or removed from it.
================(Build #1850 - Engineering Case #352456)================
A service executable path name containing a dash "-", would have been truncated
at the character immediately before the dash when the path name was displayed
in the Service property sheet. This has now been fixed.
================(Build #1853 - Engineering Case #353047)================
Toolbar buttons may not have been enabled correctly when multiple items were
selected in a details list (right pane). This has been fixed.
================(Build #1858 - Engineering Case #354194)================
When using the Create Function wizard to create a function in the Transact-SQL
dialect, the comment describing the format of the parameter list would have
been incorrect. The comment included the "OUTPUT" keyword which is not applicable
for functions. This has now been fixed.
================(Build #1859 - Engineering Case #354282)================
When creating a column, or changing its type, the list of available system-defined
default values for the binary, varbinary and long binary types included the
date, time and timestamp values. However, if one of these values was chosen
as the default, attempting to save the changes to the column would have resulted
in a "Cannot convert <type> to a binary" error. This has been fixed; the
date, time and timestamp values are now excluded from the list of system-defined
default values when choosing a binary data type.
================(Build #1862 - Engineering Case #355020)================
On the Macintosh, some COMMAND key combinations did not work in the syntax
highlighting editor, although the equivalent CONTROL key combinations did.
On the Macintosh, COMMAND should be used instead of CONTROL; for example,
CONTROL+G goes to a line on Windows, while COMMAND+G does so on the Macintosh.
This has been fixed.
================(Build #1865 - Engineering Case #355516)================
When being asked for the name of a new file in the utility wizards (for example,
the Create Database wizard), clicking the "Browse..." button and choosing
an existing file in the file dialog, would have caused a prompt asking to
replace the file. This was erroneous, since once returning to the wizard
and clicking Next, a "You must specify a new file" error message would have
been displayed. This has been fixed. Now, the file dialog no longer asks
to replace the file when a new file is required.
================(Build #1866 - Engineering Case #355982)================
If an article's property sheet was opened to modify its set of columns, and
the Apply button was then clicked, any further changes made to the set of
columns in the article would have been ignored until the property sheet was
closed and re-opened. This has now been fixed.
================(Build #1868 - Engineering Case #356390)================
The "do not ask again" checkbox, that is shown when deleting a table record,
could have been selected and then the "No" button clicked. This would have
resulted in the dialog never being shown again. In 8.x versions, "Yes" would
always have been assumed, and in 9.x versions, "No" would always have been
assumed. This has been changed so that the buttons now say "Ok" and "Cancel",
and the checkbox is ignored when "Cancel" is pressed.
================(Build #1874 - Engineering Case #357703)================
When attempting to connect to a database by specifying an ODBC data source
that used an 8.x or earlier driver (for example, "ASA 8.0 Sample" that is
installed with 8.x), and choosing to use the iAnywhere JDBC driver, the connection
would have failed with the error "The information required to display the
database in Sybase Central could not be obtained". Using a pre-9.x data source
with the iAnywhere JDBC driver to connect to a database in Sybase Central
is now supported.
================(Build #1875 - Engineering Case #358200)================
When connecting to a database with the ASA plug-in, the connection information
used is now remembered for the current Sybase Central session, so that the
next time the Connect dialog is opened, it contains the previous connection
information. For security reasons, the password is remembered only if the
SQLCONNECT environment variable is defined and contains the same password.
Note that this information is remembered for the current Sybase Central session
only; it is not persisted in the user's .scUserPreferences file because of
security concerns.
================(Build #1877 - Engineering Case #358479)================
When editing a view, trigger, procedure, function, or event in the right
pane of the main Sybase Central window, the Paste menu item and toolbar button
were enabled if the clipboard contained something other than text, such as
an image. In this case, selecting the menu item or clicking the toolbar button
would have done nothing. Now, the Paste menu
item and toolbar button are enabled only when the clipboard contains
text.
================(Build #1879 - Engineering Case #358926)================
Connecting to a database by using the Connect dialog (as opposed to using
a connection profile) and then attempting to open Interactive SQL by right-clicking
the database and selecting the "Open Interactive SQL" menu item, would have
required typing the password before the connection was established in Interactive
SQL. This has been fixed. Now, the connection is established automatically
using the same password that was specified when connecting to the database
in Sybase Central.
================(Build #1879 - Engineering Case #358965)================
On the Table Data page of the Plug-in Preferences property sheet (accessible
by right-clicking "Adaptive Server Anywhere 9" in the tree and selecting
the Preferences... menu item), changing the font selection to another custom
font when a custom font was already selected, would not have enabled the
Apply button. This has been fixed.
================(Build #1896 - Engineering Case #362144)================
The Message Viewer was capturing CTRL-A keystrokes when it did not have focus.
This has been fixed.
================(Build #1896 - Engineering Case #362226)================
On Linux systems, getting information about a table's primary key, by clicking
the 'Details' button on the Property dialog, would have caused a ClassCastException.
This is now fixed.
================(Build #1898 - Engineering Case #362728)================
Turning warnings on or off in the Preferences dialog from the Tools menu
(Tools->Adaptive Server Anywhere 9->Preferences - 'Confirm deletions when
editing table') would have had no effect while table data was being edited.
This has been fixed.
================(Build #1900 - Engineering Case #362803)================
If the help window was opened and closed, and then Sybase Central was minimized,
the help window would have been reopened when Sybase Central was then maximized.
Note that this same problem also affected the Interactive SQL utility dbisql.
This has been fixed.
================(Build #1902 - Engineering Case #363062)================
When running on Linux or Solaris systems, saving a stored procedure or function
as a SQL file would have resulted in the file being written with an extension
of ".null" or ".null.sql", if "All files (*.*)" had been selected in the
"Files of Type" combo box. This has now been fixed.
================(Build #1903 - Engineering Case #358918)================
Sybase Central would not have started if the "Fast launcher" option was turned
on and the TCP/IP port it was configured to use was already in use by some
other program. No errors would have been reported; the application would
simply have failed to run. This problem also affected the Interactive SQL
utility dbisql. This has been fixed.
You can disable the fast launcher by running:
dbisql -uninstall -terminate
scjview -uninstall -terminate
================(Build #1911 - Engineering Case #364974)================
When attempting to save a stored procedure or function as a file, the file
name would have been written with a file extension of ".sql", regardless
of the extension given when "All files (*.*)" was selected in the "Files
of Type" combo box. This has been fixed.
================(Build #1921 - Engineering Case #366983)================
When using the debugger in Sybase Central to debug a Java class with Java
object local variables, a NullPointerException would have been thrown when
stopped at a breakpoint if a local variable was NULL. This has been fixed.
================(Build #1924 - Engineering Case #367470)================
In the "Backup Database" wizard, the "Browse" button on the third page allows
an existing file to be chosen, or to provide the name of a new file. If a
new file name was provided, but without an extension, the value of the file
name text field in the wizard would have ended in ".*" -- which was wrong.
Now, the name typed is copied to the text field verbatim.
================(Build #1925 - Engineering Case #367742)================
When Sybase Central was connected to a database using the iAnywhere JDBC
driver, and a database error occurred, it would have been treated internally
as a closed connection. The error would not have been reported though, as
there really were no closed connections. This has been fixed.
================(Build #1926 - Engineering Case #367886)================
Attempting to use the Proxy Table wizard to create a proxy table for a remote
table that contained a column type not supported by Adaptive Server Anywhere
would have caused Sybase Central to display an internal error. Now, if an
unsupported column type is encountered, the wizard restricts the selection
to only those columns that are supported.
================(Build #1944 - Engineering Case #370840)================
Shutting down the server while debugging database objects, would have caused
Sybase Central to go to 100% CPU usage. This has been fixed.
================(Build #1947 - Engineering Case #372064)================
When using the MacOS version of Sybase Central to edit a table and clicking
in the Data Type or Value column, by default most of the column's width would
have been taken up by the "..." button, leaving very little room to see the
current value. Now, the button is only as wide as required to show the "..."
text.
================(Build #1947 - Engineering Case #372065)================
When run on MacOS systems, after successfully running a utility wizard that
displayed a messages dialog, Sybase Central would have needed to be restarted
in order to use the menu bar items. Otherwise, selecting a menu item would
have provided no response. This has been fixed.
================(Build #1947 - Engineering Case #372069)================
In the MacOS version of Sybase Central, attempting to use the ENTER key in
a non-editable combo box, to commit the current selection in the drop down
list, would have caused the drop down list to be closed, but the current
item would not have been selected. This has now been fixed.
================(Build #1949 - Engineering Case #372181)================
Selecting a table in the tree and then clicking on the Data tab in the right
pane, would have caused any attempt to unload either the table's data or
the entire database to block, until the unload operation was cancelled and
another item was selected in the tree. Note that selecting another tab in
the right pane was not sufficient to remove the block. Now, the unload operation
will proceed regardless of which tab is selected in the right pane, and there
is no need to change the tree selection to proceed with the unload.
================(Build #1951 - Engineering Case #372800)================
When clicking on the "..." button for a table column's data type, the mnemonics
would not have worked initially on the property dialog that appeared. This
has now been fixed.
================(Build #1954 - Engineering Case #373206)================
In the Table wizard, when specifying a value other than the default for the
percentage of free space to be reserved on each page, the 'number of bytes'
value displayed was not updated as the percentage was changed. This has now
been fixed.
================(Build #1955 - Engineering Case #373172)================
In the Translate Log File wizard, selecting 'Include trigger generated transactions'
and 'Include as comments only', would have included the trigger generated
transactions as statements rather than as comments. This has been fixed.
================(Build #1958 - Engineering Case #373752)================
Attempting to duplicate a Group by copying it to the clipboard and then pasting
it into the Users & Groups folder, would not have copied any of the members.
As well, if a Group was copied and pasted into Interactive SQL, the SQL for
the group would not have included the statements required to define its members.
Both of these problems are now fixed.
================(Build #1958 - Engineering Case #373919)================
If the Unload Database wizard was used to unload the data, but not the structure,
from a subset of a database's tables, then all types of tables were listed
in the wizard. This was misleading, since the wizard should only unload the
data from base tables; that is, the data from proxy and global temporary
tables should not be unloaded. Now, only base tables are listed in the wizard
when choosing to unload data only. When choosing to unload the tables' structure,
then all types of tables are listed as before.
================(Build #1965 - Engineering Case #375205)================
When adding or removing Java classes or JAR files in a database, a class
might have appeared in the All Java Classes folder, but not in the Java Packages
or JAR files folder. This has been fixed. Now, all sub-folders of the Java
Objects folder are kept synchronized.
================(Build #1965 - Engineering Case #375256)================
In the Java class description and source details panels, the line and column
indicators were missing from the status bar. This has now been fixed.
================(Build #1966 - Engineering Case #375369)================
When debugging a procedure or a Java class, if the editor was open when the
breakpoints dialog was closed, it scrolled back to line 1. This has been
fixed.
================(Build #1978 - Engineering Case #378033)================
When creating a new breakpoint in the debugger, the procedure list was not
sorted. This has been fixed; now the list of procedures, events, and triggers
is sorted.
================(Build #1979 - Engineering Case #378272)================
Attempting to view the SQL for a view, trigger, procedure or function, that
contained a -- comment anywhere in the "CREATE|ALTER <object-type> [<owner>].name"
prefix, may have caused Sybase Central to appear to hang, at 100% CPU usage.
It was in fact just taking a very long time to parse the SQL. This has been
fixed.
================(Build #1990 - Engineering Case #380163)================
If an Article of a Publication contained two columns such that one column's
name was an exact prefix of another column's name (for example, "id" and
"id2"), then only the shorter named column would have been marked as part
of the Article on the Columns tab of the Article property sheet. This has
been fixed.
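The failure mode described above is a classic prefix-versus-exact name comparison. A minimal sketch of the bug class (the function names are illustrative, not the actual plug-in code):

```python
def find_marked_column_buggy(name, marked):
    # Buggy lookup: a prefix comparison lets the entry "id" also
    # "match" a search for "id2", so the wrong column is reported.
    for entry in marked:
        if name.startswith(entry):
            return entry
    return None

def find_marked_column_fixed(name, marked):
    # Exact string equality avoids the prefix collision.
    return name if name in marked else None

print(find_marked_column_buggy("id2", ["id", "id2"]))  # -> id (wrong entry)
print(find_marked_column_fixed("id2", ["id", "id2"]))  # -> id2
```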
================(Build #1993 - Engineering Case #380793)================
The background color of explanatory text was being set incorrectly in wizards
when Sybase Central was run on Linux. This has been fixed so that the wizard
backgrounds are now transparent.
================(Build #2017 - Engineering Case #387676)================
After creating a procedure using the syntax
CREATE PROC [owner.]name ...
Sybase Central would have incorrectly displayed the procedure's SQL as
ALTER PROCEDURE [owner.]nameCREATE PROC [owner.]name ...
This has been fixed. Now the procedure is displayed as
ALTER PROCEDURE [owner.]name ...
================(Build #2027 - Engineering Case #391584)================
If two tables or views were selected, and the "View data in Interactive SQL"
menu item was clicked, two Interactive SQL windows would have opened, but
data for only one of the tables or views was displayed in one of the Interactive
SQL windows. The other window would often have been empty. This has been
fixed.
A workaround for this problem is to open the Interactive SQL windows separately
-- select one table, click "View data in Interactive SQL", then select the
other table, and click the menu item again.
================(Build #2028 - Engineering Case #392017)================
If a shared connection profile was created, it was not saved if Sybase Central
terminated abnormally after the connection profile dialog was closed. Private
connection profiles were saved properly. This has now been fixed.
================(Build #2044 - Engineering Case #396236)================
After shutting down Sybase Central or Interactive SQL, directories could
have been locked, making it impossible to delete or rename them. This problem
only occurred on Windows platforms, when the "Fast launcher" option was enabled,
and the file browser dialog was used to open or save a file. The native Windows
file dialog changes the current directory; a change has been made to restore
the original directory after the dialog is closed.
================(Build #1752 - Engineering Case #347502)================
When entering a file name in the Export dialog, and omitting the file extension,
an extension which is appropriate to the file type is automatically added.
For example, ASCII format files get a "txt" extension. If the file name entered
ended with a period, the resulting file name would have contained two periods
(e.g. "myFile..txt"). Now, only one period is added before the extension.
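The fixed behavior can be sketched as follows (a hypothetical helper, not the actual Export dialog code):

```python
def with_extension(filename, ext):
    # Strip any trailing periods before appending the extension, so
    # "myFile." yields "myFile.txt" rather than "myFile..txt".
    return filename.rstrip(".") + "." + ext

print(with_extension("myFile.", "txt"))  # myFile.txt
print(with_extension("myFile", "txt"))   # myFile.txt
```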
================(Build #1812 - Engineering Case #341718)================
If the INPUT INTO statement is run with a FORMAT clause and the table does
not exist, for some formats a table is created prior to loading the data.
When executed via DBISQLC, the CREATE TABLE statement generated would have
failed with a syntax error, if the language setting for ASA was anything
other than EN (English). This has been fixed.
Note, this problem did not happen with DBISQL.
================(Build #1816 - Engineering Case #343686)================
Starting with version 8.0.0, it was not possible to connect to a database
using a userid which had an empty string for a password, when using jConnect
and the "Connect" dialog to specify the connection parameters. This was
a bug, and has been fixed.
This problem affected only jConnect connections. A workaround is to use
the iAnywhere JDBC Driver which did not suffer from this restriction.
Note, this restriction affects the ASA and Mobilink plug-ins for Sybase
Central, dbconsole, and dbprdbg as well.
================(Build #1817 - Engineering Case #344168)================
Multiple error messages could have been reported when attempting to fetch
the results of a SQL statement, if the data could not be fetched for any
reason. Now, only the first error message is displayed.
Note, this problem was limited to connections that used the iAnywhere JDBC
Driver.
================(Build #1818 - Engineering Case #344596)================
Specifying the -q commandline option (suppress banner) on the Service Creation
utility dbsvc, without the -y commandline option (delete or overwrite without
confirmation), would not have prevented prompts when modifying or deleting
an existing service. The prompts are now suppressed, and the action will
not be carried out unless the -y switch is also specified.
================(Build #1818 - Engineering Case #348490)================
The Data Source utility dbdsn, now respects the Driver= parameter on Windows
platforms. If the Driver= parameter is included in the connection string,
it will be used to specify the driver to be used for that DSN. The driver
name, (i.e. "Adaptive Server Anywhere 9.0"), is the name listed in the HKLM\SOFTWARE\ODBC\ODBCINST.INI
section of the registry, which contains an entry pointing to the driver DLL.
Notes:
1. Data source entries created using the Driver= parameter, where the driver
is not an ASA driver, cannot then be read or listed by dbdsn.
2. The Driver= parameter is already supported on Unix, but has a slightly
different format - it simply specifies the fully-qualified path to the driver
shared object.
================(Build #1819 - Engineering Case #345325)================
The Information utility, dbinfo, could have returned incorrect results if
another table named DUMMY was in the namespace of the connected user, (specified
using the -c switch on the DBINFO command line). This has been fixed, by
qualifying an unqualified reference to the table DUMMY with the user SYS.
================(Build #1821 - Engineering Case #345634)================
If the server issued an error message in response to committing changes,
the error message would not be displayed if the commit was a side-effect
of shutting down DBISQL. This situation could occur if the dbisql option
Wait_for_commit was 'On'. Now the message is always displayed.
================(Build #1821 - Engineering Case #346263)================
The following problems could have been seen when launching or running the
graphical administration tools (i.e. Sybase Central, DBISQL, DBConsole, MobiLink
Monitor):
1. A crash on startup -- The Java VM may have reported that an exception
occurred in the video card driver.
2. Painting problems -- On Windows XP, the task switcher that comes with
Windows XP Powertoys caused the administration tools to paint incorrectly
when switching through the list of tasks.
These problems have been fixed. A workaround is to disable the computer's
use of DirectDraw and Direct3D acceleration.
================(Build #1822 - Engineering Case #345969)================
The Server Location utility, dblocate, would have ignored the timestamp and
listed any server it found in the LDAP directory, regardless of the timestamp.
This problem has been fixed.
================(Build #1822 - Engineering Case #346759)================
The File Hiding utility, dbfhide, can now be used to obfuscate .ini files
used by the server, or any of the utilities (e.g. util_db.ini, asaldap.ini,
etc.).
================(Build #1824 - Engineering Case #345777)================
The system procedure sa_validate would have returned an empty result set,
instead of an error, when run against a version 8 database that had errors.
This has been fixed to correctly return the error in the result set.
================(Build #1826 - Engineering Case #348793)================
It was possible to edit the result set of a query, even though some, or all,
of the primary keys were not included in the result set. Now, the result
set can only be edited if all of the primary key columns are included, or
the table has no primary key. These conditions are in addition to the existing
ones: all columns must come from one table, and no Java columns may be
included.
Updating rows without the entire primary key being in the result set, could
have inadvertently modified or deleted more than one row.
Some examples, using the sample database (ASADemo):
1. SELECT * FROM customer
The query includes all primary key columns from the
"customer" table, so the results are editable.
2. SELECT year, quarter FROM fin_data
The query does not include all of the primary key columns
("code" is missing), so the results are not editable.
================(Build #1827 - Engineering Case #347621)================
When the character set used by a dbunload connection was a Multibyte Character
Set, and was different from the OS character set, rebuilding a database using
either the -an or -ar command-line options, could have caused the last multibyte
character in any comment (for example, a table comment) to have become mangled.
When using dbunload without the -an or -ar options, the mangled multibyte
characters could have been found in the reload.sql file, with the last byte
in the form of hex string (\x??). This has been fixed.
================(Build #1827 - Engineering Case #347779)================
The Unload utility dbunload, or Sybase Central's Unload Database wizard,
could have failed with a syntax error if a comment on an integrated login
id contained a double quote character. Unlike other types of comments, double
quotes were used to enclose the comment string, but any double quotes in the
string were not being doubled. Now the comment is enclosed in single
quotes, and any single quote or escape characters are doubled.
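The quoting rule described above can be sketched like this (an illustrative helper; the real unload code and its escape handling may differ):

```python
def quote_comment(comment):
    # Enclose the comment in single quotes, doubling any embedded
    # single quote or escape (backslash) characters so the generated
    # statement stays syntactically valid.
    escaped = comment.replace("\\", "\\\\").replace("'", "''")
    return "'" + escaped + "'"

print(quote_comment('login for "DOMAIN\\user"'))
# -> 'login for "DOMAIN\\user"'
print(quote_comment("O'Brien's login"))
# -> 'O''Brien''s login'
```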
================(Build #1829 - Engineering Case #348348)================
If the isql option ON_ERROR was set to EXIT, an "Out of Memory" exception
could have been reported when attempting to display a long binary column
when in console mode. This has been fixed.
================(Build #1829 - Engineering Case #350823)================
The Database Unload and Database Extract utilities dbunload and dbxtract,
did not recognize temporary tables, and so did not avoid generating LOAD TABLE
statements for them. The change for Engineering Case 347825 resulted in an
error when rebuilding a database containing a Global Temporary table, if the
table was created with the ON COMMIT DELETE ROWS clause. Both utilities now
avoid generating LOAD TABLE statements for Global Temporary tables.
================(Build #1832 - Engineering Case #313786)================
The Database Initialization utility dbinit, could have failed if the SQLCONNECT
environment variable specified a database file name (DBN). This has been
fixed so that the SQLCONNECT environment variable does not affect dbinit.
================(Build #1832 - Engineering Case #348115)================
If the Database Unload utility dbunload was run with the -ar command line
option (rebuild and replace database), using a database that had already
been started on a database server, it may have failed to rebuild the database.
The original database would have been deleted and the rebuilt database and
log file would still have an extra "R" at the end of the filename. The log
offset values would not have been reset to the values in the original database,
although the data in the rebuilt database would have been identical to the
original database. This has now been fixed.
================(Build #1832 - Engineering Case #349036)================
When using the Database Erase utility dberase, Database Translate utility
dbtran, Database Extract utility dbxtract, or Database Unload utility dbunload,
and specifying the -q command line option (quiet: do not print messages),
without the -y command line option (overwrite files without confirmation),
the user would still have been prompted with a message asking whether to
overwrite an existing file. The prompts are now suppressed, and the action
will not be carried out unless the -y option is also specified.
================(Build #1832 - Engineering Case #349182)================
If the Transaction Log utility dblog was run with the -il, -ir, -is, -x or
-z command line options, or the Database Unload utility dbunload was run
with the -ar command line option, they may have crashed and left the database
in a state such that it could no longer be started. The server would have
reported that the database had been used more recently than the log file
and failed to start. These utilities have now been fixed.
================(Build #1836 - Engineering Case #349675)================
The QueryEditor dialog was not resizable. This has been fixed so that the
dialog can now be resized.
================(Build #1837 - Engineering Case #349930)================
If incorrect options were used with the Unload Database utility dbunload,
it could have crashed after displaying the usage. This has been fixed.
================(Build #1839 - Engineering Case #350188)================
If a database created using SQL Anywhere 5.5 included tables and procedures
used for jConnect, these tables and procedures would have been included in
the reload.sql script generated by the Database Unload utility dbunload.
These objects will now be excluded from the reload script.
Note, that upgrading a 5.5 database using the Database Upgrade utility causes
the jConnect objects to be replaced with ones owned by dbo, so an upgraded
database does not have this problem.
================(Build #1839 - Engineering Case #350252)================
The Interactive SQL utility dbisql, could have reported an OutOfMemory exception
if it encountered a problem fetching a result set. This problem has now been
fixed.
================(Build #1843 - Engineering Case #351394)================
If a query had duplicate ORDER BY items, opening it in the Query Editor would
have caused its parser to generate an error.
For example:
SELECT emp_fname, emp_lname
FROM employee
ORDER BY emp_fname, emp_fname
or:
SELECT emp_fname, emp_lname
FROM employee
ORDER BY 1, 1
This has now been fixed; duplicate ORDER BY items are now ignored by
the Query Editor's parser.
================(Build #1843 - Engineering Case #351416)================
Dragging a column header, in either the Connection Viewer or the Property
Viewer of the Console utility dbconsole, may have caused it to report an
error, or even crash. This has now been fixed.
================(Build #1846 - Engineering Case #351851)================
The Help button on the Connect dialog was not enabled for the Console utility
dbconsole. It is now enabled.
================(Build #1847 - Engineering Case #352893)================
In general, when dbisql is run on Windows, if the keyboard focus is in a table
which is parented to a tabbed pane, pressing TAB should move the focus to the
next component, provided that a cell value in the table is not being edited.
This behavior was broken and has now been fixed.
Note, this problem occurred in all the Java Administration Tools.
================(Build #1852 - Engineering Case #352900)================
If dbisql was connected using the iAnywhere JDBC Driver (which is the default),
and the server was shut down while it was fetching rows from a result set,
an endless series of dialogs stating that the connection had been terminated
could have resulted. This problem has been fixed, although its occurrence
would have been rare, as the server would have had to shut down after a row
was fetched, but before the column values were read.
================(Build #1855 - Engineering Case #353335)================
When a "rollback to savepoint" statement was executed, the table operations
prior to the execution of the "savepoint" statement may not have been translated
by the Log Translation utility dbtran. This would have occurred if the rollback
operations had any trigger actions, and the dbtran command line option -t
was not used. This problem is now fixed.
================(Build #1856 - Engineering Case #351069)================
Attempting to print the graphical plan when running in a language other than
English, would have caused dbisql to fail with an internal error. This has
been fixed.
================(Build #1859 - Engineering Case #354337)================
An internal error (IllegalArgumentException) could have been reported by
dbisql, when an attempt was made to edit the result set of a stored procedure.
The result set should not have been editable in the first place. This has
now been corrected.
This problem would only have occurred when connecting using the iAnywhere
JDBC Driver.
================(Build #1864 - Engineering Case #355262)================
Clicking the "SQL/Start Logging" menu item, and selecting an existing file,
would have caused a dialog to be opened which asked if it was OK to overwrite
the file. The dialog was misleading as dbisql always appends to the log file,
it never overwrites it. This prompt has now been removed.
================(Build #1865 - Engineering Case #354617)================
If dbisql reported an internal error, the password used in the current connection
(if any) was shown in clear text in the error details. It has now been replaced
by three asterisks. Note that passwords given as part of a "-c" command line
option are still displayed in clear text in the error details.
================(Build #1870 - Engineering Case #355787)================
An internal error could have been reported in response to pressing the DELETE
key when an uneditable result set was displayed in the "Results" panel and
the results table had the focus. This has been fixed.
================(Build #1878 - Engineering Case #358151)================
The Interactive SQL utility dbisqlc, did not display help when the "Help
Topics" menu was selected. This has been fixed.
================(Build #1878 - Engineering Case #358679)================
The drop down boxes on the Joins page of the QueryEditor did not allow scrolling
from the keyboard, only the mouse. This is now fixed.
================(Build #1879 - Engineering Case #358783)================
The Unload utility dbunload, would have reported character set conversion
errors when the UNILIB character set conversion tables (the .uct files under
the charsets\unicode folder) were not deployed with the database server.
The server will no longer report an error, but if the UNILIB character set
conversion tables are not present for the server, object names which contain
non-English characters can be mangled in messages, when the database character
set is different from the client character set.
================(Build #1885 - Engineering Case #359844)================
If "XML" was chosen in the "Files of type" setting in the "Save" dialog,
the file would have been saved with an extension of "sql" rather than "xml".
The file contents were correct, just the file name was wrong. This problem
affected only Linux and Solaris platforms, and has been fixed.
================(Build #1886 - Engineering Case #359828)================
When the Histogram utility dbhist, was run on non-English systems, mangled
characters would have been shown in the title, legend, etc, of the generated
Excel chart. This problem has now been corrected.
================(Build #1886 - Engineering Case #360001)================
The context menu for Results tables could have appeared far from the mouse
pointer if the whitespace to the right or below the actual table data was
clicked on. This has been fixed so that the context menu always appears where
the mouse was clicked.
================(Build #1892 - Engineering Case #361206)================
When run on Unix systems, the Data Source utility dbdsn, required write
permission on the .odbc.ini file, even if just listing or reading the DSNs.
This has been fixed; now only read permission is required, unless the -d or
-w options are used.
================(Build #1892 - Engineering Case #361307)================
Executing a CREATE DATABASE statement which contained a CASE clause could
have failed as not executable, if the statement appeared in a .SQL file with
other SQL statements or was executed interactively along with other statements.
This has been fixed.
The problem only occurred when attempting to execute a CREATE DATABASE statement
with other statements, as part of a batch, and affects DBISQL version 7.0.0
and later.
================(Build #1893 - Engineering Case #361518)================
On Windows XP (or other Windows versions with the Luna look, e.g. Windows
2003), with Luna-style GUI elements enabled, as opposed to Classic Windows
elements, the hard-copy printout of the query plan would have been unreadable.
The DFO tree would have been displayed, but each operator would be a
solid-filled coloured box. This has been fixed.
A workaround is to temporarily change the Display Properties of the machine
to use the Windows Classic style while working with ISQL.
================(Build #1893 - Engineering Case #361599)================
Executing a STOP DATABASE statement which attempted to stop a database running
on a server to which you were not currently connected, would have resulted
in dbisql failing with an internal error. This has been fixed.
================(Build #1895 - Engineering Case #360904)================
The reload.sql file generated by the Database Unload utility dbunload, may
have contained garbled object names if the connection character set and database
character set were different. This problem has now been fixed.
================(Build #1896 - Engineering Case #360041)================
On Linux systems, an error dialog window could have been opened without it
being made the active window. This made it almost impossible to close without
using a mouse, which is contrary to the Section 508 accessibility guidelines.
This has been fixed so that the window is now always activated.
================(Build #1898 - Engineering Case #362497)================
The OUTPUT statement could have failed to write any rows to a file, even
if there were rows to write, if the "Output_format" option was set to an
invalid value. Now, it is impossible to set the "Output_format" option to
an invalid value. When connecting to a database in which the option has been
set to a bad value, the bad value is ignored and the default (ASCII) is assumed.
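The validation described above amounts to checking the stored option value against a whitelist and falling back to the default. A sketch (the format set here is an illustrative subset, not the definitive list dbisql accepts):

```python
# Illustrative subset of OUTPUT formats; the real dbisql list may differ.
VALID_FORMATS = {"ASCII", "DBASE", "EXCEL", "FIXED", "HTML", "SQL", "XML"}
DEFAULT_FORMAT = "ASCII"

def effective_output_format(stored_value):
    # Ignore an invalid stored value and fall back to the default,
    # instead of silently writing no rows.
    value = stored_value.strip().upper()
    return value if value in VALID_FORMATS else DEFAULT_FORMAT

print(effective_output_format("xml"))    # XML
print(effective_output_format("bogus"))  # ASCII
```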
================(Build #1900 - Engineering Case #362860)================
On slower machines, the Index Consultant may have reported that the analysis
was unable to complete. In this case, the error dialog would have appeared
immediately after tuning was started. This has been fixed by waiting until
the dialog class that reports on the tuning process is loaded before proceeding.
A workaround is to wait approximately two seconds between each mouse button
click on the tuning parameter dialog pages (that is, after workload selection/capture
dialog).
================(Build #1902 - Engineering Case #362936)================
If a query was executed that caused errors as rows were fetched, the same
error could have been reported multiple times. Now, identical errors are
reported only once per result set.
================(Build #1908 - Engineering Case #363434)================
Interactive SQL dbisql (and Sybase Central), may have appeared to not start
when run on a machine connected via Terminal Services / Remote Desktop. In
fact, these programs do start, but their windows appear on the console (i.e.
the physical machine) instead of the remote desktop. This problem only occurred
if the fast launcher option for the program was enabled (which is the default).
The programs now detect whether they are running within the context of a
remote desktop, and if that's the case, they do NOT use the fast launcher.
================(Build #1910 - Engineering Case #364820)================
If the font used for displaying result sets was changed, the new font was
not applied to those cells which contained NULL values, until dbisql was
restarted. Now, the fonts used in those cells are updated immediately.
================(Build #1911 - Engineering Case #364921)================
Interactive SQL dbisql could have failed with an internal error when rows of
a table were selected and then the DELETE key was pressed to delete them. The
following conditions had to be true for the error to have occurred:
- There must have been more than about 125 rows in the table
- The rows had to have been selected using the keyboard
- The table was scrolled while selecting, past the initial 125 rows
- The "Show multiple result sets" option was OFF.
This problem has been fixed.
In a related issue, if rows were selected, then CTRL+C was pressed to copy
them, extra lines of empty values would have been selected after the last row.
All the table data would have been copied correctly; the error was the addition
of the blank rows. This has also been fixed.
================(Build #1916 - Engineering Case #365537)================
When rebuilding a database with the Unload utility dbunload, and using the
command line options -an or -ar along with the -ap option, a START connection
parameter in an ODBC data source was ignored, if the connection string contained
a DSN parameter and no START parameter. This problem has been fixed.
================(Build #1916 - Engineering Case #365928)================
When rebuilding a database with the Unload utility dbunload and using the
-ar or -an command line options, if the original database had page checksums
enabled, the new database would not have had page checksums enabled. This
has been fixed.
================(Build #1918 - Engineering Case #365940)================
If the trantest sample application (PerformanceTransaction), was executed
with -a odbc -n <threadcount>, and the thread count was higher than one, it
may have crashed in NTDLL.DLL. This has been fixed.
================(Build #1927 - Engineering Case #368023)================
Attempting to run dbconsol.nlm and connect to a server would have caused
an abend. This has been fixed.
NOTE: 6.x versions of dbconsol on all platforms fail to connect to servers
after version 9.0.0 build 1223. This is still the case, and there is no
workaround.
================(Build #1928 - Engineering Case #368170)================
When executing a READ statement with parameters, if the file being executed
did not contain a PARAMETERS statement, but contained an identifier within
braces, dbisql could have reported an internal error. This has been fixed.
================(Build #1928 - Engineering Case #368348)================
Starting dbisql with the "-f" command line option to load a graphical plan
file, could have caused it to hang on startup. The symptom was that the splash
screen would open, but not close, and the main window would not open. This
problem has now been fixed.
================(Build #1932 - Engineering Case #368825)================
If a datasource name was specified on the command line that contained an
encrypted password, dbisql would not have immediately connected to the database,
but would have first displayed the "Connect" dialog. Now an attempt is made
to connect immediately, without first displaying the "Connect" dialog.
================(Build #1932 - Engineering Case #369150)================
If the server command line passed to the Spawn utility dbspawn contained
the @filename option, dbspawn would have expanded the contents of the file
and then spawned the server. This meant that the server command line would
have included the contents of the file. If the file contained certificate
passwords or database encryption keys, they would then be visible through
the 'ps' command or equivalent. This has been changed; dbspawn will no longer
expand the @filename parameter.
================(Build #1934 - Engineering Case #369491)================
When running on Unix systems, the Interactive SQL utility dbisql, would have
displayed the usage message when a full path to a SQL script file was given.
The leading '/' was being interpreted as a command line switch. This
has been fixed.
================(Build #1936 - Engineering Case #369842)================
On Windows systems, the Data Source utility dbdsn, would not have listed
all of the data sources if the total length of all of the names exceeded
about 1024 bytes. This has been fixed.
================(Build #1938 - Engineering Case #370180)================
If a version 8.x database was being used as a MobiLink Consolidated database,
and at least one remote user had successfully synchronized multiple publications,
then dbupgrad would have failed to upgrade the database to version 9.x. The
Upgrade utility dbupgrad would have reported a primary key violation on the
ml_subscription table, or in some circumstances the server could fail an
assertion. This has been fixed.
================(Build #1938 - Engineering Case #370333)================
If a version 8.x database was being used as a MobiLink Consolidated database,
and at least one remote user had successfully synchronized multiple publications,
attempting to rebuild using dbunload would have failed when the reload.sql
file was run against an empty database. A primary key violation on the ml_subscription
table would have occurred. This has now been fixed. Note that the problem
did not occur when using the -an or -ar dbunload command line options.
================(Build #1939 - Engineering Case #370567)================
The Unload utility dbunload, may have crashed if the command line option
-ar (rebuild and replace database) was used with a database that had no online
transaction log. This problem has been fixed.
================(Build #1940 - Engineering Case #370724)================
The Unload utility dbunload, would have silently placed old transaction log
files into the root directory, when the command line option -ar (rebuild
and replace database) was used and no log directory was specified, for databases
that were involved in synchronization/replication using RepAgent. Now, if
this situation occurs, the old transaction log file will be placed in the
log directory of the database.
================(Build #1940 - Engineering Case #370726)================
The Interactive SQL utility dbisql, would have failed to process statements
in a file read by the READ command, if the file used UTF8 encoding and the
file started with a byte order mark (typically 0xEF 0xBB 0xBF on Windows).
Now the byte order mark is handled appropriately.
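Handling the marker amounts to stripping a UTF-8 byte order mark before parsing. A sketch (a hypothetical reader, not the actual dbisql code):

```python
import codecs

def read_sql_script(path):
    # Read a script file and strip a leading UTF-8 byte order mark
    # (b"\xef\xbb\xbf"), so the first statement parses cleanly.
    with open(path, "rb") as f:
        raw = f.read()
    if raw.startswith(codecs.BOM_UTF8):
        raw = raw[len(codecs.BOM_UTF8):]
    return raw.decode("utf-8")
```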
================(Build #1941 - Engineering Case #369977)================
The Interactive SQL utility dbisql could have crashed if its window was
closed while an INPUT or OUTPUT statement was executing, and it had been
launched from Sybase Central. Now, clicking the "File/Exit" menu item when
a statement is executing will cause the same "Are you sure?" prompt as would
occur when clicking the window's close button.
================(Build #1945 - Engineering Case #371930)================
It was possible, although rare, that the Interactive SQL utility dbisql
could have crashed when it was being shut down; this was more likely on
64-bit Unix systems. This has been fixed.
================(Build #1949 - Engineering Case #363428)================
When run on Unix systems, the dbisql command line option -onerror (Override
ON_ERROR option) was not being recognized. This has now been corrected.
================(Build #1950 - Engineering Case #371849)================
Executing long statements containing the CASE keyword could have caused
the Interactive SQL utility dbisql to appear to hang. The statement would
eventually execute, but could take an unreasonably long time. Not all
statements containing the CASE keyword were affected. This has been
corrected.
================(Build #1950 - Engineering Case #372485)================
When run on Linux or Solaris systems, the Interactive SQL utility's Import
Wizard could have reported an internal error (NullPointerException) if a
file was imported in a file format other than ASCII, and the "File type"
field was not set to the appropriate type. This has been fixed.
================(Build #1952 - Engineering Case #372897)================
If a database consisted of more dbspaces than just the SYSTEM dbspace,
and an attempt was made to unload the data from this database to another
database with the same structure using the Unload utility dbunload:
DBUNLOAD -d -ac <connection-parameters-to-new-database>
The Unload utility would have attempted to create a dbspace for the new
database and would have reported an error if the dbspace already existed.
Now, dbunload will not attempt to create dbspaces when reloading into another
database if the -d command-line option is used.
================(Build #1953 - Engineering Case #373253)================
When a table was removed from a query using the QueryEditor, the generated
query statement was not updated to reflect the change. This has been fixed.
Note that this same problem affected the QueryEditor in Sybase Central,
and has been fixed there as well.
================(Build #1954 - Engineering Case #373179)================
If two instances of the SQL preprocessor sqlpp were run at the same time,
the generated code could have been invalid. The concurrently running
preprocessors could attempt to use each other's temporary file, and
silently generate invalid code. This problem has been fixed by including
the process ID in the temporary file name.
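The fix above amounts to qualifying the temporary file name with the process ID, so two concurrent instances can never collide. A minimal sketch of the idea (the actual file-name pattern used by sqlpp is not documented here; the names below are illustrative):

```python
import os
import tempfile

def temp_file_name(base="sqlpp"):
    # Embed the process ID in the name so two concurrently running
    # processes never share the same temporary file.
    return os.path.join(tempfile.gettempdir(), "%s_%d.tmp" % (base, os.getpid()))
```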
================(Build #1955 - Engineering Case #373254)================
When the QueryEditor was opened with a query with SELECT FIRST *, the query
would have been changed to SELECT *. This has been fixed.
================(Build #1955 - Engineering Case #373255)================
If a column name ended in a blank space, the QueryEditor would have trimmed
the blank space from the end of the name, causing errors. This has been fixed.
================(Build #1957 - Engineering Case #373472)================
The Unload utility dbunload may not have properly reloaded the contents
of the sys.syssync table for databases that were involved in synchronization.
This would have caused dbmlsync to always check the last sync status with
the MobiLink synchronization server in the first synchronization after the
remote database was rebuilt. This problem is now fixed.
================(Build #1957 - Engineering Case #373477)================
Running the Unload utility dbunload with both the -ar (rebuild and replace
database) and -ek (specify encryption key for new database) command line
options, would have failed when attempting to connect to the new database
with the error "Unable to open database file "<file>" -- Missing database
encryption key." The last step of dbunload -ar is to rename the log file,
but the encryption key was not specified when it should have been. This is
now fixed; the encryption key is now specified correctly.
================(Build #1958 - Engineering Case #373740)================
The Interactive SQL utility dbisql could have become unresponsive if a statement
which returned many large result sets was executed. Interactive SQL would
have attempted to display all the result sets, and could eventually run out
of memory.
Two changes were made to address this problem:
1. The "Show multiple result sets" setting was being ignored. Now the setting
is respected, so by default, only the first result set is displayed.
2. Even if the "Show multiple result sets" setting is on, only the first
10 result sets will be displayed.
================(Build #1961 - Engineering Case #373897)================
When the Index Consultant was run from Interactive SQL, it would have reported
that any query could not be optimized due to optimizer/parser errors. The
problem was caused by an incorrect query result returned from the server
as a result of the changes made for Engineering Case 364372, which was fixed
by Engineering Case 374844. With a server with that fix, the Index Consultant
now operates normally.
================(Build #1961 - Engineering Case #374602)================
Interactive SQL could have failed with an internal error (NullPointerException)
when one of its windows was closed, and all of the following were true:
- more than one window had been opened by clicking the "Window/New Window"
menu item
- more than one window was closed in quick succession
This was more likely to have occurred on busy machines. It has now been
fixed.
================(Build #1961 - Engineering Case #374630)================
On Mac OS X, script files which did not have a file extension could not
be selected by the "Run Script" menu item; a ".sql" extension was always
added to the file name. This has been changed so that a ".sql" file
extension is no longer required.
================(Build #1962 - Engineering Case #374705)================
When the Unload utility dbunload was run with the command line option -ar
(rebuild and replace database), the old transaction log file may not have
been deleted after the database was successfully rebuilt, even if there was
no replication/synchronization involved in the original database. This problem
has been fixed.
================(Build #1966 - Engineering Case #372220)================
When running on NetWare systems, the ASA server would have stopped executing
scheduled events and automatic checkpoints, about 66 hours after the ASA
server was started. This has been fixed.
================(Build #1967 - Engineering Case #374975)================
Under some circumstances, the Index Consultant could claim a benefit for
the query or workload, even though no indexes were recommended. This has
been fixed.
Note that the same problem could have occurred in the Sybase Central Index
Consultant as well, although it is less likely. It has been fixed there as
well.
================(Build #1967 - Engineering Case #375341)================
The Index Consultant may have reported 'identifier ... too long' during analysis,
and failed to continue. This could have happened if the queries being analyzed
were over tables with long names and columns with long names. This has been
fixed.
================(Build #1972 - Engineering Case #377116)================
The reload.sql file created by the Reload utility did not double-quote the
login name for CREATE EXTERNLOGIN statements. This may have caused a syntax
error during reload. This has been fixed; the login name is now double-quoted.
================(Build #1980 - Engineering Case #378613)================
The following problems related to the Import Wizard have been fixed:
1. When importing ASCII or FIXED files into an existing table, the column
data types were always being displayed as "VARCHAR" on the last page. Now,
the actual column types are displayed.
2. When importing FIXED data in an existing table, if fewer column breaks
were placed so that fewer columns were defined than appeared in the actual
table, the preview would still have shown columns for all of the columns
in the database table. This was incorrect, and clicking on these extra columns
would have caused Interactive SQL to crash. These extra columns are no
longer displayed.
3. If the Import Wizard was closed by clicking the close box, it could
still attempt to import the data. Now, clicking the close box is synonymous
with clicking the "Cancel" button.
================(Build #1988 - Engineering Case #379862)================
The Unload utility option -ar ("rebuild and replace database") would have
deleted the old online transaction log files when done, for databases that
were used as Replication Server stable queue databases. This problem has
been fixed.
================(Build #1994 - Engineering Case #381327)================
Some utilities (e.g. dbltm.exe, ssremote.exe, dbremote.exe and ssqueue.exe)
would have crashed if they had been given a non-existent configuration file
on their command lines. The utilities now return the usage screen.
================(Build #2004 - Engineering Case #383942)================
When using the Lithuanian language resource library, the server usage screen failed
to show the command line options -cc, -cr, and -cv. This has been fixed.
================(Build #2009 - Engineering Case #361115)================
If Interactive SQL was run in batch mode with the output redirected to a
file, executing a statement that caused a warning from the server would have
appeared to hang. Interactive SQL was waiting for a key to be pressed to
continue execution after reporting the warning, but the prompt had been
suppressed. Now, warnings are displayed without prompting
when run in batch mode. Operation in windowed mode has not been changed.
================(Build #2010 - Engineering Case #384070)================
The INPUT statement was ignoring the setting of the "Default_isql_encoding"
option. This has now been fixed.
================(Build #2015 - Engineering Case #387038)================
When connecting using the "Connect" dialog, and specifying a DSN or FileDSN
that contained a LINKS connection parameter, the connection would have failed
if the "Search network for database servers" checkbox on the "Database page"
was checked. The problem was that the "Search network" checkbox causes "LINKS=ALL"
to be added to the end of the connection string, which overrides any LINKS
information contained in the DSN or FileDSN. This has been fixed so that
now when the "ODBC Data Source name" or the "ODBC Data Source file" radio
buttons are selected on the "Identification" page, the "Search network for
database servers" checkbox is automatically unchecked. The names of the server,
database, and database file are also reset. Users are free to modify these
fields if they need to override the contents of the data source.
================(Build #2015 - Engineering Case #387212)================
Attempting to use the FileDSN connection parameter to connect to a database
would have caused Interactive SQL to fail to connect, if the file was not
in the current directory and the file name contained no path information.
The error message: "The ODBC data source file could not be found" would have
been displayed. This has been fixed.
Note, this problem would affect connecting in all of the graphical administration
tools.
A workaround would be to use a qualified name for the DSN file -- either
relative or absolute.
================(Build #2026 - Engineering Case #388837)================
When building a remote database from a reload file generated by extracting
from a consolidated database there could have been a failure. This would
only have occurred if there existed a publication on a subset of columns
in a table and there were also statistics on some columns in the table that
were not part of the publication. This has been fixed.
A workaround would have been to drop the statistics on the table or column
before extracting the user database.
================(Build #2027 - Engineering Case #391577)================
The global variables @@fetch_status, @@sqlstate, @@error, and @@rowcount
may have been set with incorrect values if a cursor's select statement contained
a user-defined function that itself used a cursor. This has been fixed.
================(Build #2030 - Engineering Case #386025)================
When attempting to rebuild a strongly encrypted database with the Unload
utility with the 'rebuild and replace database' command line option (-ar),
if the database was involved in replication or synchronization, the current
transaction log offset of the newly rebuilt database would not have been
set correctly. When rebuilding databases of this type, the Unload utility
will now assume that the encryption key specified with the -ek or -ep switch
is the encryption key for both the database being rebuilt, and the newly
rebuilt database. In addition, using the -ar option will now return an error
in the following situations, where previously the database would have been
rebuilt but the log offset would not have been set correctly:
1) The database is involved in replication or synchronization, but the encryption
key provided with the -ek or -ep switch is not a valid encryption key of
the database being rebuilt.
2) The database is involved in replication or synchronization, the database
being rebuilt is strongly encrypted, but the -ek or -ep switch was not provided
on the dbunload command line.
================(Build #2032 - Engineering Case #390132)================
If the MobiLink ASA client, the MobiLink synchronization server, or SQL Remote
for ASA, was started as a Windows service, the service could not have been
stopped by the Windows Service Manager or the Service Creation utility. This
has been fixed.
================(Build #2032 - Engineering Case #391888)================
If the "Show multiple result sets" option was turned on, result sets were
not editable. This has been corrected so that result sets are now editable
with this option on.
Note that editability is still subject to the requirements that:
- All of the columns are from the same table
- If the table has a primary key, all of the primary key columns must have
been selected
- There must not be any Java columns selected
================(Build #2037 - Engineering Case #394136)================
If a table had been created with the PCTFREE clause (table page percent free),
the unload or reload of that table would not have used this PCTFREE in the
CREATE TABLE statement for the new database. This has been fixed.
================(Build #2040 - Engineering Case #395667)================
When attempting to export multiple result sets at the same time, Interactive
SQL could have failed with an internal error, or the resulting files could
have been empty. This has now been fixed.
This problem did not occur if the "Show multiple result sets" option was turned
OFF (which is the default), nor did the problem occur if there was only one
result set to export.
A crash was observed if the result sets were exported using the FIXED file
format. Using other file formats typically resulted in the first result set
being exported correctly, while the files for the subsequent result sets
would be empty.
================(Build #1815 - Engineering Case #343580)================
The SetMessageListener method could have been used to register a message
listener delegate function only if the queuename syntax was used. The agentid\queuename
syntax is now supported.
================(Build #1816 - Engineering Case #342990)================
If two tables had primary keys that included a signed bigint column and dbmlsync
scanned operations in a single run on these two tables, where the values
for the primary key were identical, it was then possible, (although rare),
to lose or mangle operations. When this situation occurred, the two operations
would have been interpreted as occurring on the same table, instead of different
tables, thus causing the two operations to be merged into a single operation.
This has now been fixed.
================(Build #1817 - Engineering Case #339940)================
The MobiLink ASA client dbmlsync and SQL Remote may have deleted old transaction
log files that were needed for synchronization. Now, instead of deleting
the old transaction log files, a warning will be issued if the database truncation
offset is greater than the minimum value of progress offsets from SYSSYNC
or the confirm_sent from SYSREMOTEUSER. This warning message will have a
prefix of "I." for 802 and "W." for all other versions.
================(Build #1817 - Engineering Case #347910)================
When run on Windows CE platforms, dbmlsync would have ignored the trusted_certificates
synchronization parameter used to specify trusted root certificates for TLS
synchronization. This has been fixed.
================(Build #1824 - Engineering Case #347052)================
When the Dbmlsync Integration Component was placed on a form in PowerBuilder,
at design time all that was shown on the form would have been a gray rectangle
with the text "ATL Composite Control". This has been fixed; the ActiveX
component will now paint itself appropriately when used in PowerBuilder.
================(Build #1824 - Engineering Case #347053)================
Attempting to register the Dbmlsync Integration Component using PowerBuilder
would have caused a message-box to be displayed with the following message:
dbmlsynccomg.dll is not marked as supporting self-registration. However,
the function "DllRegisterServer" was found. Do you wish to attempt to register
this file as a control?
If 'yes' was selected to register the control, the registration would succeed
with no problems. This behaviour has been corrected; the message-box will
no longer appear.
================(Build #1824 - Engineering Case #347054)================
The visual Dbmlsync Integration Component would not have worked properly
if it was placed on a form in a PowerBuilder application, after the target
was saved and reloaded. After the reload the component would have appeared
on the form as a white rectangle and the "OLE Control Name" on the property
sheet for the component would have been blank. If the application was then
run, the component would have appeared as a white rectangle as well at runtime.
This has been corrected.
================(Build #1826 - Engineering Case #347495)================
The 'restartable download' value passed to the sp_hook_dbmlsync_end hook
was set to FALSE, when it should have been TRUE, for cases where an attempt
to resume a previously failed download failed. The problem occurred when
the attempt to resume the download failed before any more of the download
had been received from MobiLink, and in such a way that it was still possible
to resume the download in the future.
This has been fixed. The restartable download parameter should now always
be correct.
================(Build #1827 - Engineering Case #347635)================
The UploadRow event of the Dbmlsync Integration Component was not being fired
for update operations, unless verbosity was enabled that displayed uploaded
and downloaded rows (-vr). The event is now called correctly.
================(Build #1828 - Engineering Case #348071)================
The synchronization client Dbmlsync, would have used the wrong last download
timestamp for a synchronization, possibly resulting in a remote not getting
download data that it should have received when all of the following were
true:
- dbmlsync was being run so that the same subscription was synchronized
repeatedly within the same run. This could occur if scheduling options were
specified or if the restart parameter in the sp_hook_dbmlsync_end hook was
being used.
- hovering was disabled (i.e. the -p command line option was used).
- an error occurred late in the download phase of the synchronization. The
error would have had to occur in the sp_hook_download_end hook or later but
before the download was committed.
In this case, if the next synchronization of the subscription occurred without
shutting dbmlsync down, the last download time used would have indicated
that the previous failed synchronization had a successful download, even
though the download failed and was rolled back. This would have resulted
in the remote not receiving data it required if timestamp based downloads
were being used.
This has been fixed; the problem should not have affected systems using
snapshot downloads.
================(Build #1841 - Engineering Case #350374)================
If the Synchronization Client dbmlsync, was forced to do Referential Integrity
(RI) resolution after applying the download stream, and the parent table
had been empty, then every row in the child table would have been erased,
even those where there were NULL values in the referencing column(s) of the
child table. Now, these rows are no longer erased during RI resolution.
================(Build #1843 - Engineering Case #352573)================
MobiLink Synchronization clients using HTTP, will now recognize all "Set-Cookie"
and "Set-Cookie2" HTTP headers that they receive in server replies and will
send these cookies back up with all future HTTP requests. If the name of
a cookie matches an existing cookie, the client will replace its old value
with the new one. Cookies are not remembered between synchronizations; they
are discarded at the end of the synchronization.
This behaviour exists in both the HTTP and HTTPS synchronization streams
and is always enabled, but is not supported in Java UltraLite.
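The replace-by-name behaviour described above can be sketched as follows (a hypothetical Python illustration of the rule, not the actual client implementation):

```python
class SyncCookieJar:
    """Per-synchronization cookie store: a cookie whose name matches an
    existing cookie replaces the old value; the jar is discarded (not
    persisted) when the synchronization ends."""

    def __init__(self):
        self._cookies = {}

    def set_cookie(self, name, value):
        # Same name -> the old value is replaced with the new one.
        self._cookies[name] = value

    def request_header(self):
        # Cookies sent back with every subsequent HTTP request.
        return "; ".join("%s=%s" % kv for kv in sorted(self._cookies.items()))
```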
================(Build #1843 - Engineering Case #353191)================
Custom HTTP headers can now be specified for HTTP clients with a new "custom_header"
synchronization parameter. The client will include these headers with every
HTTP request it sends. The form for this parameter is:
custom_header=<header>
where HTTP headers typically take the form
<header name>: <header value>
To specify multiple custom HTTP headers, use "custom_header" multiple times.
Note, "custom_header" is supported for both the HTTP and HTTPS synchronization
streams, but is not supported in Java UltraLite.
================(Build #1843 - Engineering Case #353193)================
Custom HTTP cookies can now be specified for HTTP clients with a new "set_cookie"
synchronization parameter. The client will send these cookies with every
HTTP request it sends. The form for this parameter is:
set_cookie=<cookie name>=<cookie value> [, <2nd cookie name>=<2nd cookie
value>, ... ]
Spaces are permitted between tokens.
The set_cookie parameter can be specified more than once in a synchronization
parameters string.
set_cookie is supported for both the HTTP and HTTPS synchronization streams,
but is not supported in Java UltraLite.
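The parameter form above (name=value pairs separated by commas, with spaces permitted between tokens) could be parsed as in this sketch (a hypothetical helper, not the client's actual parser):

```python
def parse_set_cookie(value):
    """Parse a set_cookie parameter value such as
    'name1 = v1 , name2 = v2' into (name, value) pairs."""
    pairs = []
    for item in value.split(","):
        # Split each item at the first '=' and strip the permitted spaces.
        name, _, val = item.partition("=")
        pairs.append((name.strip(), val.strip()))
    return pairs
```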
================(Build #1845 - Engineering Case #351614)================
When the Synchronization client dbmlsync was running in scheduling mode,
and the command line option, -p (disable logscan polling) was used, it may
have taken longer and longer to complete each synchronization cycle. This
would have occurred even if the upload and download data remained the same
for each cycle, as the transaction log would have grown. If the command
line option -x (rename and restart the transaction log) was also used, and
the database option Delete_old_logs was set to ON in the remote database,
dbmlsync could have complained with "Missing transaction log(s) before file ...".
This problem is now fixed.
================(Build #1845 - Engineering Case #351717)================
The Synchronization Client dbmlsync, did not handle resumable downloads correctly.
If a failure occurred after more than 64 K of the download had been
received, and during the resume attempt more data was received before
another error occurred, dbmlsync would not have saved the data it received
during the resume attempt. As a result, it would have had to download that
data again if the download was resumed again. This meant that after 64 K
had been received, restarting a download was essentially an all or nothing
process: the download would either have completed successfully, or any
data received would have been discarded, leaving the remote in the same
state it was in before the resume attempt.
This behaviour is now fixed.
================(Build #1855 - Engineering Case #353465)================
When a "rollback to savepoint" statement was executed on a remote database,
the table operations prior to the execution of the "savepoint" statement may
not have been uploaded. This would have occurred if the rollback operations
had:
- table operations that were not involved in synchronization, or
- global temporary table operations, or
- trigger actions that modified synchronization tables, but trigger actions
were not requested to be uploaded, or
- trigger actions that modified tables that were not included in any synchronization
publications; or
- rows that were deleted after "stop synchronization delete".
This problem is now fixed.
================(Build #1866 - Engineering Case #346769)================
If the Synchronization client dbmlsync was set to synchronize on a schedule,
and the MobiLink server was shut down when the upload stream was being sent
and then started up again, dbmlsync could have continuously failed to
synchronize until it was shut down and restarted. The MobiLink server
would simply have reported "Synchronization Failed", with no further
information. This has now been fixed.
================(Build #1876 - Engineering Case #358388)================
When the ADDRESS specified for the ASA client dbmlsync to connect to a MobiLink
server contained the 'security' parameter, and the cipher specified was not
recognized, dbmlsync would have reported an error indicating that it could
not load a DLL (usually dbsock?.dll or dbhttp?.dll). A more meaningful error
message is now displayed.
================(Build #1879 - Engineering Case #358939)================
When the graphical version of the Dbmlsync Integration Component was used
on Windows CE, calls to the function IOleObject::GetMiscStatus would have
returned that it was not implemented (E_NOTIMPL). This function has now
been implemented for Windows CE.
================(Build #1885 - Engineering Case #359877)================
If the Synchronization Client's (dbmlsync) extended option Downloadbuffersize
was not set to 0 (the default value is 1 MB), and a communication error
occurred while receiving the download stream from the MobiLink server, the
synchronization would have failed, but no error message would have been reported
and the sp_hook_dbmlsync_download_com_error hook would not have been called.
An error is now issued and the hook is now called in this case.
================(Build #1887 - Engineering Case #360258)================
The total accumulated delay caused by the sp_hook_dbmlsync_delay hook was
being calculated incorrectly when a synchronization was restarted using the
sp_hook_dbmlsync_end hook. As a result the sp_hook_dbmlsync_delay hook might
not be called or the delay produced by the hook might be shorter than specified.
Following are the specific conditions required to see this problem:
- Both an sp_hook_dbmlsync_end hook and an sp_hook_dbmlsync_delay hook have
been coded.
- During a synchronization the delay hook was called one or more times.
Those calls resulted in a total delay D and the maximum accumulated delay
parameter was set to some value M.
- When the end hook is called it sets the 'restart' parameter to 'sync'
or 'true' to restart the synchronization.
When the above conditions are met, the sum of delays caused by the delay
hook was not being reset before the synchronization was restarted. As a
result, on the restarted synchronization, the delay hook would not be called
if D >= M. If D < M then the maximum delay that would be allowed before
the synchronization occurred would be M - D when it should have been M.
The sum of delays is now reset before the synchronization is restarted so
that the delay hook will have the same behavior on a restarted synchronization
as it does on a first synchronization.
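The accounting error can be illustrated with a small sketch (hypothetical names, not the actual dbmlsync source): before the fix, the accumulated delay D was carried into the restarted synchronization, leaving only M - D of the delay budget; resetting the sum restores the full maximum M.

```python
def remaining_delay(max_delay, accumulated, reset_on_restart):
    # Delay budget available to sp_hook_dbmlsync_delay on a restarted
    # synchronization. With the fix, the accumulated sum is reset first.
    if reset_on_restart:          # fixed behaviour
        accumulated = 0
    # If D >= M the hook was not called at all (budget exhausted).
    return max(max_delay - accumulated, 0)
```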
================(Build #1891 - Engineering Case #361088)================
When a publication contained a table with a foreign key or a chain of foreign
keys (e.g. t1 has an FK to t2, t2 has an FK to t3, etc.) to a table listed
in the EXCLUDEOBJECT system table, and the table listed in the EXCLUDEOBJECT
table was not included in the publication being synchronized, then the
upload would fail with the message "Upload aborted at offset...".
The message now includes a description of why the upload was aborted.
================(Build #1906 - Engineering Case #364243)================
Support has been added to MobiLink clients for Basic HTTP authentication
to third-party HTTP proxies and servers, as described in RFC 2617. To authenticate
to web servers and gateways, the userid and password is specified using the
new "http_userid" and "http_password" synchronization parameters. To authenticate
to proxy servers, use "http_proxy_userid" and "http_proxy_password".
If a third party HTTP server or proxy requires authentication, but no credentials
are supplied, or if the supplied credentials are rejected, the synchronization
attempt will fail and an appropriate error will be reported.
With Basic authentication, passwords are included in the HTTP headers in
cleartext, so use HTTPS to encrypt the headers and protect the passwords.
Note that Digest authentication, which is also described in RFC 2617, is
not currently supported.
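A Basic credentials header as defined in RFC 2617 is simply the base64 encoding of "userid:password", which is why HTTPS is recommended alongside it; a minimal sketch:

```python
import base64

def basic_auth_header(userid, password):
    # RFC 2617 Basic: base64("userid:password"), sent effectively in
    # cleartext, so HTTPS should be used to protect the password.
    token = base64.b64encode(("%s:%s" % (userid, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")
```

For example, the userid/password pair from the RFC's own example ("Aladdin" / "open sesame") yields "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==".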
================(Build #1906 - Engineering Case #365154)================
MobiLink clients now support Digest HTTP authentication to third-party HTTP
proxies and servers as described in RFC 2617, in addition to Basic authentication.
The same synchronization parameters are used to supply the userid and password
as for Basic authentication. The HTTP server or proxy configuration determines
whether Basic or Digest is used in the "WWW-Authenticate" or "Proxy-Authenticate"
HTTP header they send to the client.
The difference between Basic and Digest is that Digest employs various security
mechanisms to protect the password and to protect from common types of attacks,
while with Basic, the password is sent in clear text. However, HTTPS provides
far better security than Digest, so it is recommended that Basic HTTP authentication
be used with HTTPS for full security. The only situation where Digest authentication
is really useful is when connecting through a client-side proxy which doesn't
support HTTPS directly, but requires authentication.
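The Digest response defined in RFC 2617 (qop=auth) combines MD5 hashes of the credentials, nonces, and request line, which is what protects the password from simple eavesdropping; a minimal sketch using the RFC's own example values:

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    # RFC 2617, qop=auth:
    #   HA1      = MD5(user:realm:password)
    #   HA2      = MD5(method:uri)
    #   response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
    ha1 = md5_hex("%s:%s:%s" % (user, realm, password))
    ha2 = md5_hex("%s:%s" % (method, uri))
    return md5_hex("%s:%s:%s:%s:%s:%s" % (ha1, nonce, nc, cnonce, qop, ha2))
```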
================(Build #1908 - Engineering Case #364347)================
There are three output files produced by the QAnywhere Agent if the -verbose
command line option is specified: qaagent.log, qanycli.mls and dblsn.out.
These files contain the output from the qaagent.exe process, the dbmlsync.exe
process and dblsn.exe process respectively, and are created in the "start-in"
directory of the QAnywhere Agent process when launched on a Windows platform,
or the file system root, if the agent was launched on a Windows CE platform.
This could have caused problems if it became necessary to output to another
location, especially on Windows CE, if the root file system was on a media
with very limited space, and the output files needed to be redirected to
other media where there was more room.
The fix is the addition of four new command line options for the QAnywhere
Client agent (qaagent):
-o <file> log output messages to file. Ex. -o c:\tmp\qaa.out outputs to
files c:\tmp\qaa.out, c:\tmp\qaa_sync.out and c:\tmp\qaa_lsn.out
-ot <file> truncate file and log output messages to it. Ex. -ot c:\tmp\qaa.out
truncates and outputs to files c:\tmp\qaa.out, c:\tmp\qaa_sync.out and c:\tmp\qaa_lsn.out
-os <size> rename the log file to YYMMDDxx.<ext> and start a new file with the
original name when the log reaches <size> (minimum 10K, cannot be used with -on).
The value of <ext> depends on the log file being renamed.
-on <size> append .old to the log file name and start a new file with the
original name when the log reaches <size> (minimum 10K, cannot be used with -os).
The QAnywhere Client will log its output to the specified file, the MobiLink
client dbmlsync will log its output to the file suffixed by "_sync", and the
MobiLink Listener dblsn will log its output to the file suffixed by "_lsn".
For example, if "-o c:\tmp\mylog.out" is specified, then qaagent will log
to "c:\tmp\mylog.out", dbmlsync will log to "c:\tmp\mylog_sync.out" and dblsn
will log to "c:\tmp\mylog_lsn.out".
The extension of the files created when using the -os option depends on
the log being renamed. If the QAnywhere Agent log is being renamed, the extension
is .qal. If the dbmlsync log is being renamed the extension is .dbs. If
the dblsn log is being renamed the extension is .nrl.
If -verbose is specified without specifying -ot or -o, then the original
default logs are used (qaagent.log, qanycli.mls and dblsn.log), and the -os
and -on switches are ignored.
================(Build #1924 - Engineering Case #367528)================
Autodial was not responding, even when the network_name parameter was specified.
Autodial functionality has now been restored.
================(Build #1935 - Engineering Case #369238)================
If the schema of a table outside of a publication was altered (for example,
table "t1"), and a synchronizing table existed, whose name started with this
table's name (for example, table "t1_synch"), that had outstanding changes
to synchronize, then dbmlsync would incorrectly report that the schema of
the synchronizing table had been altered outside of synchronization. This
has now been fixed.
================(Build #1940 - Engineering Case #370609)================
The MobiLink client dbmlsync would have complained of an "invalid option
..." if it was started with a configuration file, @filename, and the file
contained any extended options specified as
-e opt1="val1";opt2=val2;...
even if all the extended options were valid. This problem has now been fixed.
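For illustration, a configuration file of the kind that triggered this error
might have looked like the following sketch (the file name, connection string,
and extended option values are all hypothetical):

```
# dbmlsync configuration file, invoked as: dbmlsync @sync.cfg
# (connection string and option values below are hypothetical)
-c "DSN=remote_db;UID=dba;PWD=sql"
-e adr='host=ml.example.com';sv=v1
```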
================(Build #1945 - Engineering Case #371856)================
When doing incremental uploads, dbmlsync was making several errors when estimating
the size of the upload. As a result the size of the upload increments were
sometimes very different from the size requested using the increment extended
option. The following sources of estimation error have been fixed:
1) Blobs were not being included in the estimate of the upload size.
2) When synchronizing a subscription for a user with more than one subscription,
some operations were being included in the estimate that were not being uploaded.
================(Build #1948 - Engineering Case #372085)================
When the Synchronization Client dbmlsync crashed, it would have left behind
a temporary file that would never have been deleted. Now, dbmlsync checks
at startup for any temporary files left over from previous runs and deletes
them if they exist.
================(Build #1959 - Engineering Case #374070)================
The ASA client dbmlsync could have crashed, either while creating the upload,
or at the end of the synchronization. This was more likely to occur with
very large uploads. This behaviour has been corrected.
================(Build #1961 - Engineering Case #374490)================
If the environment variables TMP or TEMP were not set, the MobiLink client
dbmlsync, would have given the error:
"Unable to open temporary file "MLSY\xxxx" -- No such file or directory"
and refused to start. This problem is now fixed.
================(Build #1961 - Engineering Case #374595)================
After performing a large upload, there was a long pause at the end of synchronization.
This pause would have occurred after the message "Disconnecting from MobiLink
server" was printed in the log, but before the message "Synchronization Completed".
This has been fixed, although the pause may still occur during a synchronization
when the MobiLink client expects to subsequently perform another synchronization
before it shuts down (for example, because a schedule option has been specified,
more than one -n switch has been specified on the command line, or the
restart option in the sp_hook_dbmlsync_end hook has been used). In these
cases, the pause at the end of the earlier synchronization is offset by a
time saving in the following synchronization.
================(Build #1975 - Engineering Case #377036)================
If the MobiLink client was run with the -vn option ('show upload/download
row counts'), but without the -vr option ('show upload/download row values')
or the -v+ option ('show all messages'), then the upload row counts reported
for each table would have been cumulative; that is, each row count would include
not just the rows uploaded from that table, but also those uploaded for all
previous tables. This has been fixed.
================(Build #1975 - Engineering Case #377905)================
The documentation describes a MobiLink client hook called sp_hook_dbmlsync_process_return_code.
The documentation is incorrect, in that the name of this hook is actually
sp_hook_dbmlsync_process_exit_code. Otherwise, the documentation on this hook
is correct.
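As an illustration only, a hook with this name could be defined in the remote
database along the following lines. The #hook_dict temporary table is the
standard dbmlsync hook parameter mechanism, but the 'exit code' entry name
and the audit table used here are assumptions:

```sql
-- Sketch only: assumes the usual dbmlsync hook convention, where parameters
-- are passed in the #hook_dict temporary table (columns "name" and "value").
-- The entry name 'exit code' and the sync_audit table are assumptions.
CREATE PROCEDURE sp_hook_dbmlsync_process_exit_code()
BEGIN
    DECLARE rc VARCHAR(128);
    SELECT value INTO rc FROM #hook_dict WHERE name = 'exit code';
    -- Record the reported exit code in a hypothetical audit table.
    INSERT INTO sync_audit( logged_at, exit_code )
        VALUES ( CURRENT TIMESTAMP, rc );
END;
```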
================(Build #1977 - Engineering Case #377878)================
The MobiLink client could have crashed, or behaved erratically, when
the value of the 'charset' database property for the remote database was
longer than 29 bytes. In particular, this was the case for databases with
the EUC_JAPAN collation, although there may be other such collations. This
issue has been fixed.
================(Build #1978 - Engineering Case #377879)================
On Windows CE, when synchronizing a remote database with a double-byte character
set, the MobiLink client would issue the following warning if the database's
character set was not identical to the operating system's character set:
"Unable to convert the string ... from the system collation to the database
collation".
The MobiLink client will no longer issue this warning if the operating system
character set and the database character set are very similar (for example
cp932 and sjis).
================(Build #1980 - Engineering Case #378115)================
The MobiLink ASA client may not have shut down gracefully when it was running
as a Windows service. This may have caused resources such as temporary files,
not to have been cleaned up before shutting down. This problem has now been
fixed.
Note that this problem also applied to the MobiLink synchronization server
and SQL Remote for ASA, and has been fixed there as well.
================(Build #1998 - Engineering Case #382166)================
When the MobiLink client extended option TableOrder was specified, the synchronization
would have failed if the tables, or their owners, were specified in a different
case from the one in which they were defined in the database. This problem
occurred whether the database was case sensitive or not. The tables and owners
specified by this option are now always treated as case insensitive.
================(Build #1998 - Engineering Case #382167)================
If event hooks were defined, the MobiLink client would not have recognized
and executed them if their procedure name was not entered entirely in lower
case. The case of the procedure name is now ignored.
Note, a similar problem with the Extraction utility and SQL Remote has also
been fixed.
================(Build #2004 - Engineering Case #384141)================
When using the Database Tools interface to run the MobiLink synchronization
client, if the a_sync_db version field was set to a value that was 8000 or
greater, and less than the version supported by the dbtools library, then
the upload_defs field would have been ignored. If the database had more than
one subscription, then this would have caused the synchronization to report
the following error message:
Multiple synchronization subscriptions found in the database. Please specify
a publication and/or MobiLink user on the command line.
This behaviour could also be seen if the MobiLink client was used with a
later version of the dbtools library. This has been corrected.
================(Build #2006 - Engineering Case #372331)================
During synchronization it was possible for a Windows CE device to go into
sleep mode. Now the MobiLink client makes system calls to ensure that this
does not happen. It is still possible for a device to go into sleep mode
during a delay caused by the sp_hook_dbmlsync_delay hook or during the pause
between scheduled synchronizations.
================(Build #2007 - Engineering Case #385171)================
If an sp_hook_dbmlsync_logscan_begin hook was defined that modified a table
being synchronized, and the extended option Locktables was set to 'off',
then actions performed by the hook would not have been uploaded during the
current synchronization. Actions would have been uploaded correctly though
during the next synchronization. This has been changed so that any changes
made to synchronization tables by the sp_hook_dbmlsync_logscan_begin hook
will now be uploaded during the current synchronization regardless of the
setting of the Locktables option.
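As a hedged sketch of the scenario this fix addresses (the table and column
names below are hypothetical, and the table is assumed to belong to a
synchronization publication):

```sql
-- Sketch only: "t_status" is a hypothetical synchronized table.
-- With this fix, the UPDATE below is included in the current upload
-- even when the extended option Locktables is set to 'off'.
CREATE PROCEDURE sp_hook_dbmlsync_logscan_begin()
BEGIN
    UPDATE t_status
       SET last_sync_attempt = CURRENT TIMESTAMP
     WHERE station_id = 1;
END;
```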
================(Build #2007 - Engineering Case #385174)================
The DBMLSync Integration Component would previously have created and written
debug information to a file named dbmlsyncActiveX.log. This file will no
longer be created in released versions of the DBMLSync Integration Component.
================(Build #2009 - Engineering Case #385179)================
The MobiLink client may not have detected that a table had been altered,
and would have sent an invalid upload stream to the consolidated database.
The following operations in a log file demonstrate the problem, when the
client scanned the log during a single synchronization :
1) Data on table t1 is changed.
2) Table t1 is removed from the only publication it belongs to.
3) Data on table t1 is changed again.
4) Table t1 is altered.
5) Table t1 is added back to the publication it was removed from.
Now, the MobiLink client will report the error "Table 't1' has been altered
outside of synchronization at log offset X" when this situation arises.
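The five steps above might be reproduced with a statement sequence such as
the following sketch (the publication, column, and value names are
hypothetical):

```sql
-- 1) Data on table t1 is changed.
INSERT INTO t1( id ) VALUES ( 1 );
-- 2) Table t1 is removed from the only publication it belongs to.
ALTER PUBLICATION pub1 DROP TABLE t1;
-- 3) Data on table t1 is changed again.
INSERT INTO t1( id ) VALUES ( 2 );
-- 4) Table t1 is altered.
ALTER TABLE t1 ADD extra_col INTEGER;
-- 5) Table t1 is added back to the publication.
ALTER PUBLICATION pub1 ADD TABLE t1;
-- A subsequent dbmlsync run that scans all of the above in a single
-- synchronization now reports the "altered outside of synchronization" error.
```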
================(Build #2033 - Engineering Case #393970)================
Reloading state tracking information into an ASA database used for MobiLink
synchronization would have caused subsequent restartable downloads, or file-based
downloads, to fail. Five new columns were added to sys.syssync in version
9.0.0, but only two of them were being unloaded. This has been fixed so that
the missing columns are now unloaded.
================(Build #2034 - Engineering Case #393633)================
On Windows CE devices, the MobiLink ASA client, (as well as SQL Remote for
ASA) may not have renamed the output file specified by the -o <filename>
option, even when the size of the output file exceeded the size specified
by the -os <size> option. This would have occurred if the output file already
existed before the application started. This has been corrected.
================(Build #1752 - Engineering Case #345236)================
Microsoft Windows, for Asian (multi-byte) languages, allows a user to define
their own characters, including the glyph that is displayed. As part of defining
a character, the user picks an unused code point in the character set. MobiLink
and ASA were not aware of this new code point, and character set conversion
would substitute the "invalid character" for any user-defined characters.
Now, the mappings for user-defined characters in cp950 (Traditional Chinese)
are included.
================(Build #1928 - Engineering Case #366503)================
When using .NET scripting, character fields moving to or from the database
may have been corrupted if they contained non-ASCII characters. These fields
are now bound as Unicode. Existing user code can remain unchanged and
should continue to work.
================(Build #1875 - Engineering Case #358138)================
Connecting to a database in the MobiLink plug-in by opening the Connect dialog
and specifying the connection information, would have caused the information
specified (excluding the password) to be saved in the user's .scUserPreferences
file under the default connection key "DefConn". The information was saved
so that the next time the Connect dialog was opened, it would contain the
previous connection information. For security reasons, this feature has been
removed. Now, this information is no longer implicitly saved in the user's
.scUserPreferences file. Instead, it is persisted in memory for the current
Sybase Central session only. Note that the user can still use connection
profiles to explicitly persist connection information in the
.scUserPreferences file.
This change also fixes a problem which could have caused the password to
be incorrectly persisted as "***".
================(Build #1920 - Engineering Case #366765)================
Attempting to delete a user with one or more subscriptions, when connected
to an Oracle database using the "iAnywhere Solutions 9 - Oracle Wire Protocol"
ODBC driver, would have failed to delete the user. The problem was that the
rows in ml_subscription were not being deleted before the row in ml_user.
This has now been fixed.
================(Build #1986 - Engineering Case #375586)================
The following Connection events were missing from the combo box in the Connection
Script wizard: authenticate_parameters, begin_publication, end_publication,
and modify_error_message. This meant that it was not possible to use Sybase
Central to create scripts for these events. These events have now been added.
================(Build #2024 - Engineering Case #389538)================
If an attempt to connect to a database with the MobiLink plug-in failed,
then Sybase Central would have crashed. The plug-in was assuming that if
a SQLException was thrown from JDBCDrivers.connect(), then the value returned
from SQLException.getMessage() was always non-null, which was not always
the case. This has been corrected.
================(Build #1874 - Engineering Case #357556)================
Trying to save a MobiLink Monitor file on a Macintosh, without entering the
extension, would have caused an invalid file format error. This has been
fixed.
================(Build #1896 - Engineering Case #362053)================
The MobiLink Monitor was reporting the wrong number of uploaded bytes. The
Monitor would most often have reported the actual value plus one, but it
could also have reported even larger values. This has been corrected.
================(Build #1819 - Engineering Case #345409)================
If the MobiLink Listener could not be found (i.e. not installed or not in the
path), the QAnywhere Client qaagent would have output an error message
when it was launched, but would then have hung rather than terminating. A
work-around is to use the qaagent command line option -push_notifications
"disabled", in which case qaagent will not attempt to invoke dblsn. This
problem did not occur on supported CE platforms. This hang will now not occur
when the QAnywhere Client cannot find the Listener.
================(Build #1819 - Engineering Case #345795)================
If a client-side transmission rule contained a syntax error, an error report
would have been written to the QAnywhere log file and the rule would be ignored.
Now, the behaviour has been changed so that if errors are detected, they
are reported and the QAnywhere client fails to start.
================(Build #1824 - Engineering Case #347060)================
An incorrect UTF8 to Unicode conversion that caused a memory overwrite, (specific
to Windows CE), and thread synchronization issues, may have caused the QAnywhere
Client to crash. Both of these problems have now been fixed.
================(Build #1826 - Engineering Case #347324)================
If the readText method of QATextMessage was called to read n Unicode characters,
it would have returned a smaller number (possibly n/2) of the characters
read. To read the remaining characters, it was necessary to repeatedly call
the method, and it would have returned n/4, n/8, etc. characters on each
subsequent call. The exact amount returned on each call would depend on the
length of the UTF-8 representation of the data.
This has been fixed. The function will now return n characters, unless there
are fewer characters available to return.
================(Build #1827 - Engineering Case #347610)================
The readText method of QATextMessage and the readBinary method of QABinaryMessage
could have returned -1 (indicating no more content) before the end of the
message content had been reached. In this circumstance, the getLastError
method of the receiving QAManager could have indicated no error.
Now, getLastError will return an error code indicating an unexpected end
of message has been reached.
================(Build #1828 - Engineering Case #348252)================
The QAnywhere client library failed to free a small amount of memory whenever
a message was sent or received. The amount leaked could have been significant
for a long-running application. This has now been fixed.
================(Build #1830 - Engineering Case #348659)================
The QAnywhere agent can now take a list of MobiLink server connection stream
parameters, rather than just one. The list is supplied by specifying the
-x command line option multiple times, one for each connection stream parameter,
(a maximum of 32 failover servers may be specified).
For example:
qaagent.exe -x tcpip(host=abc.com) -x tcpip(host=def.com) -x tcpip(host=xyz.com)
The specified MobiLink servers are used to implement a failover scheme,
such that if qaagent cannot connect to the first mentioned MobiLink server,
qaagent will attempt to connect to another "failover" MobiLink server in
the order they appear in the list. Qaagent will only connect to a particular
MobiLink server, if it fails to connect to all the other MobiLink servers
appearing previously in the list. This list traversal will occur every time
qaagent attempts to synchronize messages with the server.
Qaagent has a "listener" component (using dblsn) that is used to receive
indications from the MobiLink server that messages are available at the server
for synchronization. The listener component only uses the first set of connection
stream parameters specified using -x; the listener component does not fail
over. This means that qaagent can never receive push notifications from failover
servers. If the failover servers are indeed different servers, then qaagent
should be run with the -ra command line option. When -ra is used, qaagent is
allowed to synchronize with a server even if it has synchronized with another
server; normally this is restricted behaviour.
Note:
The failover capability of QAnywhere Agent does not work in the following
situations:
- the agent is running on a cradled Windows CE device,
- the agent is configured to use TCP/IP for communication with the primary
MobiLink server,
- the device is using ActiveSync for TCP/IP connections.
Because of the way TCP/IP is implemented for ActiveSync, the QAnywhere Agent
believes that the TCP/IP connection to the primary MobiLink server always
succeeds, even when the server is unavailable. This results in the failover
MobiLink server never being used.
Failover works correctly in the case of a cradled CE device with an ethernet
connection, when ActiveSync is not involved.
================(Build #1832 - Engineering Case #349244)================
Setting the value of a string property to NULL with QAMessage.setStringProperty
would have resulted in a null pointer dereference. This has been fixed; NULL
string property values are now supported by the QAnywhere client library.
================(Build #1835 - Engineering Case #349471)================
QAnywhere C++ client library APIs, that returned a string to a buffer given
a buffer size, could have overwritten the buffer by 1 with the null terminator.
This has been changed so that buffer size is now the maximum length of the
string plus 1, to include the null terminator.
================(Build #1836 - Engineering Case #349588)================
The list of property names returned by QAMessage::getPropertyNames could
have contained a duplicate name. This would have happened if, for example,
setStringProperty( "p1", "x" ) was called, followed by setIntProperty( "p1",
3 ). This has been fixed. Now, the second call to set the value of property
"p1" will override the first call.
================(Build #1838 - Engineering Case #350084)================
The QAnywhere client library allowed messages to be queued into the message
store until the disk free space was exhausted. When this occurs on a device
with limited resources, such as a Pocket PC, applications are terminated
by the operating system. Furthermore, the QAnywhere Agent could no longer
synchronize messages with the server at this point, because the synchronization
process requires disk and memory resources. This has now been fixed, so that
the message store does not grow to an unmanageable size. If it is deemed
that the message store is too large, QAManagerBase.putMessage() returns false
and getLastError() returns QAError::COMMON_MSG_STORE_TOO_LARGE.
================(Build #1838 - Engineering Case #351135)================
The QAnywhere .NET client library for the .NET Compact Framework now supports
Message Listeners.
================(Build #1844 - Engineering Case #351746)================
The QAnywhere Agent qaagent.exe would sometimes have failed to stop the
database server on shutdown. This behaviour would most likely have occurred
when the agent was shut down while the MobiLink client dbmlsync was in the
middle of a synchronization and had an open database connection. This problem
only occurred on Windows CE.
The following error messages would have appeared in the qaagent logfile:
InternalError: There are still active database connections
QAnywhere Agent shutdown error -109 - shutdown dbeng9 manually
This has now been fixed.
================(Build #1845 - Engineering Case #351644)================
The QAnywhere Agent qaagent.exe now supports the -q (Quiet mode) command
line option. With this option, the main window is initially minimized to
the system tray on Windows NT/2000/XP, and is completely hidden on Windows CE.
This option also starts the database server with the -qi (do not display
database server tray icon or screen) command line option. Because the agent
is completely hidden on Windows CE, when started with -q, there is now a
Stop utility qastop.exe, that shuts it down gracefully.
The motivation for this change was related to running the QAnywhere Agent
on Windows CE devices, in that there is a limitation that there may be at
most 32 processes running at one time. When this limit is reached, and a
new application is launched, Windows CE will send a WM_CLOSE message to the
application that appears in the list of running applications given by Settings/System/Memory/Running
Programs. When the QAnywhere Agent is launched without -q, it appears in
the list of running applications, and is eligible to receive a WM_CLOSE when
another application is launched. Since the QAnywhere Agent is intended to behave
like a service when run on Windows CE devices, it is recommended that it
be launched with -q.
================(Build #1848 - Engineering Case #351705)================
It was possible for the QAnywhere client library to have generated duplicate
message IDs, although this was unlikely for distributed systems of fewer than
10,000 clients with fewer than 100,000,000 total messages sent. However, if
the number of clients were to have increased by an order of magnitude, or if
the total number of messages were to have increased by several orders of
magnitude, there would have been a significant chance of duplicate message
IDs being generated. This problem is now fixed.
================(Build #1848 - Engineering Case #352169)================
Large messages synchronized over a flaky communications link may have failed
many times before successfully downloading. This was because messages had
to have been synchronized in a single session, requiring a connection with
the server for the duration. Now, when only part of a message is synchronized
before the connection breaks down, the next time a connection is made, only
unsynchronized parts of the message will be synchronized. Hence, eventually,
the entire message will be synchronized and made available to the client.
The partially synchronized data is stored in a temporary file, which is deleted
when the entire synchronization has been downloaded and applied. If new messages
become available for a client, or are generated by the client, in the time
between the last communication failure and the next synchronization, then
any partially synchronized data is discarded.
================(Build #1849 - Engineering Case #352176)================
The QAnywhere Agent qaagent sends special system messages, containing information
about the network status of the client, to a "system" queue. These messages
were not being deleted from the client's message repository, even after they
had expired. With this change, they are now deleted.
================(Build #1850 - Engineering Case #352576)================
When run on Windows CE, the QAnywhere Agent took the path specified by the
-dbfile command line option to be relative to the directory from which the
agent was launched. This has been changed; the path specified by -dbfile is
now taken to be an absolute path. The default is unchanged: the directory
from which the agent was launched.
The meaning of the -dbfile option on Windows NT, 2000, and XP is unchanged
(the standard meaning of a file path for these operating systems), and the
default is also unchanged (the current working directory).
================(Build #1850 - Engineering Case #352582)================
Deleting and recreating the QAnywhere client-side message store-and-forward
database would then have prevented synchronizing messages to or from the
QAnywhere server. This was due to the QAnywhere server and client doing a
sanity check on the store-and-forward database. Since the original database
was deleted, the server was seeing the new database as incorrect. A work
around for this problem, is to use the -rb command line option on qaagent,
which indicates to the QAnywhere server that the store-and-forward database
has been re-created.
Another symptom of this problem occurred when failover had taken place and
the new QAnywhere server had a different record of the status of the client's
store-and-forward database. To work around this problem, use both the -rb
and the -ra qaagent command line options, which together tell the QAnywhere
server that the store-and-forward database is fine to use.
Now, the qaagent -ra and -rb command line options are implicit.
================(Build #1850 - Engineering Case #352626)================
The QAManagerFactory methods createQATransactionalManager(iniFile) and createQAManager(iniFile)
would not have failed if there was an error reading the initialization file.
This has been fixed, now these methods return NULL if the file cannot be
opened or contains an invalid parameter specification. In order to provide
a way for the application developer to discover the cause of an error in
these methods, two new methods, getLastError() and getLastErrorMsg() have
been added to the C++ QAManagerFactory class, and LastError, LastErrorMessage
were added to the C# QAManagerFactory class.
================(Build #1853 - Engineering Case #353151)================
The QAnywhere Agent would have always started its own database server at
startup for its message store, even if a server was already running with
a database name specified by the -dbname command line option. The QAnywhere
Agent has been fixed to behave as documented. A new command line option,
-dbeng, has been added to the QAnywhere Agent, whose default value is the
value of -dbname. Now, when the QAnywhere Agent starts, it attempts to connect
to the database with name given by the -dbname option, in the server with
name given by the -dbeng option. If the connection succeeds, the agent uses
this database as its message store, and does not shut down the database server
when it shuts down. If the connection fails, the agent launches a database
server with the given server name and database name, and uses that database
as its message store. In the latter case, the agent shuts down the database
server when it shuts down.
Note:
If the QAnywhere Agent is connecting to an already running database server,
rather than launching the database server itself, ensure that the database
used by the agent as the message store is for the exclusive use of QAnywhere
messaging applications. This is because the QAnywhere Agent will perform operations
on the transaction log of this database that assume that no other applications
are using the database. It is possible though to have other databases, unrelated
to QAnywhere messaging, running in the database server.
================(Build #1855 - Engineering Case #352561)================
The logfile for the QAnywhere client store-and-forward database would only
have been truncated when the QAnywhere Agent qaagent was shut down, and hence
would have contained some redundant messages already synchronized up to the
QAnywhere server. Now, the log file is truncated periodically during the
lifetime of a qaagent instance.
================(Build #1856 - Engineering Case #353681)================
If an agentid (specified using the command line option -agent_id) contained
non-standard characters, and the agent also had some associated transmission
rules (specified using -policy <rules-file>), then the QAnywhere Agent would
not have started. The non-standard characters would have included any non-alphanumeric
characters with the exception of '_', '@', '#', and '$'. Now, only the following
characters are not permitted in an agentid:
- Double quotes
- Control characters (any character less than 0x20)
- Double backslashes
A single backslash can be used in an agentid only if it is used as an escape
character.
================(Build #1859 - Engineering Case #354297)================
The QAnywhere Agent was not sending network status notification or push notification
messages to the "system" queue. This problem was introduced after the 9.0.1
GA release, in build 1855, and is now fixed.
================(Build #1868 - Engineering Case #354843)================
QAnywhere client applications are now supported on the Pocket PC 2002 and
Pocket PC 2003 x86 emulators. Only the "scheduled" policy for the QAnywhere
Agent is supported though.
================(Build #1874 - Engineering Case #357494)================
If a large message was sent, the directory containing the QAnywhere store-and-forward
database may have temporarily contained many files with the suffix .log.
After the messages were synchronized to the QAnywhere server, these temporary
files would have been deleted. The amount of disk space consumed by these
temporary files was equal to the size of the message. The problem has now
been fixed, these temporary files will no longer be created.
================(Build #1877 - Engineering Case #358511)================
The QAnywhere Client qaagent.exe, version 9.0.1.1830 or later, could have
failed to start when run against a message store created with an earlier
version of qaagent. The error reported in the log would have been:
E.... InternalError: Procedure 'ml_qa_set_global_property' not found
E.... Source statement:
E.... QAnywhere Agent failed to initialize parameters
I.... QAnywhere Agent stopped
This problem has been fixed.
================(Build #1878 - Engineering Case #357497)================
If the QAnywhere client agent was being run with an "automatic" synchronization
policy, any errors that occurred during message synchronization (a communication
break, for example) would have caused the messages to not be synchronized
until a subsequent push notification was received from the server, or a new
message was sent by a client application. The problem is now fixed.
================(Build #1879 - Engineering Case #358974)================
The QAnywhere client would have automatically set the properties "ias_MimeType"
and "ias_MessageType" for each message that was sent. This has been changed.
Now the property "ias_MimeType" is no longer set for any message, since it
did not convey any more information than whether the message content was
text or binary, and this information is obtainable by other means. The property
"ias_MessageType" is also no longer set for regular messages. It is still
set for network status and other system messages that are sent to the "system"
queue. The purpose of this change was to reduce the amount of data sent
over the network by up to 59 bytes per message.
================(Build #1892 - Engineering Case #361517)================
The QAnywhere client import libraries for CE platforms were missing from
the install. This has been corrected; they are now placed in QAnywhere\ce\arm.30\lib
and QAnywhere\ce\x86.30\lib. This affects Windows CE C++ application development.
Windows CE .NET development was not affected.
================(Build #1898 - Engineering Case #362748)================
When launched on Windows CE systems, the QAnywhere Agent qaagent.exe, would
have displayed a continuous "spinning wheel" wait cursor. This has been
fixed; the wait cursor now appears briefly and then disappears.
================(Build #1899 - Engineering Case #362752)================
When the QAnywhere Agent was launched for the first time, if a QAnywhere
client application was concurrently polling for messages, the agent could
have hung during start up. This scenario occurred when the QAnywhere Agent
and the QAnywhere client application were both connecting to a message store
in an already running database server. This has been fixed; the QAManager/QATransactionalManager
Open() methods now fail if the message store has not been fully initialized.
The error code returned in this case is QAError::COMMON_MSG_STORE_NOT_INITIALIZED.
As well, a new command line option "-si" (store initialize) has been added
to the QAnywhere Agent. This switch tells the qaagent to initialize a database
as a QAnywhere message store, and exit. To solve the original problem, do
the following in this order: (1) start the database server, (2) invoke qaagent
with -si to initialize the message store, (3) start the QAnywhere client
application that polls for messages, (4) invoke qaagent when it is desired
to transfer messages to/from the MobiLink server.
================(Build #1908 - Engineering Case #364260)================
The QAnywhere Agent was launching dbmlsync with the user 'ml_qa_user', and
the password associated with this user, hard-coded in the connection string.
Thus if the default QAnywhere user 'ml_qa_user' was deleted from a message
store, the QAnywhere Agent would not have been able to perform message transmission
to a MobiLink server. This has been corrected. Now, the QAnywhere Agent launches
dbmlsync with the user and password supplied via the "-dbauser" and "-password"
command line options in the connection string.
================(Build #1912 - Engineering Case #363826)================
Every time the synchronization of messages between a QAnywhere client and
server failed, the client log file would have grown roughly in proportion
to the size of the messages that failed to be synchronized. For limited-memory
devices, this could be a problem, especially if the QAnywhere client
synchronization policy was "scheduled" with a small synchronization interval.
In this scenario, the log file would grow in proportion to the outstanding
messages every scheduled interval until a successful synchronization occurred.
Once a successful synchronization occurred, however, the log file would
have been truncated down to a small size. This problem has now been corrected.
================(Build #1912 - Engineering Case #365210)================
When the QAnywhere Agent was invoked in quiet mode (-q command line option)
on Windows CE, the window did not appear in the System Tray. Now, when qaagent
is invoked with -q, the window is minimized to the System Tray. As well,
a new command line option has been added: -qi, which causes the window to
be completely hidden (and not appear in the System Tray). This behaviour
is similar to that of the database server.
================(Build #1916 - Engineering Case #365955)================
On Windows CE, the C++ API method QATextMessage.readText(string, length)
returned the number of qa_char's read, including the null terminator, when
there were fewer qa_char's available than the number requested. This problem
has been fixed. The method now returns the number of non-null qa_char's read,
as documented.
================(Build #1928 - Engineering Case #368382)================
The QAnywhere Agent Stop utility, qastop, would have returned before the
agent had terminated. This has been fixed; qastop now waits for the qaagent
process to finish before returning. This allows more reliable control over
starting and stopping the QAnywhere Agent under program control.
================(Build #1933 - Engineering Case #368388)================
When the QAnywhere Agent was run on Windows CE systems with "automatic" policy,
and was idle without any messages having been sent or received, subsequent
messages would sometimes appear to be blocked due to the QAnywhere Agent
not transmitting the message, even though the device was connected to a network.
The problem could usually be worked around by shutting down the QAnywhere
Agent and restarting it while in network coverage. This has been fixed;
with the automatic policy, messages are now sent as needed.
================(Build #1935 - Engineering Case #371389)================
When the QAnywhere Agent was set up to use a primary server and a secondary
server, and it was performing synchronizations frequently, a failover to
the secondary server may have occurred, even though the primary server was
reachable. This has been fixed. Two new command line options have now been
added: -fr <n>, which specifies the number of retries to connect to the primary
server after a connection failure (default 0), and -fd <n>, which is the
delay, in seconds, between retry attempts to the primary server (default
0). It is recommended that the retry delay be a relatively small value,
perhaps 10 seconds or less.
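The retry-before-failover behaviour that -fr and -fd introduce can be sketched in a few lines. The following is an illustrative Python model of the logic described above, not QAnywhere source code; the connect callback and server names are hypothetical.

```python
import time

def connect_with_failover(connect, primary, secondary, retries=0, delay=0):
    """Try the primary server first; retry it `retries` times (as with -fr),
    sleeping `delay` seconds between attempts (as with -fd), before failing
    over to the secondary server. Returns whichever server accepted the
    connection, or raises if both are unreachable."""
    for attempt in range(1 + retries):      # the initial try plus -fr retries
        if connect(primary):
            return primary
        if attempt < retries:
            time.sleep(delay)               # -fd: delay between retry attempts
    if connect(secondary):
        return secondary
    raise ConnectionError("both primary and secondary servers unreachable")
```

With, say, -fr 3 -fd 10, a transient primary outage shorter than the combined retry window no longer triggers a spurious failover to the secondary server.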
================(Build #1938 - Engineering Case #370335)================
On Windows CE, the method QAManagerBase::putMessage would have returned FALSE,
and getLastErrorMsg() would have returned the error message "The message
store is too large relative to the disk free space on the device", even though
there was sufficient space in the message store. This has been fixed.
================(Build #1950 - Engineering Case #372406)================
Using the QAnywhere Agent with scheduled transmission rules would have resulted
in invalid messages being received. This has now been fixed. This was a regression
introduced with changes for Engineering Case 371389.
================(Build #1951 - Engineering Case #373130)================
The QAnywhere Agent window did not have a way to be minimized on some Windows
CE operating systems. This has been fixed by adding a Hide button to the
dialog, similar to the Database Server window.
================(Build #1953 - Engineering Case #373543)================
The QAnywhere client library did not have APIs to provide information on
the number of messages queued for sending and receiving. This has been addressed
by adding methods to QAManagerBase to give this information.
For C#, the following enumeration type was added:
public enum QueueDepthFilter {
/// <summary>
/// Count both incoming and outgoing messages. System messages
/// and expired messages are not included in any queue depth
/// counts.
/// </summary>
ALL,
/// <summary>
/// Count only incoming messages. An incoming message is defined
/// as a message whose originator is different than the agent ID
/// of the message store.
/// </summary>
INCOMING,
/// <summary>
/// Count only outgoing messages. An outgoing message is defined
/// as a message whose originator is the agent ID
/// of the message store, and whose destination is not the
/// agent ID of the message store.
/// </summary>
OUTGOING
};
For C#, the following methods were added to QAManagerBase:
/// <summary>
/// Returns the total depth of all queues, based on a given filter.
/// <param name="filter">a filter indicating incoming messages,
/// outgoing messages, or all messages</param>
/// <exception cref="iAnywhere.QAnywhere.Client.QAException">
/// if there was an error
/// </exception>
/// <returns>the number of messages</returns>
/// </summary>
public int GetQueueDepth( QueueDepthFilter filter );
/// <summary>
/// Returns the depth of a queue, based on a given filter.
/// <param name="filter">a filter indicating incoming messages,
/// outgoing messages, or all messages</param>
/// <param name="address">the queue name</param>
/// <exception cref="iAnywhere.QAnywhere.Client.QAException">
/// if there was an error
/// </exception>
/// <returns>the number of messages</returns>
/// </summary>
public int GetQueueDepth( string address, QueueDepthFilter filter );
Similarly, for C++, the following methods were added to QAManagerBase:
/**
* Returns the total depth of all queues, based on a given filter.
* @param filter a filter indicating incoming messages, outgoing messages,
* or all messages
* @see QueueDepthFilter
* @return the number of messages, or -1 if there was an error
*/
virtual qa_int getAllQueueDepth(qa_short filter) = 0;
/**
* Returns the depth of a queue, based on a given filter.
* @param filter a filter indicating incoming messages, outgoing messages,
* or all messages
* @see QueueDepthFilter
* @param dest the queue name
* @return the number of messages in the queue, or -1 if there was an error
*/
virtual qa_int getQueueDepth(qa_const_string dest, qa_short filter) = 0;
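The incoming/outgoing semantics quoted in the comments above can be modelled directly. The sketch below is illustrative Python (the dictionary-based message representation is hypothetical), mirroring the documented definitions: a message is incoming when its originator differs from the message store's agent ID, outgoing when its originator is the agent ID and its destination is not, and system or expired messages are never counted.

```python
ALL, INCOMING, OUTGOING = range(3)

def queue_depth(messages, agent_id, flt=ALL, address=None):
    """Count messages according to the QueueDepthFilter semantics above."""
    depth = 0
    for m in messages:
        if m.get("system") or m.get("expired"):
            continue  # system and expired messages are excluded from all counts
        if address is not None and m["queue"] != address:
            continue  # the per-queue overload restricts to one queue name
        incoming = m["originator"] != agent_id
        outgoing = m["originator"] == agent_id and m["destination"] != agent_id
        if flt == ALL or (flt == INCOMING and incoming) \
                      or (flt == OUTGOING and outgoing):
            depth += 1
    return depth
```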
================(Build #1996 - Engineering Case #381729)================
In relatively rare circumstances, the QAnywhere .NET client could have thrown
the following exception:
System.NullReferenceException: Object reference not set to an instance of
an object.
at Microsoft.Win32.Win32Native.CopyMemoryAnsi(StringBuilder pdst, IntPtr
psrc, IntPtr sizetcb)
at System.Runtime.InteropServices.Marshal.PtrToStringAnsi(IntPtr ptr)
at iAnywhere.QAnywhere.Client.QATextMessage.get_Text()
when reading the Text property of QATextMessage. This has been fixed.
================(Build #2016 - Engineering Case #387508)================
Message transmission rules are specified using the -policy <filename> command
line option of the QAnywhere Agent. If the last rule in the rules file did not
end in a newline, the rule was ignored. This is now fixed such that the last
rule no longer needs to be terminated by a newline.
================(Build #1819 - Engineering Case #345783)================
If no "automatic" rule was specified in the transmission rules file, then
it was assumed that no messages were filtered. This meant that even if scheduled
rules were specified, they would have had no effect. The work-around was to
create an automatic rule of the form "automatic = 1 > 2" to ensure all messages
would be filtered, except those satisfying scheduled conditions.
Now there is an implicit rule associated with each user mentioned in the
rules file that causes all messages to be filtered, except those explicitly
allowed.
Example:
Before this change a rules file containing:
[someUser]
start time '12:00:00' every 6 hours = myPriority in ( 'low', 'medium' )
would have been equivalent to a rules file containing
[someUser]
automatic=
To get the correct behaviour previously, the rules file would have had to
have been:
[someUser]
automatic = 1 > 2
start time '12:00:00' every 6 hours = myPriority in ( 'low', 'medium' )
Now, the rules file can be simply:
[someUser]
start time '12:00:00' every 6 hours = myPriority in ( 'low', 'medium' )
Note that before this change, the following rules worked as expected:
[someUser]
automatic = myPriority = 'high'
start time '12:00:00' every 6 hours = myPriority in ( 'low', 'medium' )
That is, there was only a problem when scheduled rules were used without
automatic rules.
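The change in the default behaviour can be modelled as follows. This is an illustrative Python sketch of the rule semantics described above, with hypothetical predicates standing in for rule conditions; it is not the actual rules engine.

```python
def transmit_now(message, automatic_conditions, implicit_filter=True):
    """Decide whether a message is transmitted immediately.
    Before the fix (implicit_filter=False), a rules file with no
    'automatic' rule behaved as if every message matched, so scheduled
    rules never had any effect. After the fix (implicit_filter=True),
    a message is held back unless an explicit automatic condition
    allows it, leaving the rest to the scheduled rules."""
    if not automatic_conditions:
        return not implicit_filter   # old: send everything now; new: hold all
    return any(cond(message) for cond in automatic_conditions)
```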
================(Build #1819 - Engineering Case #345790)================
If a transmission rule contained a syntax error, then an error report would
have been written to the MobiLink log file, and the rule would have been
ignored. Now, the behaviour has been changed so that rule errors detected
on server start-up are reported and cause the MobiLink server to fail to
start.
================(Build #1827 - Engineering Case #347725)================
The QAnywhere JMS connector is a bridge between QAnywhere messaging and JMS
messaging. If the connection to the external JMS messaging provider was
temporarily lost, and the QAnywhere JMS connector subsequently received a
message destined for JMS while the connection was down, the connector would
have failed and no longer delivered messages to the external JMS provider.
This would not have affected receiving messages from the external JMS
provider that were destined for QAnywhere. To restore connector functionality,
the MobiLink server hosting QAnywhere needed to be restarted. Now, temporary
connectivity problems with the external JMS messaging provider will not affect
delivery of messages. Once connectivity is restored, the messages will be
delivered.
================(Build #1830 - Engineering Case #346648)================
Messages that had expired, or had been delivered, would have remained on
the server indefinitely. Now, messages will, by default, be deleted when
the final state of a message has been synchronized with both the target and
originator clients.
================(Build #1830 - Engineering Case #348652)================
If a QAnywhere message was sent to a JMS client through the QAnywhere JMS
connector, it was possible that high priority messages would not have taken
precedence over lower priority messages. Now, higher priority messages will
take precedence over lower priority messages.
================(Build #1838 - Engineering Case #348646)================
Message throughput at the QAnywhere server may have appeared slower than
expected. The synchronization of the QAnywhere server with the QAnywhere
client included a download acknowledgement that kept a server worker idle
for roughly half of each synchronization. The download acknowledgement is
now no longer used.
================(Build #1850 - Engineering Case #352416)================
Message synchronizations between a QAnywhere client and the QAnywhere server
may have failed with an error indicating a uniqueness constraint violation
on the table ml_qa_repository_staging. It was possible, although rare, that
after a server failure all subsequent message synchronizations from a particular
client would have failed. This is now fixed.
To work around the problem, execute the following SQL statement, against
the consolidated database, with a user having DBA authority:
alter table ml_qa_repository_staging drop unique( seqno )
The unique index being dropped is unnecessary, and is the cause of the problem.
================(Build #1856 - Engineering Case #353595)================
An invalid JMS destination name supplied by a QAnywhere message, when sent
to the JMS connector, would have halted the connector. That is, no more messages
could have been sent through the connector. Now, when the connector receives
a message containing an invalid JMS destination, an error will be reported,
but the JMS connector will continue to process messages.
================(Build #1936 - Engineering Case #369796)================
The ReplyToAddress and InReplyToID properties were not being mapped properly
when a message crossed between QAnywhere and JMS.
If a QAnywhere-generated message was sent to a JMS client via the QAnywhere
JMS connector, any ReplyToAddress specified on the QAnywhere message (using
setReplyToAddress() in C++ or the ReplyToAddress property in C#) was mapped
over to the JMS property QAReplyToAddress. This differed from the documentation,
which indicated that the ReplyToAddress was mapped to the JMS property ias_ReplyToAddress.
Similarly, the InReplyToID specified on the QAnywhere message (using setInReplyToID()
in C++ or the InReplyToID property in C#) was mapped over to the JMS property
QAInReplyToID rather than the documented ias_InReplyToID.
If a JMS message was sent to a QAnywhere client via the QAnywhere JMS connector,
the JMSReplyTo specified on the JMS message (using setJMSReplyTo()) was mapped
over to the QAMessage property ias_ReplyToAddress rather than to the actual
ReplyToAddress of the message. Hence the ReplyToAddress could not be accessed
by calling getReplyToAddress() in C++ or the ReplyToAddress property in C#.
Similarly, the JMSCorrelationID specified on the JMS message (using setCorrelationID())
was mapped over to the QAMessage property ias_InReplyToID rather than to
the actual InReplyToID of the message. Hence the InReplyToID could not be
accessed by calling getInReplyToID() in C++ or the InReplyToID property in
C#.
This problem has been fixed to behave as documented.
================(Build #1938 - Engineering Case #370190)================
QAnywhere messages received by the JMS connector, that contain an invalid
JMS Destination, are normally placed in the dead-letter queue (specified
using the connector property ianywhere.connector.outgoing.deadMessageAddress).
If the dead-letter address referred to a QAnywhere client message store,
and that client subsequently received the message, then the copy of that
message would have remained in the server repository indefinitely. This problem
is now fixed.
================(Build #1949 - Engineering Case #372236)================
If the QAnywhere JMS connector received a JMS message having a non-null JMSCorrelationID,
then the QAnywhere server may have failed to receive the message indicating
a NullPointerException. The problem would have occurred even if the JMSCorrelationID
did not refer to a QAnywhere message id. This problem has been fixed.
================(Build #1950 - Engineering Case #372374)================
The JMS connector dead message address is an address to which messages are
forwarded if it is determined that the message is undeliverable (for example,
the address is not a valid JMS address). If the dead message address was
badly formed (for example, missing an agentid), then the undeliverable message
would be queued but would be otherwise unreceivable. Now, if the dead message
address is badly formed, the connector will fail to start with an appropriate
connector initialization error message. This will ensure messages queued
against a dead message address will always be receivable.
================(Build #1951 - Engineering Case #372743)================
The QAnywhere JMS connector places a message in the dead-letter queue when
it is determined the message is undeliverable. Binary content messages that
were placed in the dead-letter queue were losing their content. This has
been fixed, such that any new binary messages put in the dead-letter queue
will not lose content.
================(Build #1957 - Engineering Case #373519)================
If a QAnywhere connector was configured to connect to WebSphere MQ, and the
connector received a JMS BytesMessage with a zero length, the message would
have been ignored. The MobiLink log file would have included an error message
indicating "NullPointerException". With this fix, zero-length BytesMessages
are received correctly.
================(Build #1957 - Engineering Case #373526)================
If a QAnywhere connector was configured to connect to WebSphere MQ, and a
QAnywhere message sent to the connector contained an InReplyToID (QAMessage::setInReplyToID()
in C++, QAMessage.InReplyToID property in .NET), then the message would not
have been sent onwards to WebSphere MQ. The MobiLink log would show an error
indicating an invalid "JMSCorrelationID". If the connector was configured
with a dead message address, then the message would be re-addressed to that
address. Now, this problem will no longer occur.
================(Build #1977 - Engineering Case #378305)================
Sending an empty BytesMessage through the JMS connector may have caused the
connector to crash due to an uncaught NullPointerException. This problem
has been fixed.
Also, the handling of reply-to addresses through the JMS connector was changed
to match the documentation.
================(Build #1979 - Engineering Case #377349)================
If a JMS server associated with a QAnywhere JMS connector went down, then
some QAnywhere messages directed towards JMS, via the connector, may have
been incorrectly routed to the dead message address instead. This has been
fixed such that QAnywhere messages directed to a JMS server that is down,
will be queued until it comes back up.
================(Build #1979 - Engineering Case #378482)================
Incoming JMS Messages marked as "redelivered", were not being processed correctly
by the QAnywhere JMS connector. This could have resulted in messages being
lost if the connection between the JMS connector and the JMS server was broken
while the connector was in the process of receiving a message. Now redelivered
messages are handled correctly.
================(Build #1980 - Engineering Case #378610)================
If the QAnywhere server crashed, it was possible that an incoming JMS message
could have been acknowledged as received without the JMS message being forwarded
onwards as a QAnywhere message. This has been fixed.
================(Build #1982 - Engineering Case #379240)================
A client could have continuously failed to synchronize messages with a QAnywhere
server, using an ASE consolidated database, under the following circumstances:
1 - During the last message synchronization, the server committed the messages
from the client but the connection with the client dropped before the client
could receive the acknowledgement of that fact.
2 - A new client used the same message store id as a previous client message
store that was abandoned.
The problem was with the QAnywhere system procedure for handling primary
key conflicts, ml_qa_handle_error. If QAnywhere is being used with an ASE
back-end, the QAnywhere server schema can be patched by dropping and re-creating
the procedure from the fixed syncase.sql (or syncase125.sql) script.
Any new QAnywhere ASE server schemas created with the fixed scripts will
not have the problem.
================(Build #1991 - Engineering Case #380502)================
If the QAnywhere server contained several thousand undelivered messages (perhaps
because the target clients had not recently synchronized) then the database
server might appear to be using a large percentage of the CPU on its host
machine. The typical solution was to add some indices on the QAnywhere system
tables. Unfortunately, one of the queries being used by the QAnywhere server
was not very efficient, and may have prevented the database server from using
an index. The problem query has been fixed.
================(Build #1992 - Engineering Case #380632)================
The JMS connector would crash when its connection to the consolidated database
was broken. Now, whenever the consolidated database throws a SQLException,
all of the connections in the connection pool are discarded. When the database
is accessed again, new connections are opened which will autostart the database,
instead of using old, invalid connection objects.
================(Build #1997 - Engineering Case #381750)================
It was possible, although likely rare, for the JMS connector to lose the
content of a BytesMessage when the connector failed to send the message due
to a JMS provider failure immediately before or during the send operation.
This has been fixed.
================(Build #2011 - Engineering Case #386385)================
While attempting to send a message to a JMS provider, if a QAnywhere connector
received certain varieties of Java exceptions from that provider, then the
connector would have stopped sending, and could only be restarted by restarting
the hosting MobiLink server. The variety of Java exceptions that would instigate
the problem were any that had a "null" return from the standard "getMessage"
Exception method. This type of exception is seen in certain situations where
a connection to the JMS provider is lost. This has now been fixed.
================(Build #2011 - Engineering Case #386422)================
If a MobiLink server was started with QAnywhere enabled using the -m command
line option and that QAnywhere server was using a connector to a JMS provider,
shutting down the MobiLink server using the GUI shutdown button, the text
console enter key, or dbmlstop, may not have shut down MobiLink cleanly.
That is, the MobiLink server output file may not have contained the line
"MobiLink server finished", and dbmlstop may have terminated with an
error indicating that it could not shut down MobiLink when, in fact, MobiLink
had shut down. This has now been fixed.
================(Build #2028 - Engineering Case #392213)================
The QAnywhere JMS connector could have been slow in the presence of many
thousands of messages queued up destined for the connector. This has been
fixed so that any new ASA database created to be used as a consolidated database
for the QAnywhere server includes a new index that greatly improves the performance
of the JMS connector in this situation.
================(Build #1827 - Engineering Case #347536)================
The TCP/IP-based synchronization streams now support the "Ignore" option.
This option specifies a hostname or IP address that the MobiLink server will
ignore when that host or IP makes a connection. The intent of this option
is to ignore requests from load balancers at the lowest possible level. This
prevents excessive output in both the MobiLink server log and the MobiLink
monitor output files. The option is only valid when used on the dbmlsrv command
line, as part of the -x option.
For example:
dbmlsrv9 ... -x tcpip(ignore=lb1;ignore=123.45.56.67) ...
This causes the MobiLink server to ignore requests from both the host "lb1"
and the IP address 123.45.56.67.
================(Build #1843 - Engineering Case #352188)================
If an HTTP synchronization client encountered a malformed cookie, either
in a "Set-Cookie:" HTTP header received from an HTTP server, or from a "set_cookie"
synchronization parameter, it would have failed as expected, but it would not
have set any stream error. Now, it will set the stream error to STREAM_ERROR_HTTP_UNABLE_TO_PARSE_COOKIE.
================(Build #1903 - Engineering Case #363313)================
When a device was placed in the cradle and synchronization initiated, the
ActiveSync provider could have erroneously reported that no applications
had been registered for synchronization. This has been fixed.
================(Build #1910 - Engineering Case #365083)================
If either the MobiLink Synchronization server or the MobiLink client encountered
an HTTP header in a request or a reply that was larger than 256 bytes, the
synchronization would fail. This has been fixed.
================(Build #1938 - Engineering Case #370166)================
When not going through a proxy server, MobiLink clients always used "localhost:80"
for the "Host:" HTTP header, instead of using the host and port passed via
the synchronization parameters. This has been corrected.
================(Build #1938 - Engineering Case #370317)================
HTTP synchronizations would have failed if a proxy or web server used ASCII
characters that were not between 0x20 and 0x7e (except for CR and LF) in their
HTTP headers. Now, only characters less than 0x20 are rejected.
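The relaxed validation can be expressed as a simple predicate over header bytes. This is an illustrative sketch of the rule described above, not the client's actual code:

```python
def header_ok(header_bytes):
    """After the fix, only control characters below 0x20 (other than CR
    and LF) cause an HTTP header to be rejected; bytes above 0x7e,
    which some proxies and web servers emit, are now accepted."""
    return all(b >= 0x20 or b in (0x0D, 0x0A) for b in header_bytes)
```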
================(Build #1941 - Engineering Case #370165)================
MobiLink clients synchronizing through Microsoft's ISA Server 2000 proxy
server, via HTTP or HTTPS, would have caused the MobiLink server to hang.
This has been fixed for non-persistent HTTP connections (synchronization parameter
'persistent=0'), but not for persistent connections.
================(Build #1951 - Engineering Case #372484)================
When the Synchronization Server was run using the HTTP stream (i.e. -x http),
the server could have failed to shut down in rare cases. This has now been
fixed.
================(Build #1951 - Engineering Case #372785)================
HTTP authentication would fail with Microsoft's Internet Security and Acceleration
Server 2004. When sending back an authentication challenge, ISA 2004 would
send back the HTTP header "Connection: Keep-Alive", even though the client
sent the header "Proxy-Connection: close", which caused the client to become
confused. This is fixed by having the client ignore connection keep-alive
requests.
================(Build #1997 - Engineering Case #381848)================
For secure TLS synchronization, the MobiLink client would have gone through
the list of trusted root certificates provided and failed with stream error
code STREAM_ERROR_SECURE_CERTIFICATE_EXPIRED if any of the certificates had
expired. This behaviour was incorrect. Now, the client will ignore expired
root certificates when reading them, and will only report an error during
the SSL/TLS handshake if no valid root certificates are found to match the
server.
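The fixed behaviour amounts to filtering at load time rather than failing outright. The sketch below is an illustrative Python model of that logic (the certificate records and field names are hypothetical); it is not the actual TLS implementation.

```python
def matching_root_exists(server_issuer, trusted_roots, now):
    """Expired root certificates are skipped when the trusted roots are
    read, instead of aborting the synchronization with
    STREAM_ERROR_SECURE_CERTIFICATE_EXPIRED. An error is reported during
    the handshake only if no valid root matches the server."""
    valid = [c for c in trusted_roots if c["not_after"] > now]
    return any(c["subject"] == server_issuer for c in valid)
```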
================(Build #1997 - Engineering Case #381849)================
The MobiLink client would only have loaded the first certificate in the trusted
root certificates file specified by the "trusted_certificates" synchronization
parameter. This has been fixed.
Note, this problem did not exist for UltraLite.
================(Build #1813 - Engineering Case #340386)================
When the MobiLink server encountered error -10050 (Expecting ? columns in
cursor, but found ?), no error was written to the log files or displayed on the
screen. This has been fixed.
================(Build #1816 - Engineering Case #344274)================
If a MobiLink server was run as a Windows service, and it had a dependency
on an Oracle service that was the consolidated database server, MobiLink
may have failed to start. The error would have mentioned being unable to
connect to the consolidated database.
Now, the MobiLink server retries connecting on startup. Retries are once
a minute for ten minutes. After this, failure to connect results in startup
failure.
================(Build #1817 - Engineering Case #338095)================
If a consolidated database had Proxy Tables or Remote Procedure Calls defined
to a remote server, and an error occurred when executing a script that referenced
the Proxy Table or Remote Procedure Call, then the MobiLink Server could
have gone into an infinite loop re-trying the same script indefinitely.
Now the handle_error procedure is called after these errors occur.
================(Build #1817 - Engineering Case #343849)================
When starting the MobiLink Server on Windows with an Internet Explorer older
than version 4.0, it would have failed with an error that certain entry points
could not be found in wininet.dll. The wininet dependency has now been removed
from the server. The same dependency has also been weakened on the client side,
so that a relatively up-to-date wininet.dll is required only when automatic
dialup is invoked.
================(Build #1817 - Engineering Case #344544)================
When using IBM's DB2 ODBC driver, the MobiLink server may not have been able
to insert or update BLOBs bigger than 32K. The error in the MobiLink log
(using IBM DB2 version 8.1), would have been:
DIAG [22001] [IBM][CLI Driver] CLI0109E String data right truncation.
SQLSTATE=22001 (-99999)
This has been fixed.
================(Build #1826 - Engineering Case #347513)================
If errors occurred during execution of the prepare_for_download or download
scripts in MobiLink servers, version 9.0.0 and up, older clients (i.e. 8.0.0
and 8.0.1) would have failed with the error "Communication error occurred
while receiving data from the MobiLink server". This problem is now fixed.
================(Build #1852 - Engineering Case #349198)================
When two synchronizations, using the HTTP or HTTPS stream from the same remote
database, occurred within the time specified by the MobiLink server's contd_timeout
value (30 seconds by default), it was possible that the second synchronization
would have reported the error "The user name 'rem1' is already synchronizing.
Concurrent synchronizations using the same user name are not allowed". It's
important to note that in some circumstances it was possible for the initial
upload sent by the remote to be discarded by the MobiLink server, and the
MobiLink server would ask the remote to resend starting at a different log
offset. This would also cause the "already synchronizing" error to occur.
This problem has now been fixed.
================(Build #1856 - Engineering Case #351109)================
When rows were uploaded from a Java UltraLite application to MobiLink, it
was possible for MobiLink to download those same rows back to the UltraLite
application, even though these rows were not changed and should have been
filtered from the download stream. This problem has now been corrected.
================(Build #1874 - Engineering Case #357506)================
Using statement-based scripting, uploaded updates can be ignored by not providing
the upload_update script. When an actual update row was encountered on the
upload, an error would have been generated indicating the missing script,
and the synchronization would have been aborted. This has now been corrected.
A workaround would be to provide any of the conflict resolution scripts (resolve_conflict,
upload_insert_new_row, or upload_insert_old_row).
================(Build #1876 - Engineering Case #343460)================
If a combination of inserts and updates existed in the upload stream for
a given table, and an error occurred when MobiLink was applying these operations,
it was possible for the wrong script contents to be written to the MobiLink
log when the error context information was being written. The correct script
contents are now written to the log.
================(Build #1878 - Engineering Case #357824)================
Some files required for MobiLink Java Authentication were not being installed
on AIX and HP systems. In particular, mlsupport.jar, pop3.jar, and imap.jar
were not being installed. This has been corrected.
================(Build #1886 - Engineering Case #359229)================
When using the iAnywhere Solutions ODBC driver for DB2, to insert BLOB data
bigger than 32K bytes, the data at location 32752 would have been corrupted
when fetching BLOB data back in chunks. The trailing byte of a buffer was
being set to 0x00. This problem has been corrected.
================(Build #1896 - Engineering Case #362015)================
The MobiLink Server would not have detected update conflicts, if the server
was running with the command line option -s n (where n was greater than 1)
or without -s at all, and the consolidated database was a Microsoft SQL Server
or Oracle database. Also, the synchronization had to have been a statement-based
upload, with no upload_fetch script, and an upload stream had to have contained
updates that had an unsatisfied WHERE clause in the consolidated database.
These updates would have failed due to the unsatisfied WHERE clause, but
the MobiLink server would have ignored these failures without giving any
error or trying to resolve these conflicts. Now if the batch contains updates
and the number of affected rows doesn't match the number of rows applied,
the server will roll back the operations and try them again using single-row
mode.
================(Build #1898 - Engineering Case #362597)================
It was possible for the MobiLink server to have crashed while doing secure
synchronizations. This has been fixed.
================(Build #1906 - Engineering Case #363990)================
The MobiLink server now recognizes the DataDirect 4.2 native Oracle ODBC
driver. This driver is required when an Oracle consolidated database uses
a character set (eg. JA16SJISTILDE) that is unknown to the wire-protocol
ODBC driver.
================(Build #1906 - Engineering Case #364205)================
The "backlog" parameter can now be used with the MobiLink synchronization
server for HTTP and HTTPS synchronization streams. This new parameter will
indicate the maximum size of the new connection backlog. While the backlog
is full, MobiLink will reject all new synchronization requests, causing synchronizations
to fail on the client side. The maximum backlog size can be set to any integer
value >= 0. By default, the backlog has no maximum size.
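As a sketch, assuming the usual MobiLink stream-parameter syntax for -x (the DSN name, port, and backlog value here are hypothetical), the maximum backlog could be set like this:

```
dbmlsrv9 -c "dsn=my_cons" -x http(port=80;backlog=50)
```

With this setting, once 50 connections are queued, further synchronization requests are rejected and fail on the client side.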
================(Build #1913 - Engineering Case #364349)================
If the MobiLink Synchronization Server encountered an HTTP request larger
than 1024 bytes, synchronization would fail. This has been fixed.
================(Build #1913 - Engineering Case #365223)================
The MobiLink Synchronization server may have displayed a "Protocol error" message
and aborted the synchronization, if
- it was running without the command line option -s or with -s n, where
n > 1;
- errors occurred during upload of table data;
- the handle_error (hand_odbc_error) script returned 1000; and
- the upload was a transactional upload.
This problem has been fixed.
================(Build #1925 - Engineering Case #367764)================
If the consolidated and remote databases had different collations, the MobiLink
Synchronization server may not have respected the column width defined in
the remote database for columns defined with char or varchar datatypes. This
may have caused the ASA client to crash. Now, the MobiLink server will display
an error, and abort the synchronization, if the length of the column value
is greater than the column width defined in the remote database.
================(Build #1934 - Engineering Case #369479)================
The MobiLink server could have crashed if all the following had occurred on
the same worker thread:
- an error was handled on upload on the last table
- a download cursor was opened for the first time on any table
- a subsequent sync used the download table script without having an upload
error handled, and there were multiple rows to download
This is now fixed.
================(Build #1951 - Engineering Case #372205)================
When the Synchronization Server was run using the HTTP stream (ie -x http),
a synchronization failure could have caused the server to crash. This has
been fixed.
================(Build #1952 - Engineering Case #372262)================
Connecting to the MobiLink server immediately after a successful autodial
could have failed with error WSAEHOSTUNREACH (10065). The fix is to repeatedly
attempt to open the session until it is successful, or the network_connect_timeout
expires (the default is 2 minutes).
================(Build #1953 - Engineering Case #372098)================
In rare circumstances, an upload could have failed with the error "Unknown
Client Error n", where n was some random large number. This error was usually
followed by another error reporting that "A protocol error occurred when
attempting to retrieve the remote client's synchronization log". Although
there are circumstances where this is a valid error to report, an instance
where this error was incorrectly reported has now been fixed.
================(Build #1957 - Engineering Case #367198)================
In some very rare cases, an UltraLite application may have marked a column
as a primary key, as well as an index column, thus causing the MobiLink server
to crash when the application synchronized. This problem has been fixed.
Now, the MobiLink server will give a protocol error when this situation is
detected. To avoid the protocol error, the table will need to be dropped
and recreated.
================(Build #1958 - Engineering Case #373623)================
When synchronizing a version 6 or version 7 MobiLink client to a version
9 MobiLink server, that was connected to a RDBMS other than ASA, it was possible
for the MobiLink server to report an "Invalid datetime format" error after
having sent the download stream to the remote. This has now been fixed.
================(Build #1965 - Engineering Case #375203)================
On Linux and HP-UX systems, if a large number of clients were doing synchronizations
at the same time against DB2, the MobiLink server could have crashed. The
error would be "Error: Unable to dump exception information. Received exception
while processing exception." or "Error: An unexpected exception has been
detected in native code outside the VM." This is a stack overflow problem.
This has been fixed by increasing the size of the stack on these platforms.
================(Build #1969 - Engineering Case #375421)================
The MobiLink server may have hung, with 100% CPU usage, when fetching user
scripts containing Japanese characters, if the consolidated database was
an Oracle database with Japanese character set JA16SJISTILDE, and the MobiLink
server was running on a Japanese OS and used the Data Direct native ODBC
driver 4.20.00.xx for Oracle. A workaround for a problem in the DataDirect
driver has been implemented.
================(Build #1970 - Engineering Case #366170)================
When using an Oracle stored procedure to insert multiple rows with a multiple
row array, the version 4.20.00.28 iAnywhere Solutions Oracle WP driver would
fail with the error
"ORA-01460: unimplemented or unreasonable conversion requested (ODBC State
= HY000, Native error code = 1460)". Single row inserts worked fine. This
only affected Windows systems, and has been fixed in version 4.20.00.81(B0067,U0062)
of the driver.
The following files have been updated:
wqora19.dll
wqora19r.dll
wqora19s.dll
wqicu19.dll
In the DSN setting, the option "Procedure Returns Results" must be selected.
================(Build #1970 - Engineering Case #377019)================
On Windows systems, when selecting numeric values from ASE 12.5.0.1, the
iAnywhere Solution 9 ASE Wire Protocol driver version 4.20.00.06, would have
failed with the error:
"[ODBC Sybase Wire Protocol driver][SQL Server] Arithmetic overflow during
implicit conversion of NUMERIC value '-2.200000000' to a NUMERIC field."
The problem is due to the maximum precision of a numeric column on ASE server.
This has been fixed in ASE 12.5.0.3. Version 4.20.00.63 of the ASE driver
works around the problem. To upgrade to this driver, the following files
need to be changed:
wqdb219.dll
wqdb219r.dll
wqase19s.dll
================(Build #1977 - Engineering Case #377471)================
Attempts to synchronize proxy tables would have failed with the error message
"Feature 'remote savepoints' not implemented". This has been fixed.
================(Build #1981 - Engineering Case #376840)================
The MobiLink Server could have crashed when using the HTTP link for synchronization,
if the remote stopped communicating with the server at a specific point in
the synchronization. The crash actually occurred when the server timed out
the connection. This has been fixed.
================(Build #1988 - Engineering Case #380086)================
The last_download_time (LDT) stored on the consolidated database, would have
been incorrectly set during an upload-only synchronization. Note, this wouldn't
affect the LDT stored at the remote, and sent up during synchronization,
only consolidated-side logic that directly inspected last_download_time in
the ml_user or ml_subscription tables would see an incorrect time. The LDT
is now set only when there is a download.
================(Build #1998 - Engineering Case #381111)================
When used with a case-sensitive database, the MobiLink client could have
behaved incorrectly if MobiLink user and publication names were not specified
in the same case as they were defined in the database. These identifiers
might have been specified in any of the following places:
- the CREATE/ALTER SYNCHRONIZATION USER statement
- the CREATE/ALTER SYNCHRONIZATION SUBSCRIPTION statement
- the dbmlsync command line
- the CREATE/ALTER PUBLICATION statement
The incorrect behaviour could take one of the following forms:
- MobiLink client could have crashed
- synchronizations could have failed inappropriately
- if a MobiLink user was subscribed to more than one overlapping publication,
operations that belonged to both publications might have been uploaded more
than once, resulting in server side errors during synchronization.
MobiLink user and publication names are now treated case-insensitively.
================(Build #2002 - Engineering Case #380750)================
MobiLink clients using HTTP and HTTPS can now set a liveness_timeout stream
option, as follows:
liveness_timeout=N
where N is the number of seconds for the timeout. The default is to have
no timeout (N=0). The synchronization will fail if the client waits N seconds
without any data from MobiLink server. Users must be careful when setting
this value. The value set should be short enough to timeout early enough
for an impatient user, but set long enough for the longest delay between
the MobiLink server receiving the upload and sending the download to the
client. Note, if a synchronization is aborted due to a timeout, subsequent
synchronizations should not be affected -- unless the MobiLink server is
still processing the previous synchronization, in which case a "user already
synchronizing" error will occur.
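A minimal sketch of setting this option from a dbmlsync command line, assuming the stream option is passed inside the client's communication address (the DSN and host name here are hypothetical):

```
dbmlsync -c "dsn=my_remote" -e "ctp=http;adr='host=mlserver;liveness_timeout=120'"
```

In this sketch the synchronization fails if the client waits more than 120 seconds without receiving any data from the MobiLink server.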
================(Build #2021 - Engineering Case #388858)================
When synchronizing an ASA MobiLink Client that had multiple publications,
MobiLink would have allocated new memory to store the publication information
on each synchronization. Now, the memory is freed after the synchronization
completes.
================(Build #2021 - Engineering Case #388904)================
The MobiLink server could have failed to gather the synchronization scripts
for a given table and reported the error "Error fetching table script t1.begin_synchronization",
even though table t1 did not have a begin_synchronization script. This problem
was more likely to have occurred when using an Oracle consolidated database
and the "iAnywhere Solution 9 - Oracle Wire Protocol" ODBC driver. This
problem has now been fixed.
================(Build #2044 - Engineering Case #394331)================
When scanning the transaction log to determine which changes need to be uploaded,
if dbmlsync first found a DML statement on a table (for example, an insert),
and then later found a DDL statement on the same table (for example, an ALTER
TABLE), dbmlsync should have failed and reported an error similar to "Table
't1' has been altered outside of synchronization at log offset X". If the
table in question (or its owner) had a name that required double quotes
around it in the log file (such as reserved words or numeric names such as
"42"), then dbmlsync would not have detected that the schema of the table
had changed and would not have reported the error. Also, if the ALTER TABLE
statement that was executed included a comment either before the ALTER TABLE
statement or between "ALTER TABLE" and the table name, dbmlsync would also
have failed to detect that the schema of the table had changed and would
not have reported the error. Dbmlsync will now report the error "Table 't1'
has been altered outside of synchronization at log offset X" when either
of these situations arise.
================(Build #1814 - Engineering Case #342817)================
When using the MobiLink authentication utility (dbmluser), updating a user
password may have caused an ODBC error, although the update would have succeeded.
This error message is no longer generated.
================(Build #1815 - Engineering Case #346733)================
The security certificate generation utility, gencert, would have failed to
accept input redirection from a file (eg. gencert < input.txt). This has
been corrected.
================(Build #1815 - Engineering Case #346734)================
The security certificate generation utility, gencert, would always have returned
0, even when an error occurred. Now, if an error occurs, gencert's exit
code is EXIT_FAILURE (as defined in stdlib.h).
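Since gencert now exits with EXIT_FAILURE on error, scripts can check its exit code. A sketch (the input and output file names are hypothetical; input redirection from a file is supported, per Engineering Case #346733):

```
gencert < input.txt > gencert_output.txt
if [ $? -ne 0 ]; then
    echo "certificate generation failed" >&2
    exit 1
fi
```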
================(Build #1818 - Engineering Case #345109)================
When a Windows WM_DESTROY message was posted to any of the applications listed
below, the process would not have shut down and the console for the process
would have stopped responding, although the process did shut down properly
when a Windows WM_CLOSE message was received. It is unusual for a WM_DESTROY
message to be posted before a WM_CLOSE message. This problem is now fixed.
Executables affected:
dbmlsync.exe,
dbremote.exe,
ssremote.exe,
ssqueue.exe,
dbltm.exe,
dblsn.exe,
qaagent.exe,
dbmlsrv9.exe
================(Build #1819 - Engineering Case #346623)================
Two new behaviors have been added to the Server-initiated Synchronization
Listener Utility's message handler:
1. The post action now turns a numeric string into a message ID implicitly,
without going through message registration.
2. The post action will now try to interpret the destination as a window
title, if no window class with the specified name is found.
New usage:
action=post <window message>|<id> to <window class name>|<window title>
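For example, following the usage above, a numeric message ID could be posted directly to a window identified by its title (the ID and window title here are hypothetical; titles containing spaces require the quoting support added in Engineering Case #346125):

```
dblsn -l "action='post 1034 to MyAppTitle'"
```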
================(Build #1820 - Engineering Case #346125)================
When using the Server-initiated Synchronization Listener utility's -l command
line option to specify a message handler, it was not possible to specify
messages with a space. Now, both the message and the window class name can
optionally be single quoted, allowing them to contain spaces (to enter a
single quote within the string, use two single quotes). The entire action
string itself can also be surrounded by single quotes; in that case, all
single quotes within the action string need to be doubled.
Example 1: Posting my message to my class
-l "action='post ''my message'' to ''my class''';"
Example 2: Posting my'message to my'class
-l "action='post my''''message to my''''class';"
Example 3: Also posting my'message to my'class
-l "action='post ''my''''message'' to ''my''''class''';"
================(Build #1823 - Engineering Case #343085)================
The UDP listener only processed one message per iteration, rather than processing
all queued messages. This is now fixed, so users no longer need to set a
small polling interval, even when the notifier send rate is high. A new upper
bound of 15 seconds is now imposed on the -i polling interval for UDP; an
interval value higher than 15 will be treated as 15 seconds internally.
================(Build #1842 - Engineering Case #350857)================
When more than one SMTP gateway was enabled in the notifier settings, only
one could have been used. Although more than one SMTP gateway is not usually
needed, this problem is now fixed.
================(Build #1843 - Engineering Case #348166)================
An application using the DBSynchronizeLog function of the Database Tools
interface could have crashed, if the msgqueuertn function pointer in the
a_sync_db structure was set to NULL. Now, if the pointer is set to NULL,
the function default implementation is to sleep for the requested time period.
================(Build #1845 - Engineering Case #351756)================
An application connecting to the MobiLink redirector, when configured to
use multiple MobiLink servers, could hang under the following situation:
The application connected to the redirector, which then selected one of
the MobiLink servers to send the request. If this server did not respond,
the redirector detected this and marked the server as dead. The request
would then have been routed to another MobiLink server. If a second application
then connected to the redirector, and it initially happened to select the
dead MobiLink server to service the request, it would eventually have been
routed to a live MobiLink server, but a third application connecting to the
redirector would have caused the application to hang.
Note that this problem could have occurred with a live MobiLink server,
but was timing-dependent, in the following case: the live server would have
had to have been marked dead, and the background thread that updates liveness
state information, would not have updated the state yet. This was unlikely,
unless the sleep interval for the background thread was high, and the number
of requests was high. In that case, the state information was more likely
to be out of date with respect to the MobiLink server when the third client
connected.
These issues have been fixed.
================(Build #1850 - Engineering Case #344018)================
The MobiLink user authentication utility dbmluser would have crashed if it
couldn't determine the default collation from the locale setting. The documentation
is incorrect, the default collation does not fall back to 1252LATIN1 on single-byte
machines and 932JPN on multi-byte machines. The default collation actually
would have become 'unknown', (or in some cases 'ISO_BINENG'), which was a
collation that dbmluser did not expect.
Now the problem in determining the default collation has been corrected,
as well as the cause of the crash.
================(Build #1858 - Engineering Case #353829)================
When the Listener utility was installed on Palm devices running Palm OS v5.2,
and the MESSAGE or MESSAGE_START keyword was not specified in a message handler,
the Listener could have crashed when processing such a message. This problem
did not occur when using the older Palm OS v3.5.x. This has now been fixed.
================(Build #1859 - Engineering Case #353931)================
The SMS listener for the Sierra Wireless AirCard 555 network card, (maac555.dll),
may have delayed processing of a pre-existing message until a new message
arrived. This has now been fixed.
================(Build #1893 - Engineering Case #363625)================
A situation where a client request could have potentially crashed the ISAPI
redirector has been fixed.
================(Build #1896 - Engineering Case #362219)================
With Server Initiated Synchronization, the upload tracking synchronization
could have occurred up to a minute after a _BEST_IP_CHANGED_ internal message
was issued by the Listener. This latency has now been removed, although the
1 minute retry attempt is unchanged.
================(Build #1896 - Engineering Case #362221)================
With Server Initiated Synchronization, the Listener may have failed to expand
the $adapters action variable to a useful adapter name, and left it as an
empty string for some network adapters. An alternative method of finding
the adapter name has now been added, and in the worst case an active adapter
will be named "unknown adapter".
================(Build #1899 - Engineering Case #362725)================
Some SSL clients could have rejected certificates generated with the Certificate
Generation utility gencert. Beginning with version 8.0.2, gencert added
an extended key usage field to certificates it generated. Since this does
not seem to be accepted universally, it has been removed.
================(Build #1903 - Engineering Case #363426)================
The following invalid command would have caused the Listener utility to crash,
instead of bringing up the usage text:
dblsn.exe -l
This has been fixed.
================(Build #1912 - Engineering Case #365465)================
The redirector for Apache would have truncated HTTP headers that were longer
than 128 bytes. This has been fixed.
================(Build #1913 - Engineering Case #365466)================
The redirector for Apache would have forwarded the 'Authorization' header.
This has been changed so that the redirector now ignores this header and
does not forward it. It is not expected that webservers will run behind
the redirector, hence there is no need to forward this Authorization header,
which could contain a userid and a password in clear text in the case of
Basic authentication.
================(Build #1946 - Engineering Case #371870)================
When using Server-Initiated Synchronization, the Listener could have hung
on shutdown, leaving an unresponsive console. A race condition has been fixed.
================(Build #1994 - Engineering Case #381331)================
The Listener utility may have crashed, if it was shutdown while it was attempting
a long running tracking synchronization. Tracking synchronizations are small
and typically complete or fail quickly. In some rare circumstances, synchronizing
again with a known host where no MobiLink server was running on the agreed
port, seemed to cause the delayed failure. A crash could also happen when
a shutdown was requested during such a delay. This has been fixed.
================(Build #2027 - Engineering Case #391582)================
The Listener utility may have hung on shutdown. This has now been fixed.
================(Build #1970 - Engineering Case #376713)================
Inserting BLOBs or CLOBs bigger than 32k, via the iAnywhere Solution 9 DB2
WP driver 4.20.00.25( 9.0.1GA) or 4.20.00.92(B0072,U0065)( 9.0.2GA), would
have failed with the error:
"[ODBC DB2 Wire Protocol driver]Unexpected Server Reply (Conversational
Protocol error code: 146C (PRCCNVCD)). [-1217] ['HY000']"
This problem has been fixed in version 4.20.00.115(B0080,U0069). To upgrade
this driver, the following files should be changed:
wqdb219.dll
wqdb219r.dll
This fix is for Windows systems only.
================(Build #1960 - Engineering Case #365502)================
When upgrading Mobilink from version 6.0.x, 7.0.x or 8.0.x, if the consolidated
database was Oracle version 8i, the upgrade script would have failed with
the following error:
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00905: object FPSADMIN.ML_SCRIPT_UPGRADE is invalid
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
execute ml_script_upgrade -- failed
The problem is that Oracle 8i does not support the conversion from LONG
to CLOB. This has been fixed by using an Oracle provided function TO_LOB()
instead.
================(Build #1975 - Engineering Case #377276)================
Setting up or upgrading an ASE 12.0 database server, used as a consolidated
database by MobiLink, would have failed. The problem was a result of trying
to set up the QAnywhere system tables, so the errors can be ignored if QAnywhere
is not being used. The setup and upgrade scripts have been fixed.
================(Build #1818 - Engineering Case #344321)================
If an online transaction log was truncated, just when dbremote started scanning
the online log, dbremote may have resent transactions that had been sent
in previous messages. This problem is now fixed.
================(Build #1818 - Engineering Case #345076)================
If a transaction log directory contained many files that were not transaction
log files, or the directory contained some files that were not transaction
log files and dbremote was run many times in non-hover mode, dbremote may
have reported the errors: "too many open files" or "missing transaction log
files" and then stopped. Everything would have been fine when dbremote was
restarted. This problem is now fixed.
Note, this problem also affected MobiLink's dbmlsync.
================(Build #1842 - Engineering Case #349918)================
SQL Remote may have reported the error "No log operation at offset x" and
then exited. After examining all the transaction logs, one may find that
the log offset x was in the header page of a transaction log file.
This problem could have occurred if:
1. the last log operation in the latest offline transaction log was a "redo_release";
and
2. in the online transaction log, there is no log operation of commit, rollback,
or release and no actual data to be replicated.
This problem is fixed now.
================(Build #1871 - Engineering Case #355574)================
The SQL Remote Message Agents dbremote and ssremote, the SQL Remote Open
Server ssqueue, the Log Transfer Manager dbltm, and MobiLink Client dbmlsync,
could have hung when attempting to write a message to the output log that
was greater than 64Kb in size. This has now been fixed.
================(Build #1886 - Engineering Case #360190)================
SQL Remote (dbremote) may have hung when scanning transaction logs if the
server logged a checkpoint that has a previous checkpoint pointing to itself.
This problem has been fixed.
Note, this problem also affected MobiLink's Synchronization Client and the
ASA Replication Agent.
================(Build #1928 - Engineering Case #354147)================
When the log scanning tools were looking for the log file with the desired
starting log offset, if that log file had a transaction in it which began
in an earlier log file, but the log file that contained the start of the
transaction could not be found, an error would have been reported similar
to "Missing transaction log(s) in between file AC.log (ending at offset X)
and file AD.log (starting at offset Y)". The offsets reported would have
been incorrect, and upon inspection, the ending log offset of AC.log would
have likely been the same as the starting log offset of AD.log. The correct
error is now reported, "Missing transaction log(s) before file AA.log".
================(Build #1974 - Engineering Case #376895)================
When the database option "Delete_old_logs" was set to "on", SQL Remote, (as
well as MobiLink, and the ASA RepAgent), may have reported "missing transaction
log(s)...". This would have occurred in the following situation:
1) the online transaction log that contained the last replication/synchronization
offset had been renamed, say to X;
2) the offline log X contained transactions that started from an earlier log,
say Y; and
3) the log Y contained transactions started from an earlier log, say Z.
Transaction log Z may have already been deleted. This problem is fixed now.
================(Build #1924 - Engineering Case #367312)================
When run on Unix platforms, the SMTP message link for the ASA Message Agent
dbremote, would have started and then reported "Execution Complete", with
no errors. This was due to dbremote attempting to load the library libdbencode9_r.so,
rather than the correct library libdbencod9_r.so. Now dbremote loads the
correct library. A workaround is to create the following symbolic link in
the $ASANY9/lib directory: "ln -s libdbencod9_r.so libdbencode9_r.so".
Note, this problem also affected the SMTP message link for the ASE Message
Agent ssremote, on Unix platforms.
================(Build #1826 - Engineering Case #338398)================
When using the VIM message type, the dbremote icon would not always disappear
from the system tray after dbremote completed. This has now been fixed.