Wrong-Errors Bugs: A New Class of Bug?
Inequalities are a special case: an inequality that compares a character column with a number or a date means something completely different if you convert the number side versus the character side, since characters and their equivalent numbers sort completely differently. In this case, Oracle's default is probably the only way to go--if you are comparing a character with either a number or a date in an inequality, you almost surely want the more meaningful non-character sort order, and the database should convert the character side, unless the developer makes the conversion explicit on the other side, or the character column stores values in a format guaranteed to sort the same way as the date or the number.
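The difference between the two sort orders is easy to demonstrate outside the database. A minimal Python sketch (not Oracle-specific) of why converting the "wrong" side of an inequality changes its meaning:

```python
# Character-style and number-style comparisons disagree:
# as strings, "9" sorts after "10"; as numbers, 9 sorts before 10.
char_side = "9" > "10"   # lexicographic comparison of strings
num_side = 9 > 10        # numeric comparison

print(char_side)  # True
print(num_side)   # False
```

The same pair of values satisfies the inequality under one conversion and fails it under the other, which is why the database must pick the conversion direction deliberately for inequalities.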
If I put together a wish list for the perfect way to handle mixed-type joins on equalities, it would have several requirements:
The join should be able to follow nested loops to a simple index on either side of the join, so the optimizer can choose the join order for best performance, even when the conversion is explicit on one (the wrong) side.
The chosen conversion should never result in conversion errors where errors could have been avoided by converting the other side of the join.
The rows returned by a plan that converts one side of the join must automatically match the rows returned by a plan that converts the other side of the join, without the developer having to think about the problem.
With a solution that delivers all three of these requirements, we can have our cake and eat it, too! Constraints turn out to be the answer.
If a character column always stored a number or a date in the same format, and all applicable character strings would successfully convert to the required type using that format, then conversions in both directions would produce the same results, and would avoid errors altogether. Given constraints that enforced such consistent formats, developers could safely ignore these subtleties. (As opposed to the current practice of ignoring them at their peril!) The simplest case would, for example, constrain the values of a character-type column to strings of digits (only) without left-hand zeros: strings guaranteed to convert to a number without a conversion error along the way, and to convert back to the identical string.
The constraints will often be more complex than this, however. Often, the mix of subtypes stored in the table only sometimes (for one or more subtypes) uses the character column to store a numerical foreign key, storing unconvertible strings (names, for example) for other subtypes. In such cases, the constraint must apply only to the subtypes where the character-type column stores numbers, and these subtypes should be explicitly defined (for example, with a Type column) or, if necessary, with some complex expression on some combination of table columns.
To take advantage of these subtype-specific constraints, developers will need to restrict on the subtype in the SQL, but they should be doing that anyway--if the query joins the character column to a number, the intent is surely to join only to those subtypes. You wouldn't want to join accidentally to the wrong subtype just because a couple of rows in that subtype happened (more or less accidentally) to have a character string that converted to a valid number.
Even with current limitations in databases' handling of errors, these constraints are a good idea--there is surely no reason to store numbers as characters in inconsistent formats, and it is surely a bad idea to allow unconvertible strings to be stored where you expect a string to convert to a number. If you currently have rows that violate such constraints, it is surely a good idea to find them and correct them. Unfortunately, most likely no individual knows every case where such constraints should be created on a complex legacy database, nor will you likely find this documented. Currently, we find these errors, at best (assuming super-diligent follow-up on errors), when the application happens to encounter a bad row and return an error, an expensive and unreliable way to uncover the vulnerability.
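As a concrete illustration of such a constraint, here is a sketch using SQLite from Python (purely for portability; Oracle's CHECK-constraint syntax differs), rejecting non-digit strings and left-hand zeros at insert time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# CHECK constraint: Char_Col must be all digits with no leading zero,
# so it always converts to a number and round-trips exactly.
conn.execute("""
    CREATE TABLE Orders (
        Char_Col TEXT CHECK (
            Char_Col NOT GLOB '*[^0-9]*'  -- digits only
            AND Char_Col NOT GLOB '0*'    -- no left-hand zeros
        )
    )
""")

conn.execute("INSERT INTO Orders VALUES ('456')")       # accepted
try:
    conn.execute("INSERT INTO Orders VALUES ('0456')")  # rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

With the constraint in place, bad rows are caught when written, instead of surfacing later as a data-dependent conversion error in some unlucky query.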
I propose an error-safe mode for SQL connections: in error-safe mode, conversions of columns (whether implicit or explicit) would not even be allowed in the WHERE clause unless a constraint already exists that guarantees the success of the conversion, and guarantees that the seemingly equivalent conversion on the other side of the equality really is perfectly equivalent. (For inequalities, the constraint would guarantee success of the conversion from character to number or date, but would not allow implicit conversions of the other side where that would result in a different set of rows owing to a non-equivalent sort.) Developers would work with this mode enforced (for example, via a new initialization parameter, in Oracle) while developing or enhancing the application, and many errors would surface even when testing against toy data volumes that would never yield errors without the parameter.
However, each of these errors would point to a potential corner-case bug that would be horribly hard to find, otherwise, and would point to a fairly simple constraint that would forever prevent that bug. Even on legacy production systems that mostly prevent changes by the customer, the customer could safely add these constraints, cleaning up bad data uncovered by this process, as needed, to meet the constraints. (You would not believe the number of February 30ths, April 31sts, and non-leap-year February 29ths I once found in a legacy-database character column! Cleaning these up can only be a good thing.)
Whenever the database found a mixed-type join, then, whether the conversion was explicit or implicit, it would find in the constraint definition a declared format in which to expect the character-stored numbers or dates, could safely convert either side of the comparison, and could join in either direction, with nested loops where these are optimal.
You can already kludge together useful type-format constraints with existing RDBMS functions. For example, in Oracle, you can verify that a character string forms a simple positive integer (or is null) with the pair of conditions:
(LTRIM(Char_Col,'0123456789') IS NULL AND NVL(SUBSTR(Char_Col,1,1),'1')!='0')
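The same pair of conditions can be mirrored in ordinary code. Here is a Python sketch of the check the Oracle expression performs (the function name is mine, for illustration):

```python
def is_simple_positive_integer(char_col):
    """Mirror of the Oracle condition pair: NULL passes, otherwise the
    string must be all digits with no left-hand zero, so TO_NUMBER
    round-trips it exactly."""
    if char_col is None:                         # NULL passes the constraint
        return True
    return (char_col.strip("0123456789") == ""   # LTRIM(...) IS NULL: digits only
            and not char_col.startswith("0"))    # SUBSTR(...,1,1) != '0'

print(is_simple_positive_integer("456"))   # True
print(is_simple_positive_integer("0456"))  # False -- left-hand zero
print(is_simple_positive_integer("45a6"))  # False -- non-digit character
```

Any string passing this check converts to a positive integer without error, and converting that integer back to a string reproduces the original exactly.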
If we want to see these constraints everywhere they belong, and if we want to see the RDBMS recognize them and take advantage of them to safely free up alternate join orders, such constraints should not be left as complex exercises for the user. Instead, let's assume that the RDBMS vendors create special-purpose functions for the job. For example, they could create a function PSEUDO_NUMBER(Char_Col, Fmt) that returns 'TRUE' if and only if Char_Col is a string that could result from Oracle's function TO_CHAR(Num, Fmt), where Fmt is a string that specifies a recognized number format. For example, PSEUDO_NUMBER('456', 'FM9999') would return 'TRUE', because TO_CHAR(456, 'FM9999')='456', while PSEUDO_NUMBER('0456', 'FM9999') would return 'FALSE', because no number will yield '0456' using the format 'FM9999'. A similar PSEUDO_DATE function would verify that a string encodes a date in the specified format.
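These proposed format-check functions (PSEUDO_DATE, and a number-format counterpart; neither exists in any real RDBMS today) amount to a round-trip test: parse the string in the declared format, reformat, and require an exact match. A Python sketch of the PSEUDO_DATE idea, using Python's format codes in place of Oracle's:

```python
from datetime import datetime

def pseudo_date(char_col, fmt):
    """Return 'TRUE' iff char_col could result from formatting some
    real date with fmt (a sketch of the proposed PSEUDO_DATE)."""
    try:
        parsed = datetime.strptime(char_col, fmt)
    except ValueError:          # unparseable, or an impossible date
        return 'FALSE'
    # Require an exact round trip, so '2001-4-30' fails '%Y-%m-%d'.
    return 'TRUE' if parsed.strftime(fmt) == char_col else 'FALSE'

print(pseudo_date('2001-04-30', '%Y-%m-%d'))  # 'TRUE'
print(pseudo_date('2001-04-31', '%Y-%m-%d'))  # 'FALSE' -- no April 31st
print(pseudo_date('2001-4-30', '%Y-%m-%d'))   # 'FALSE' -- wrong format
```

The round-trip requirement is what makes conversion on either side of a join safe: every accepted string maps to exactly one date, and that date maps back to exactly that string.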
Here, then, is the path from the current behavior to the new behavior I suggest for handling type-conversion errors and probably preventing at least 95 percent of the wrong-errors bugs, beginning with changes that the RDBMS vendors would need to make.
Create a couple of new parameters, settable at both the session and the system levels. (This requires new functionality from the database vendor.) The first parameter should specify simply that any error in a WHERE-clause condition or in a simple filter condition should be discarded if the row is discarded later in the execution plan. I think the new behavior (returning only errors in fully joined rows that survive all WHERE-clause conditions) is a safe new default, but we need the parameter in case an application requires consistency with past behavior. The second parameter, not on by default (for now), would trigger errors in any equality that matches a character-type column with a number-type or date-type value, unless a constraint guarantees that the match will be error-free and equivalent whichever side of the equality is type-converted.
Create new functions, such as PSEUDO_NUMBER and PSEUDO_DATE, that easily allow correct-format checks in constraints. This requires new functionality from the database vendor.
Have the database recognize the new constraints, and take advantage of them not only to trigger errors with the new type-conversions-guaranteed parameter, but also to permit conversions on either side of a dissimilar-types-matching equality, enabling more degrees of freedom for the optimizer. This requires new functionality from the database vendor.
Set both new parameters on, in a controlled development test environment. Play back as much of your application SQL as possible in this test environment, checking for errors, and create constraints that eliminate every error triggered by unsafe type conversions.
Roll the new constraints into production, fixing bad data as needed to enable the new constraints, and fixing any application flaws that lead to the bad data.
Set at least the new error-postponing parameter in production. If you're really serious about preventing wrong-errors bugs, set both new parameters in production. The second parameter will trigger occasional errors when you encounter new SQL that was never tested in the development environments with that parameter set, but the errors will be consistent, regardless of the execution plan and regardless of the data, and will always point to a new constraint required to avoid future wrong-errors bugs.
Remaining Wrong-Errors Vulnerabilities
Type conversions are 95 percent of the battle with wrong-errors bugs, and they require special treatment if we are to both eliminate these bugs and help the optimizer have maximum opportunities to find the best execution plan. Most of the rest of the problem is handled well enough by postponing errors until we see whether the execution plan discards the row before it is complete. A simple new generic function could help eliminate almost all of the rest of the problem, handling division-by-zero, tangent-of-90-degrees, and so on.
I propose a new function, TRAPPED_ERROR(), which can wrap around any expression at all, and which would return 'TRUE' if that expression would trigger an error (trapping the error rather than returning it), and 'FALSE' if the expression evaluates without an error. For example,

SELECT Num_Col1, Num_Col2 FROM Experimental_Results WHERE TRAPPED_ERROR(LN(Num_Col1/Num_Col2))='TRUE'

would yield those rows where Num_Col2 was zero, or where Num_Col1/Num_Col2 was negative, or even where the database hit an overflow error because the ratio was just too large for it to evaluate.
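The proposed TRAPPED_ERROR semantics can be sketched in Python as a wrapper that evaluates an expression and traps, rather than raises, any error (with math.log standing in for Oracle's LN):

```python
import math

def trapped_error(thunk):
    """Sketch of the proposed TRAPPED_ERROR: evaluate the expression,
    return 'TRUE' if it errors, 'FALSE' if it evaluates cleanly."""
    try:
        thunk()
        return 'FALSE'
    except Exception:
        return 'TRUE'

rows = [(10.0, 2.0), (1.0, 0.0), (-3.0, 2.0)]
# Analogue of: ... WHERE TRAPPED_ERROR(LN(Num_Col1/Num_Col2)) = 'TRUE'
bad = [r for r in rows
       if trapped_error(lambda: math.log(r[0] / r[1])) == 'TRUE']
print(bad)  # [(1.0, 0.0), (-3.0, 2.0)]
```

The division-by-zero row and the negative-ratio row are both flagged, while the healthy row passes; the same query with ='FALSE' would return only the rows that evaluate cleanly.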
This function would help you find bad rows much more easily than you currently can, and with it you could easily code SQL defensively wherever you have the potential for a wrong-errors bug. Even in the absence of the other new functionality I propose above, this new function would greatly help in working around wrong-errors problems.
Jonathan Gennick was a great help getting this rolling, and in good shape for publication. Thanks, Jonathan! My wife, Parva Oskoui, had very useful suggestions as well.
Dan Tow is an independent consultant, operating under the banner SingingSQL (www.singingsql.com). His experience solving Oracle-related performance problems goes all the way back to his 1989 hire by Oracle Corporation. He has a Ph.D. in chemical engineering from the University of Wisconsin at Madison.
In November 2003, O'Reilly Media, Inc. released SQL Tuning.
Sample Chapter 1, "Introduction," is available free online.