I was mildly surprised to find this function in our code base:
public static double ToDouble( object value )
{
    if ( value is double )
    {
        return (double)value;
    }
    else if ( value is decimal )
    {
        return decimal.ToDouble( (decimal)value );
    }
    else
    {
        return double.Parse( value.ToString(), System.Globalization.CultureInfo.InvariantCulture );
    }
}
It is called from several places in the project, including:
- with the ScannedQty column from a deserialised DataTable,
- with the Qty and ScannedQty columns in an unrelated DataTable received from an ADO.NET connection.
I consider it inherently unsafe, dangerous, and fraught with elusive errors, whereas its author, my colleague, disagrees. He wrote it as a universal one-for-all device to read a double from an object regardless of the source and type of the underlying data, and he uses it both while reading incoming XML and while loading data from the database. Sometimes, he says, the actual value may be a double, sometimes a decimal, and sometimes (e.g. when it comes from an .XML file) a string.
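To show the kind of elusive error I mean (a hypothetical case, but it follows directly from the function as written): the else branch converts the value to text with the current culture, yet parses it back with the invariant culture, so anything that is neither a double nor a decimal, say a boxed float, behaves differently depending on the regional settings of the machine:

object qty = 1.5f;            // a float: neither double nor decimal, so ToDouble() takes the string branch
string text = qty.ToString(); // uses the CURRENT culture: "1.5" on en-US, "1,5" on fr-FR or de-DE
double d = double.Parse( text, System.Globalization.CultureInfo.InvariantCulture );
// On an en-US machine this yields 1.5; on a fr-FR machine the invariant parse does not read "1,5"
// as one and a half, and the failure surfaces far away from its real cause.

Such a call site may work for months on one machine and misbehave on another, which is exactly the kind of error that is hard to trace back.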
I proposed that he rewrite the access to the various data sources with an explicit cast to the appropriate type, which is known and fixed for every use case, e.g.:
public static double DblFromDblObj( object value ) { return ( double )value; }
public static double DblFromDecObj( object value ) { return ( double )( decimal )value; }
public static double DblFromStrObj( object value ) { return Double.Parse( ( string )value, CultureInfo.InvariantCulture ); }
and replace all invocations of ToDouble() with one of these, depending on the actual data type involved (a sketch of the resulting call sites follows below), but he is loath to do so because
a. it will be a waste of time since the code works as is, and
b. if a data type changes in the database or the incoming XML file, the program will break.
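To make the proposal concrete, the two call sites I listed at the top would then read roughly like this (a sketch only: adoRow and xmlRow are made-up names, and I am assuming the column arrives as a decimal from ADO.NET and as a string from the XML-deserialised table):

double scannedFromDb  = DblFromDecObj( adoRow["ScannedQty"] );  // ADO.NET DataTable: the provider returns decimal
double scannedFromXml = DblFromStrObj( xmlRow["ScannedQty"] );  // deserialised XML DataTable: the value arrives as string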
In my opinion, a strict and unambiguous protocol should be established and enforced in the implementation by means of clean access functions that fail on a type mismatch, since such a failure indicates an error in the data and a violation of the protocol. This kind of fragility is the basis of reliability.
If the programmer cannot decide the type of a field in the payload and writes dynamic type checking for so trivial a task, there is clearly something amiss: a lack of attention to the data exchange protocol, or mere laziness on the part of the programmer that kept him from studying which type is returned where. For example, when querying a database via ADO.NET or another SQL-based mechanism, it is dead wrong to sweep the dust under the carpet:
object valueObj = myDataRow["ScannedQty"];
double value = Convert.ToDouble( valueObj );
Direct type-casting is cleaner, faster, and more transparent (because it shows the unambiguous internal type to the reader of the code):
double value = ( double )( decimal )myDataRow["ScannedQty"]; // NOT NULL field
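If the field may be NULL, the same explicit style still works. A sketch, assuming the Field&lt;T&gt; extension from System.Data.DataSetExtensions is available:

decimal? scannedQty = myDataRow.Field&lt;decimal?&gt;( "ScannedQty" );  // DBNull becomes null instead of throwing
double value = scannedQty.HasValue
    ? ( double )scannedQty.Value
    : 0.0;                                                        // or whatever the protocol defines for a missing value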
Therefore, the refactoring that I propose is not a waste of time but a useful improvement that makes the code cleaner and the program more reliable. It is just as well that the program will stop working if the underlying type of the object changes, because by this token we shall know that somebody broke the protocol.
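To illustrate with a hypothetical change: suppose the column's type in the database is later altered from decimal to int. The explicit cast throws at the exact line where the protocol is broken, while the generic ToDouble() silently keeps working through its string branch and hides the change:

object obj = myDataRow["ScannedQty"];        // now a boxed int, not a decimal
double viaHelper = ToDouble( obj );          // silently succeeds via the string branch and masks the change
double viaCast = ( double )( decimal )obj;   // throws InvalidCastException right here: the breach is visible at once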
These arguments of mine, however, failed to convince my colleague, so I should be grateful for others’ opinions to help us come to a common understanding. How do you resolve such conflicts at your job?
P.S.: This seeming a more informal forum than StackOverflow, I hope it will not be downvoted and closed as opinion-based. Please, let me know if I am wrong and then be so good as to suggest a better place to ask my question.
P.P.S.: Please, be polite to both of us, for I will show this question to my colleague.