I found something really interesting about the float.Parse() method. I had always assumed that this parse method could convert any numeric string to a float value, as long as it did not overflow. However, it does not always do the conversion faithfully: if the string value is too small or too big, you may lose some precision.
Here I narrowed the issue down to a small test program:
private static void TestFloat()
{
    for (int index = 1; index < 10; index++)
    {
        // Build test strings such as 8.123456, 88.123456, 888.123456, ...
        string val = string.Format("{0}.123456", new string('8', index));
        float f = float.Parse(val);
        string sF = f.ToString();             // default ("G") formatting
        string sF1 = f.ToString("0.000000");  // fixed with six decimal places
        Console.WriteLine(
            @"string value: {0}(len: {5}); float value: {1}(len: {6}-{7}); string to float.Tostring() {0}=={1}? {3}; string to float.Tostring(xxx) {0}=={2}? {4}",
            val, f, sF1, val.Equals(sF), val.Equals(sF1), val.Length, sF.Length, sF1.Length);
    }
}
The result is very surprising:
string value: 8.123456(len: 8); float value: 8.123456(len: 8-8); string to float.Tostring() 8.123456==8.123456? True; string to float.Tostring(xxx) 8.123456==8.123456? True
string value: 88.123456(len: 9); float value: 88.12346(len: 8-9); string to float.Tostring() 88.123456==88.12346? False; string to float.Tostring(xxx) 88.123456==88.123460? False
string value: 888.123456(len: 10); float value: 888.1235(len: 8-10); string to float.Tostring() 888.123456==888.1235? False; string to float.Tostring(xxx) 888.123456==888.123500? False
string value: 8888.123456(len: 11); float value: 8888.123(len: 8-11); string to float.Tostring() 8888.123456==8888.123? False; string to float.Tostring(xxx) 8888.123456==8888.123000? False
string value: 88888.123456(len: 12); float value: 88888.13(len: 8-12); string to float.Tostring() 88888.123456==88888.13? False; string to float.Tostring(xxx) 88888.123456==88888.130000? False
string value: 888888.123456(len: 13); float value: 888888.1(len: 8-13); string to float.Tostring() 888888.123456==888888.1? False; string to float.Tostring(xxx) 888888.123456==888888.100000? False
string value: 8888888.123456(len: 14); float value: 8888888(len: 7-14); string to float.Tostring() 8888888.123456==8888888? False; string to float.Tostring(xxx) 8888888.123456==8888888.000000? False
string value: 88888888.123456(len: 15); float value: 8.888889E+07(len: 12-15); string to float.Tostring() 88888888.123456==8.888889E+07? False; string to float.Tostring(xxx) 88888888.123456==88888890.000000? False
string value: 888888888.123456(len: 16); float value: 8.888889E+08(len: 12-16); string to float.Tostring() 888888888.123456==8.888889E+08? False; string to float.Tostring(xxx) 888888888.123456==888888900.000000? False
As the output shows, the parsed result lost precision in 8 of the 9 cases!
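The limit comes from the float type itself: System.Single is a 32-bit IEEE 754 value that carries only about 7 significant decimal digits, so a string with more significant digits than that cannot be stored exactly, no matter how Parse() behaves. Here is a minimal sketch (not part of the original test; the method name is my own) that parses the same strings as float and as double to make the difference visible:

private static void CompareFloatAndDouble()
{
    for (int index = 1; index < 10; index++)
    {
        string val = string.Format("{0}.123456", new string('8', index));
        float f = float.Parse(val);    // ~7 significant decimal digits
        double d = double.Parse(val);  // ~15-17 significant decimal digits
        Console.WriteLine("{0,-18} as float: {1,-14} as double: {2}", val, f, d);
    }
}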
I found this issue when I took an input string from a text box, converted it to a float value, and saved it to the database. My tester found that the results were not consistent when the value was too big. At first I did not believe it, but after I reproduced the case I saw the same bizarre result, and I finally realized that the issue comes from the Parse() method.
Is there any way to get around this issue? It seems there is no way to save the exact value to the database. Even if I save the value as a string, eventually it may reach a point where the database presents the value in xxxEzz format, which may lose precision when the value is retrieved.
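One workaround worth considering (not something I tried in the test above) is to skip float altogether and use decimal on the .NET side, paired with a decimal/numeric column in the database. The decimal type keeps up to 28-29 significant digits, and its default ToString() never switches to scientific notation. A rough sketch, where the 18,6 column size is only an example:

// Sketch: parse the text-box input as decimal instead of float.
// Assumes the target column is something like decimal(18, 6).
string input = "88888888.123456";
decimal value;
if (decimal.TryParse(input, out value))
{
    Console.WriteLine(value);   // prints 88888888.123456 -- all digits kept
}
else
{
    Console.WriteLine("Not a valid number: " + input);
}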
It looks like we have to limit the values that can be entered to a realistic range, and then handle the value consistently from UI to database and vice versa.
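If float has to stay as the storage type, here is a rough sketch of the kind of UI-side check I mean. The helper name and the bounds are my own and purely illustrative; the bounds would need to be chosen so the total number of significant digits stays within roughly what a float can represent:

// Illustrative range check before converting text-box input to float.
// The bounds are examples only, not a recommendation.
private static bool TryReadFloat(string input, out float result)
{
    const float Min = -10f;
    const float Max = 10f;

    if (float.TryParse(input, out result) && result > Min && result < Max)
    {
        return true;
    }

    result = 0f;
    return false;
}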