I came across an interesting case last week. Excessive parse time can sometimes be a problem, especially with complex SQL, in which case you may see “Parse CPU to Parse Elapsed” percentages which are quite low.
This ratio is derived from the CPU time spent parsing divided by the elapsed time for the parse.
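The same figure can be checked outside the AWR report; a minimal sketch, assuming access to V$SYSSTAT (note these counters are cumulative since instance startup, whereas AWR works on the delta between snapshots):

```sql
-- Same calculation the AWR "Instance Efficiency" section performs,
-- taken from the cumulative statistics in V$SYSSTAT.
select round(100 * cpu.value / ela.value, 2) as parse_cpu_to_elapsed_pct
from   v$sysstat cpu,
       v$sysstat ela
where  cpu.name = 'parse time cpu'
and    ela.name = 'parse time elapsed';
```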
If the ratio isn’t near 100%, it means there is competition for CPU during parsing. If you have one parse to do and plenty of CPU to spare, the ratio will be 100%. If there are millions of statements being parsed at the same time, you’ll see a lower ratio. So I was somewhat surprised when I saw a ratio of 0.0%.
Further investigation revealed that the “failed parse elapsed time” was not zero and far higher than it should be. One possible scenario for such numbers is an application that detects parsing errors and re-submits the SQL. This might be a good guess, or there might be another explanation, but how do you prove such a theory?
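The raw statistic behind that line of the report sits in the time model views; a quick sketch for checking the current values (they are reported in microseconds):

```sql
-- Failed-parse time is tracked separately in the time model views.
-- Values are cumulative since instance startup, in microseconds.
select stat_name,
       value / 1000000 as seconds
from   v$sys_time_model
where  stat_name in ('parse time elapsed',
                     'failed parse elapsed time',
                     'failed parse (out of shared memory) elapsed time');
```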
A 10035 trace (yes, 10035, NOT 10053) collects parsing errors and writes them to the alert log as they occur. I did a little experiment with this SQL:
drop table test;
select colx from test;
Naturally I’m going to get some errors: there is no table TEST, and even if there were, it would be dropped. The problem with these kinds of errors is that they can slip by unnoticed. Grid Control shows nothing.
Unless the parse errors are truly stupendous in nature, there is nothing to see. The AWR report can give a clue, but you have to be looking very carefully. Here’s an AWR report I collected when I generated LOTS of parse errors:
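If you want to trend the statistic across snapshots rather than eyeball one report, the AWR history views hold the same counter; a sketch, assuming SELECT access to the DBA_HIST views (the values are cumulative, so it is the deltas between consecutive snapshots that matter):

```sql
-- Track "failed parse elapsed time" across AWR snapshots.
-- A sudden jump between two snap_ids pinpoints when the errors started.
select s.snap_id,
       s.end_interval_time,
       t.value / 1000000 as failed_parse_secs
from   dba_hist_sys_time_model t,
       dba_hist_snapshot       s
where  t.snap_id         = s.snap_id
and    t.dbid            = s.dbid
and    t.instance_number = s.instance_number
and    t.stat_name       = 'failed parse elapsed time'
order  by s.snap_id;
```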
The Time Model output shows 0.34% for “failed parse elapsed time”. So to capture this information I set up a script.
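The script boils down to switching the event on system-wide, running the offending statements, and switching it off again. A sketch along those lines (the exact event syntax and level can vary between Oracle versions, so treat this as illustrative rather than the script I actually ran):

```sql
-- Enable event 10035 so parse failures are recorded in the alert log
alter system set events '10035 trace name context forever, level 1';

-- Reproduce the parse errors
drop table test;
select colx from test;

-- Switch the event off again when done
alter system set events '10035 trace name context off';
```

Remember this is set at the system level, so on a busy instance it will record every failing parse, not just yours.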
After running the script, we can see the following entries in the alert log:
So next time somebody asks for a 10035 trace, don’t assume they are number dyslexic; just check what it is they are after, and remember to look in the right place for the results.