The Dynamic Statement Cache (DSC) has been around for a long, long time (since DB2 V5 in fact), but I still meet people, and more importantly DB2 subsystems, who/which are not clued up on the correct way to monitor, check and tune the DSC so that it, and the enterprise, fly along at a nice cruising speed.
The DSC is where any PREPAREd statement lands if caching is enabled. And if the very same SQL statement is then PREPAREd again, this PREPARE can either be completely avoided – very good – or made into a “short” PREPARE – good. Remember that the primary authorization Id is also part of the “key”, so you can still get duplicate DSC records but for different Authorizations. (In detail, all of the BIND options [CURRENTDATA, DYNAMICRULES, ISOLATION, SQLRULES, and QUALIFIER], special registers [CURRENT DEGREE, CURRENT RULES, CURRENT PRECISION, CURRENT PATH and CURRENT OPTIMIZATION HINT], the Authorization Id, declared cursor data [HOLD, STATIC, SENSITIVE, INSENSITIVE, SCROLLABLE], parser options [APOST or QUOTE delimiter, PERIOD or COMMA decimal delimiter, date and time format DECIMAL options] and the complete attribute string must all be 100% the same – in fact it is a MIRACLE that there is ever a hit!).
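To make the matching rule concrete, here is a tiny sketch (table and column names are purely illustrative). These two statements return exactly the same rows, but the statement text is not byte-for-byte identical, so each one gets its own entry in the DSC:

SELECT EMPNO FROM EMP WHERE WORKDEPT = ?
select empno from emp where workdept = ?

Only a later PREPARE of exactly the first (or exactly the second) text, under the same authorization Id and with the same BIND options and special registers, has any chance of a cache hit.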
In a perfect world, the SQL in the cache should stay there forever. However, reality dictates that two days is about the best you can aim for, and I have seen shops where two hours is good. Of course, at 10 minutes you are just thrashing the cache and you might as well switch it off to improve performance. Remember that sometimes no cache is better than a thrashing one.
The major controller of the DSC is the ZPARM CACHEDYN, which you simply set to YES or NO. Set to NO, it severely limits the use of the cache, as the cache is really two caches: the LOCAL cache for the thread and the GLOBAL cache for everyone. The LOCAL is storage in the EDMPOOL using a simple FIFO (First In First Out) queue and is controlled by the MAXKEEPD ZPARM, which is normally set and left at 5,000 statements. The GLOBAL is in the EDMPOOL (EDMP) and uses a sophisticated LRU (Least Recently Used) algorithm, but its true size has varied a lot over the years:
| Version | MAXKEEPD default | CACHEDYN default | Notes |
|---|---|---|---|
| 5 | 5000 | NO | EDMPOOL was the cache |
| 6 | 5000 | NO | If CACHEDYN “YES” then new EDMDSPAC with valid range 1K – 2,097,152K |
| 7 | 5000 | NO | New EDMDSMAX with valid range 0 – 2,097,152K and default 1,048,576K |
| 8 | 5000 | YES | EDMDSPAC and EDMDSMAX removed. New EDMSTMTC with valid range 5,000K – 1,048,576K and new “opaque” ZPARM CACHEDYN_FREELOCAL with valid values 0 or 1 and default 0 (off) |
| 9 | 5000 | YES | CACHEDYN_FREELOCAL default changed to 1 (on) and EDMSTMTC default changed to 56,693K |
| 10 | 5000 | YES | EDMSTMTC default changed to 113,386K |
The LOCAL size has not changed a bit, but the other defaults and the location of the GLOBAL have changed regularly! Now the GLOBAL is 113,386K and that is HUGE!
Setting the KEEPDYNAMIC bind option to NO is also not brilliant for the DSC but if the ZPARM CACHEDYN is YES you can still get the GLOBAL cache benefit. This now leads us to the four flavors of PREPARE:
• Full – A skeleton copy of the SQL is not in the cache, or the cache is not active. Caused by a PREPARE or EXECUTE IMMEDIATE statement.
• Short – A skeleton copy of the PREPAREd SQL statement is copied to local storage.
• Avoided – The PREPARE is avoided by using full caching. The PREPAREd statement information is still in the thread’s local storage.
• Implicit – Due to limits, such as MAXKEEPD, a PREPARE cannot be avoided and DB2 will issue the PREPARE on behalf of the application.
A Full PREPARE takes the most time to process (think of 100 here), then a Short PREPARE (think of one here), and then an Avoided PREPARE (think of zero here).
So now in a parameter matrix:
|  | CACHEDYN NO | CACHEDYN YES |
|---|---|---|
| KEEPDYNAMIC(NO) | No caching | GLOBAL cache |
| KEEPDYNAMIC(YES) | LOCAL cache | LOCAL and GLOBAL (called FULL) |
LOCAL keeps no skeletons in the EDMP, only allows Full PREPAREs, allows you to PREPARE once over COMMITs (only for WITH HOLD cursors), and statement strings are kept across commits, which gives you Implicit PREPAREs.
GLOBAL keeps skeletons in the EDMP, 1st PREPARE is FULL, others are SHORT, allows you to PREPARE once over COMMITS (Only for WITH HOLD cursors) and no statement strings are kept across commits.
FULL keeps skeletons in the EDMP, 1st PREPARE is FULL, others are SHORT, keeps PREPAREd statements over COMMITS (avoided PREPARES) and statement strings are kept across commits which gives you Implicit PREPARES.
So remember MAXKEEPD > 0, CACHEDYN=YES and on the BIND statement KEEPDYNAMIC(YES) to get the most out of the DSC but watch out for memory growth.
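For reference, a rough sketch of where these settings live; the values and names below are illustrative only, so check your own installation and BIND jobs. CACHEDYN, MAXKEEPD and EDMSTMTC are system parameters set via the DSN6SPRM macro in the DSNTIJUZ installation job, while KEEPDYNAMIC is a BIND option:

CACHEDYN=YES      (DSN6SPRM: enable the global dynamic statement cache)
MAXKEEPD=5000     (DSN6SPRM: dynamic statements kept past commit with KEEPDYNAMIC(YES))
EDMSTMTC=113386   (DSN6SPRM: statement cache size in KB, the DB2 10 default)

BIND PACKAGE(MYCOLL) MEMBER(MYPROG) KEEPDYNAMIC(YES)

Here MYCOLL and MYPROG are invented names; use your own collection and package.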
OK, so now we know what knobs and buttons to fiddle with. The next question is: Where should I concentrate my efforts? Now you can easily extract the DSC with an EXPLAIN statement and you can dump the contents and then analyze all of the data. There are LOTS of columns full of great data but you must remember to switch on IFCID 318 when you start DB2; otherwise all the really good data will be zero!
To start the IFCID 318 you must issue a

-START TRACE(PERFM) CLASS(31) IFCID(318)

command whenever DB2 has started up. It is also recommended to always have accounting trace class(3) started, either by using the installation parameter “SMF ACCOUNTING” field with a value of “*” or “3” on panel DSNTIPB, or by issuing a

-START TRACE(ACCTG) CLASS(3)

command.
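With the traces running, snapping the cache is then a one-liner. A minimal sketch, assuming the DSN_STATEMENT_CACHE_TABLE (created from the standard EXPLAIN table DDL) exists under your SQLID:

EXPLAIN STMTCACHE ALL;

This writes one row per cached statement, including the IFCID 318 runtime statistics, into DSN_STATEMENT_CACHE_TABLE, ready for the analysis below.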
Now of course you have between 5,000 and 120,000 rows of data to work with. First off the bat are the heavy hitters, where you simply ORDER BY “CPU” or “GETPAGES” to pick out simple fixes where an INDEX or an additional column could avoid a sort or a data scan (a query sketch follows below).

But then you run out of low hanging fruit and you will want to aggregate your SQL. The best method is to group together the SQL which is “the same” but “different”. To do this you must make sure that a given SQL statement has the same tables in all of the FROM clauses and that the predicates are the same. You can basically ignore the SELECT list and also the literals in the predicates to begin with. This then aggregates similar SQL together so that you can see the “big picture”. I have been to sites where 120,000 statements actually shrank down to only 900 different statements. The vast majority of the “duplicates” were caused by literal usage in predicates instead of parameter markers, which is of course anathema to the DSC *but* could be a performance requirement! Remember that “It Depends” is the correct answer, because the alternative is trying to wrap your head around 120,000 individual statements!
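Going back to the heavy hitters, here is a minimal sketch of that kind of query. The STAT_* and STMT_* column names come from DSN_STATEMENT_CACHE_TABLE but can vary slightly by version, so treat this as illustrative:

-- Top 20 cached statements by accumulated CPU; swap STAT_CPU for STAT_GPAG to rank by getpages instead
SELECT STMT_ID, PRIMAUTH, STAT_EXEC, STAT_GPAG, STAT_CPU, STMT_TEXT
  FROM DSN_STATEMENT_CACHE_TABLE
 ORDER BY STAT_CPU DESC
 FETCH FIRST 20 ROWS ONLY;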
Once you have a method to do this (or you buy our SQL PerformanceExpert or Bind ImpactExpert, which do this for you, of course!) you can repeatedly extract and check that all is “well” in the world of the DSC. Finally, you should start doing “baselines”, where you snap the cache and then, a month or so later or just before/after a new roll out, you snap the cache again and compare the two cache extracts against one another. This enables you to see which SQL is “new” in the Dynamic World, and if your cache allows it you can then build a “rolling DSC” that contains all the dynamic SQL at your shop. This is a very powerful audit and performance possibility that not many shops are using today. Just having the chance to easily see what the JAVA developers will be running next week in production is worth its weight in gold!
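As a very simple sketch of the baseline compare, assume SNAP_OLD and SNAP_NEW (invented names) are two snapshot copies you have kept, each holding, say, the first 1,000 bytes of the statement text in a VARCHAR column STMT_START along with the statistics you care about:

-- Dynamic SQL that is "new" since the last baseline
SELECT STMT_START FROM SNAP_NEW
EXCEPT
SELECT STMT_START FROM SNAP_OLD;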
OK, that’s all for this month. As usual, any questions or comments are gladly welcomed!
TTFN,
Roy Boxwell
Senior Architect