This month I would like to “roll through” the recently held IDUG NA Virtual 2021 – another virtual conference, but still full of tons of good stuff! Read on, dear reader, to see what piqued my interest!
Do you like being Profiled?
Profiling is getting much better (not easier to use, mind you!). The best bit was “Remove ‘ping’ and validation queries”, and the two easiest ways to do this are either to enable Sysplex WLB or to use the JDBC isValid() Java method. The typical SELECT * FROM SYSIBM.SYSDUMMY1 is a real pain …
PBG Sad future…
The idea of PBGs has, from my POV, changed quite dramatically: from “Set MAXPARTITIONS to 100 and forget it” to “Set MAXPARTITIONS to 1 and DSSIZE to 64GB”. I still think this is more of a design flaw than a good idea. If the space map search is such a performance killer – fix it …
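Just to make that “new style” concrete, here is a minimal DDL sketch (database, table space and buffer pool names are purely placeholders):
  CREATE TABLESPACE BIGTS01 IN MYDB
    MAXPARTITIONS 1
    DSSIZE 64 G
    BUFFERPOOL BP1
    LOCKSIZE ANY;
Remember that a DSSIZE greater than 4 GB requires the underlying data sets to be in a DFSMS data class with extended addressability.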
Performance tidbits
Just using the “new” feature PCTFREE xx FOR UPDATE yy, or the PCTFREE_UPD ZPARM, is a bit of a no-brainer if you have variable-length or compressed records and lots of updates that then cause overflow records (better called indirect references) – these bloat your data and cause more getpages over time. A sort of creeping death! The only cure is periodic REORGs to remove all of these indirect references, but it is better to delay that as long as possible – hence the FOR UPDATE percentage is a very nice feature!
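The syntax itself is a one-liner – a sketch with a placeholder table space, reserving 10% general free space plus an extra 5% purely for updated rows (like any PCTFREE change, it takes effect the next time the data is loaded or reorganized):
  ALTER TABLESPACE MYDB.MYTS
    PCTFREE 10 FOR UPDATE 5;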
From Db2 11, you can make your DGTTs NOT LOGGED, which can give dramatic performance boosts! Naturally, it is a code change: you must add NOT LOGGED to the back end of the DECLARE statement. There have been really amazing savings, like a 60% elapsed time reduction and greater than 20% CPU saving! Of course, your mileage may vary. Always test these things before blindly changing all of your DECLARE statements!
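A minimal sketch of such a DECLARE (the column list is obviously just an example):
  DECLARE GLOBAL TEMPORARY TABLE SESSION.WORKLIST
    (ORDER_NO  INTEGER  NOT NULL,
     STATUS    CHAR(1)  NOT NULL)
    ON COMMIT DELETE ROWS
    NOT LOGGED;
Bear in mind that with NOT LOGGED the default behaviour is ON ROLLBACK DELETE ROWS, so a ROLLBACK empties the table rather than restoring it – make sure your application logic can live with that.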
zHyperWrite: If you are running with DS8000 and Metro Mirror (PPRC) then this really is a great idea! Update the IECIOSxx parmlib member with HYPERWRITE=YES, issue the SETIOS HYPERWRITE=YES z/OS command, and set ZPARM REMOTE_COPY_SW_ACCEL to ENABLE to get large reductions in active log write times! Note that HyperSwap must be enabled as well.
zHyperLink (the baby brother of zHyperWrite) is also fantastic if you have intensive reads (especially synchronous reads) and lots of active log writing. (Who does *not* have this combo???) Set the zHyperLink ZPARM to ENABLE for both read and write, DATABASE for read only, or ACTIVELOG for write only. The downside is that you may see CPU increase slightly, but the payback in elapsed time reduction (over 20%) is well worth it.
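Pulling those settings together in one place – a rough sketch of the knobs involved, not a complete procedure:
  IECIOSxx parmlib member . . : HYPERWRITE=YES
  z/OS console command  . . . : SETIOS HYPERWRITE=YES
  Db2 ZPARM (zHyperWrite) . . : REMOTE_COPY_SW_ACCEL=ENABLE
  Db2 ZPARM (zHyperLink)  . . : ZHYPERLINK=ENABLE   (or DATABASE / ACTIVELOG)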
Too many statistics can be bad for you
Also heard a few times was the statement “Never collect STATISTICS as part of REORG INDEX” – I have firmly believed in this for years and am very happy that it is now the modern mantra!
Run RUNSTATS rarely, if at all! How often does the statistical data actually change? During a REORG – always! But “on its own”? Why? Creation or change of an index – naturally. Mass delete and LOAD with brand new data – clearly! Insert another million rows? Really needed?? Go logarithmic, like our RTDX RunstatsAvoidance feature.
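If you follow that logic, the natural home for statistics is inline in the REORG itself – a rough sketch with placeholder object names:
  REORG TABLESPACE MYDB.MYTS
    STATISTICS TABLE(ALL) INDEX(ALL)
    UPDATE ALL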
Utility updates
High speed REORG: SORTDATA NO RECLUSTER NO – just for actioning DDL as fast as possible. The downside is that the CLUSTERRATIOF might be killed and, even worse in my humble opinion, all of the RTS counters are set to zero! Of course, if you are clever, you could simply back up the relevant RTS rows before the REORG and put them back afterwards…
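Roughly what that could look like – object names are placeholders, MYSCHEMA.RTS_SAVE is a hypothetical copy table you would have created beforehand, and the RTS “rescue” is strictly a sketch of the idea, not a blessed procedure:
  -- Save the RTS counters first
  INSERT INTO MYSCHEMA.RTS_SAVE
    SELECT * FROM SYSIBM.SYSTABLESPACESTATS
     WHERE DBNAME = 'MYDB' AND NAME = 'MYTS';
  -- High speed REORG purely to materialize pending DDL
  REORG TABLESPACE MYDB.MYTS SORTDATA NO RECLUSTER NO
  -- Afterwards, UPDATE SYSIBM.SYSTABLESPACESTATS (and SYSINDEXSPACESTATS)
  -- from the saved rows to put the counters back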
REORG got a parallelism boost when building PIs (PH28092 & PH33859), naturally only when no NPIs exist. If running on the z15, think about setting ZPARM UTILS_USE_ZSORT (PH28183) to improve performance.
LOAD got a bunch of improvements (PI96136, PH11255, PH19073, PH23105) and the capability to override row change timestamps with PH25572. It also got the ability to skip the MAXASSIGNEDVALUE update in Db2 12 with PH28476.
RUNSTATS got the new STATCLGSRT ZPARM (PI74408 and PH03678) which then added the STATCLGMEMSRT parameter. RUNSTATS also got an update to avoid “overdoing things”:
- If BOTH or HISTOGRAM is specified, the COUNT is limited to 100
- If the sampling value is less than 50% and FREQVAL COUNT BOTH or LEAST is specified, it is changed to MOST
These changes avoid flooding SYSCOLDIST with basically useless data that just slows down PREPARE and BIND/REBIND, as well as possibly causing the optimizer to make a bad judgment.
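For reference, this is the kind of FREQVAL specification being reined in – a sketch with placeholder object and column names:
  RUNSTATS TABLESPACE MYDB.MYTS
    TABLE(MYSCHEMA.MYTAB)
      COLGROUP(STATUS) FREQVAL COUNT 10 MOST
    SHRLEVEL CHANGE UPDATE ALL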
TEMPLATE support also finally got Large Block Interface! New BLKSZLIM parameter in PH30093.
Compressed?
Generally speaking, random table access will show better performance than sequential table access for compressed data, as the row is only decompressed if required.
Generally speaking, sequential index processing (index scans) works much better than random index access for compressed indexes. The opposite of tables! Why? For indexes, the entire page must be read in and decompressed in one call.
So, once again, know your data, know your access patterns!
New Compressed?
Huffman object compression came along in FL509 and is well worth a look, as it can give very good results, but, as always, test and check before blindly setting all compression to Huffman!
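The DDL side is trivial – a sketch with a placeholder table space (FL509 and the appropriate hardware support are prerequisites, and the change only takes effect at the next REORG or LOAD REPLACE):
  ALTER TABLESPACE MYDB.MYTS COMPRESS YES HUFFMAN;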
Index room for growth?
On index design, the Holy Grail is to set the PCTFREE equal to the number of new index entries expected between REORG runs divided by the total number of entries. FREEPAGE should be used to avoid index page splits. It may also help to increase the index page size and to use index compression.
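As a purely made-up example: if you expect roughly 50,000 new index entries between REORG runs on an index holding 1,000,000 entries, that works out to 50,000 / 1,000,000 = 5%, so PCTFREE 5 would be the target.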
Last mix
DataSharing: OA59122, which went PE, and OA60394, which fixed it, both correct a problem of “contention management” that can be CPU expensive when the contention has gone – but what happens next?
Machine Learning – Unsupervised (sounds like me most of the time) … especially “data munging” …
And finally…
A whole ton of really interesting stuff all about deprecated items… oh, wait a second, that was my presentation! Anyway, it is good for a read, to learn all about the nasty things lurking in the catalog! Do not forget we have freeware that shows it all to you: MigrationHealthCheck (MHC) is the name!
As always it is a pleasure to hear any comments or criticisms!
TTFN,
Roy Boxwell