Hi! This month I wish to take a brief look at the inner workings of the Db2 Log. The topic is very broad and complex, so this newsletter really only skims the surface!
What is the Log?
The Db2 Log is the central place where all data changes, plus a whole lot more, get written away by Db2 as it goes about its normal work. Most shops allocate several large VSAM datasets for the Active Logs which, when full, get written off to Archive Logs.
Rules of the Log
How much Log do you need? The golden rules are all time-based:
1) At least 24 hours of coverage on your Active Logs
2) At least 30 days of coverage on your Archive Logs
Any less and you could seriously run into trouble. Breaking the 24-hour rule means that normal ROLLBACKs might suddenly start requesting tape mounts, which is not pretty, and breaking the 30-day rule might put you into a world of pain in a recovery scenario, when datasets have already been migrated off and scratched.
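To make the rules concrete, here is a minimal sizing sketch in Python. All the numbers (log counts, dataset sizes, write rate) are invented for illustration; plug in your own shop's figures:

```python
# Hypothetical sizing check for the 24-hour / 30-day rules.
# All figures below are illustrative assumptions, not measured values.

GIB = 1024 ** 3

def coverage_hours(num_logs, log_size_bytes, write_rate_bytes_per_hour):
    """Hours of history this many log datasets can hold at the given write rate."""
    return (num_logs * log_size_bytes) / write_rate_bytes_per_hour

# Example: 20 active logs of 4 GiB each, writing 2 GiB of log per hour.
active_hours = coverage_hours(20, 4 * GIB, 2 * GIB)
print(f"Active log coverage:  {active_hours:.1f} hours")   # 40.0 - meets the 24-hour rule

# Example: 800 archive logs of 4 GiB each at the same write rate.
archive_days = coverage_hours(800, 4 * GIB, 2 * GIB) / 24
print(f"Archive log coverage: {archive_days:.1f} days")    # 66.7 - meets the 30-day rule
```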
Take Care of Your Logs!
These Logs must be looked after and cared for as they save your company's life on a daily and hourly basis. Normally, no-one really cares about the logs as they just "work" and that's it! However, the Db2 Log is actually a bit of a bottleneck these days.
Remember LOG NO?
Many years ago, Roger Miller said "LOG NO datasets will be implemented in DB2 over my dead body", as he was pretty adamant that LOG NO was a bad idea. The driver behind the requirement was simply the sheer size of the logs being written by Db2 and the belief that writing less log would make applications faster.
How Many?
When you look at the history of Db2 you can see it started with between two and 31 active logs and between 10 and 1,000 archive logs. We all thought "that will be more than enough". Nowadays we have between two and 93 active and between 10 and 10,000 archives! Some shops rotate through their six-byte RBAs in 24 hours and go through thousands of logs, so we have grown up a little bit!
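To put that in perspective: a six-byte RBA addresses 2^48 bytes of log, so a shop that wraps it in 24 hours is writing log at an impressive rate. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope: what cycling a six-byte RBA in 24 hours implies.
rba_bytes = 2 ** 48          # a six-byte RBA addresses 2^48 bytes of log
TIB = 1024 ** 4

print(f"Log addressed by a six-byte RBA:  {rba_bytes / TIB:.0f} TiB")          # 256 TiB
print(f"Write rate needed to wrap in 24h: {rba_bytes / TIB / 24:.1f} TiB/hour") # ~10.7
```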
Before it Hits the Log…
Remember that there is another really important ZPARM that affects the log before the log is even hit: OUTBUFF. It started out at a value between 40K and 4000K and is now between 400(K) and 400000(K). Just set it to the highest value you can! Db2 will always look here first before even looking at the active log, so if the data is in the output buffer it is much faster than VSAM access!
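As a toy illustration of why a big output buffer matters (this is not Db2's actual implementation, just a sketch of the idea), consider a buffer that satisfies reads from memory and only falls back to dataset access for records that have already been spilled:

```python
# Toy illustration (not Db2 internals): reads that still find their record
# in the in-memory output buffer avoid a dataset access entirely, so a
# bigger buffer means more "fast" reads.

from collections import deque

class LogOutputBuffer:
    """Keeps the most recent records in memory, conceptually like OUTBUFF."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()       # (rba, record) pairs, oldest first
        self.externalized = {}      # simulates records written to the active log

    def write(self, rba, record):
        self.buffer.append((rba, record))
        if len(self.buffer) > self.capacity:
            old_rba, old_rec = self.buffer.popleft()
            self.externalized[old_rba] = old_rec   # spilled to "VSAM"

    def read(self, rba):
        for r, rec in self.buffer:
            if r == rba:
                return rec, "buffer hit (fast)"
        return self.externalized[rba], "dataset read (slow)"

buf = LogOutputBuffer(capacity=3)
for i in range(5):
    buf.write(i, b"log record")
print(buf.read(4)[1])   # recent record -> buffer hit (fast)
print(buf.read(0)[1])   # old record    -> dataset read (slow)
```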
Bottleneck?
Some customers were convinced that one of the areas slowing down Db2 was the log write and the externalization of the data within it. Roger was 100% against this, but even he lost this battle, and so the LOG NO space was born. However, if you ever do a ROLLBACK, the space goes into COPY-pending status, which is horrible!
Checkpoint Charlie
The number of system checkpoints also has a direct impact on log size and usage. You must decide whether you wish to go time-based (the IBM-recommended interval is three minutes) or transaction-based – or even a mix of the two methods. I am in favour of keeping my life simple, so I would always recommend the checkpoint-every-three-minutes rule. You could argue that at "quiet times" overnight too many checkpoints will be taken, but I counter that argument with: "When was the last time you ever had a quiet time on your machine?"
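If you do fancy the mixed approach, the decision logic itself is simple enough. A sketch with illustrative thresholds (the three minutes comes from the rule above; the record count is purely made up):

```python
# Sketch of a hybrid checkpoint trigger: time-based OR log-activity-based,
# whichever fires first. Thresholds are illustrative, not recommendations
# beyond the three-minute rule discussed above.

import time

CHECKPOINT_SECONDS = 3 * 60          # the "every three minutes" rule
CHECKPOINT_LOG_RECORDS = 500_000     # hypothetical activity threshold

class CheckpointScheduler:
    def __init__(self):
        self.last_checkpoint = time.monotonic()
        self.records_since = 0

    def note_log_record(self):
        """Call once per log record written."""
        self.records_since += 1

    def checkpoint_due(self):
        elapsed = time.monotonic() - self.last_checkpoint
        return (elapsed >= CHECKPOINT_SECONDS
                or self.records_since >= CHECKPOINT_LOG_RECORDS)

    def take_checkpoint(self):
        self.last_checkpoint = time.monotonic()
        self.records_since = 0
```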
Index Split?
Believe it or not, index splits seemed to take up some 55% of the Db2 Log at one shop I visited. We decided to try different index page sizes, including compression, and this all really helped in reducing the log load pressure. This had a direct effect on DASD usage, elapsed time and CPU time, which made it an all-round winner!
Crystal Ball?
So, how do you look into the Db2 Log? If you have no tooling, then you only get the IBM stuff, which is – shall we say – basic. It was here that I decided to write a little COBOL program, which I have called Db2 Archive Log Viewer for Db2 z/OS, that reads all of the Archive Logs (no VSAM – keep it simple!) and handles normal non-spanned log records, giving me a glimpse into what on earth Db2 was writing into the darn things!
Log HealthCheck
So what does Db2 Archive Log Viewer for Db2 z/OS then do? Well, it reads as many Archive Logs as you can give it and reports on the contents of the Logs in absolute numbers and in size. Using this data enables you to get a new view into what your Db2 system is actually doing and who is causing all your logs to be written.
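I can't reproduce the tool here, of course, but the heart of such a report is just an aggregation pass: read each record, classify it, and tally counts and bytes by type. A sketch in Python, with an invented record layout – the real Db2 log record header is considerably more involved:

```python
# Minimal sketch of the reporting idea: tally log records by type, counting
# occurrences and total bytes. The 4-byte length + 1-byte type header below
# is invented for illustration; real Db2 log records look quite different.

import struct
from collections import Counter

def tally_log(path):
    counts, sizes = Counter(), Counter()
    with open(path, "rb") as f:
        while header := f.read(5):
            if len(header) < 5:
                break                          # trailing partial record
            length, rec_type = struct.unpack(">IB", header)
            f.read(length)                     # skip the record payload
            counts[rec_type] += 1
            sizes[rec_type] += length + 5
    return counts, sizes

# Usage (hypothetical file name):
# counts, sizes = tally_log("archive.log")
# for t in sorted(counts):
#     print(f"type {t:3d}: {counts[t]:10d} records, {sizes[t]:14d} bytes")
```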
Surprise!
I was pretty surprised by what I saw inside my logs, and I hope you find it just as interesting to peer inside yours!
If you have any ideas or desires about log datasets, feel free to email me!
TTFN
Roy Boxwell