We've been working with IBM support on an issue with a slow-running query. At least, the first run is slow (about 3 minutes); the second run completes in about 10 seconds. I know it's normal for the second run to be faster, but not *that* much faster.
Anyway, after a lot of traces and suggestions from IBM to create new views, and then to re-create those views with a 64K page size, I finally asked whether re-creating all of the views (actually DDS-defined logical files) over the three physical files used by the query with a 64K page size would make a difference. They said: give it a shot. So I did, and wow! The run time of the query improved dramatically, as did non-query workloads that use these three files. (There are roughly 80 logical files over the three physical files.)
So now I'm wondering if it would make sense to re-create all of our logical files, specifying a 64K page size. I asked IBM about any downsides, and support is saying: no downsides. However, Knowledge Center says (https://www.ibm.com/support/knowledg...pagesize.htm):
"Consider using the default of *KEYLEN for this parameter, except in rare circumstances. Then the page size can be determined by the system based on the total length of the keys. When the access path is used by selective queries (for example, individual key lookup), a smaller page size is typically more efficient. When the query-selected keys are grouped in the access path with many records selected, or the access path is scanned, a larger page size is more efficient."
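For anyone who hasn't tried this, here is roughly what a targeted re-creation looks like when done through SQL rather than DDS. This is only an illustrative sketch: the library, index, table, and column names are placeholders, and the exact PAGESIZE clause syntax should be verified against the Db2 for i SQL reference for your release.

```sql
-- Hypothetical example: rebuild one access path with an explicit 64K page size.
-- Omitting PAGESIZE gives the *KEYLEN-style default, where the system picks a
-- page size based on the total key length.
DROP INDEX MYLIB.CUSTIDX;

CREATE INDEX MYLIB.CUSTIDX
    ON MYLIB.CUSTOMER (CUSTNO)
    PAGESIZE 64;    -- 64K leaf pages: favors scans and wide-range selection,
                    -- at some cost to highly selective single-key lookups
```

Per the Knowledge Center guidance quoted above, the trade-off is workload-dependent: individual key lookups tend to prefer smaller pages, while scans and queries selecting many grouped keys benefit from larger ones.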
So my question: has anyone traveled this road? Were there any downsides from increasing PAGESIZE to 64K?
Thanks,
Emmanuel