Application performance issues after moving more into SQL


    Hello,
    I'm observing more and more performance issues with our application, and they started to show up after we introduced a lot of SQL into our system. We've done a lot of optimization, but even after that we still have some problems, and I'm trying to understand which tools I can use to see whether the hard drives are our bottleneck, or maybe it is the CPU.

    The system setup (from the program/database perspective) is quite complicated. We've moved from DDS to SQL tables (most tables) and MQTs (materialized query tables, for the biggest tables), but the programs were not changed and most of them still use native I/O (with SQL underneath, because of RPG Open Access and the MQTs/indexes). I understand that we may have some CPU overhead because of this, but, for example, MQT updates/inserts are quite slow; we use the MQTs for read operations because CHAIN with SQL underneath was very slow without them. Of course, we use SQL triggers in this setup.

    We do not have an Easy Tier license, and from what I've read it could tell us whether we would benefit from SSDs. I've also found something about the SSD Analyzer (http://www-01.ibm.com/support/docvie...id=tss1prs3780), but only for 7.1, and it looks like it does not work on 7.3 (I could not see it as a collection in iNavigator).

    We have machine type 8286-41A (POWER8) with 3 LPARs and a few HDDs (RAID 5), plus a Storwize V5010 with a few more HDDs. Unfortunately we do not have VIOS, and LPAR 1 (the master) exposes the HDDs to the DEV and PROD LPARs. This of course should and will be changed, but to be honest I have no idea how much performance we might gain if we start using VIOS.

    Now I'm totally lost as to which tools/commands could tell me whether the response time of our HDDs is the weak point, i.e. whether we have slow read/write times because the CPU sits idle waiting for the disks (in which case we should buy a few SSDs for the hot files). We have Performance Tools, so I can use Collection Services / the performance monitor and then print one of the available reports, but there are so many of them. I guess the disk report is the one to use.
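    One thing I did find is that disk statistics seem to be exposed through SQL as well. This is only a sketch of what I've been trying; I'm assuming the QSYS2.SYSDISKSTAT catalog view (which should exist on 7.3), and the column names are from memory, so they may need checking against the documentation:

        -- Per-unit disk statistics; UNIT_TYPE 1 = SSD, 0 = spinning disk.
        -- A unit that stays near 100 percent busy would point at the
        -- drives, not the CPU, as the bottleneck.
        SELECT UNIT_NUMBER,
               UNIT_TYPE,
               PERCENT_USED,
               ELAPSED_PERCENT_BUSY
          FROM QSYS2.SYSDISKSTAT
         ORDER BY ELAPSED_PERCENT_BUSY DESC;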

    Do you have any advice on what I should check here?


    Filip

  • #2
    Is performance poor for everybody across all jobs, or just on certain jobs?

    You say you have a "few" HDDs - what does that mean? The number of drives can affect your disk performance.

    From the green screen: WRKDSKSTS shows the disk activity, and WRKSYSSTS is also important for knowing whether you're experiencing a lot of memory faults. Start WRKSYSSTS, press F10 to reset the stats to 0, wait about 5 minutes, then press F5. What are the fault rates for each of the memory pools?
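    If you prefer SQL over the 5250 screens, the pool statistics are supposedly also available as a catalog view. A rough sketch, assuming QSYS2.MEMORY_POOL_INFO exists on your release and that these column names are right (both worth verifying before relying on it):

        -- Fault rates per memory pool. Sustained high faulting in the
        -- pool your SQL jobs run in points at memory pressure rather
        -- than disk.
        SELECT POOL_NAME,
               CURRENT_SIZE,
               ELAPSED_DATABASE_FAULTS,
               ELAPSED_NON_DATABASE_FAULTS
          FROM QSYS2.MEMORY_POOL_INFO;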

    If it's certain applications, you can get into Navigator and check the index advisor; it might be a matter of creating a handful of indexes. You can also query the advisor directly, as in the sketch below.
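    The advisor data lives in a catalog table, so you can pull the most frequently advised indexes without the GUI. A sketch, assuming QSYS2.SYSIXADV and with MYLIB as a placeholder for your application library:

        -- Most frequently advised indexes for one library.
        SELECT TABLE_SCHEMA,
               TABLE_NAME,
               KEY_COLUMNS_ADVISED,
               TIMES_ADVISED,
               LAST_ADVISED
          FROM QSYS2.SYSIXADV
         WHERE TABLE_SCHEMA = 'MYLIB'
         ORDER BY TIMES_ADVISED DESC
         FETCH FIRST 20 ROWS ONLY;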

    • #3
      We had performance issues on our POWER7, and the biggest improvement on larger queries came when we restricted the memory for SQL with identical min and max values. The problem was that SQL wants to use all the memory it can get, and optimizes for that, but then loses available memory as it runs. This causes thrashing. I decided on a specific percentage and assigned that to SQL. Dramatic improvement (24 hours down to 1 hour in some cases).
      When we upgraded to POWER8, we bought all new SSDs and saw another noticeable improvement.

      • #4
        Arrow,

        How did you go about setting a min and max memory allotment for SQL?

        Thanks in advance.

        Walt

        • #5
          I assigned the subsystem that runs SQL in batch to a new memory pool - in my case it is Shared1. I assigned 25% of the configured memory, then on the tuning page I set min = 25% and max = 25%.
          This way I can leave the tuner running, but this pool never changes size. The amount you should assign varies with each system and its workloads, but the idea is to keep it consistent.

          • wegrace commented:
            Thanks!

            Walt

          • Rocky commented:
            Makes sense - group like jobs into their own memory pool...

        • #6
          Check your programmers' work and make sure their SQL is performant. This may seem obvious, but there is so much that can go wrong here: an SQL statement can work perfectly yet be 30 times slower than a differently written statement that produces the same result set. Optimization is key, and looking at the access path the SQL engine uses is very helpful. I often ask DB2 to visually explain its chosen path and check that indexes are being used correctly. Something as simple as using a substring in an SQL statement might prevent the use of an index; in such cases it's better to put the substring into a (local) RPG variable and use that variable in the SQL, as shown below. (This can easily make a factor of 200+ difference!!)
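          To make the substring point concrete, here is the kind of rewrite I mean (table and column names are made up for the example):

              -- Scalar function applied to the column: the optimizer
              -- generally cannot use an index over CUSTNO here, so this
              -- tends to force a table scan.
              SELECT * FROM ORDERS WHERE SUBSTR(CUSTNO, 1, 3) = 'ABC';

              -- Same rows, but the column is left untouched, so an index
              -- over CUSTNO can be used.
              SELECT * FROM ORDERS WHERE CUSTNO LIKE 'ABC%';

          The same idea applies when the substring is built on the RPG side: compute it into a host variable once and use the plain variable in the statement.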
          Another thing we noticed here, something that was slowing the system down in the past, was users sending numerous requests instead of patiently waiting for SQL to finish. Our users had a tendency to press whichever key loaded the data every 0.05 seconds until they saw a result. So we gave them a loading screen, and it helped! lol

          Best of luck.

          • #7
            I would also look at the way your applications are developed. If it performed well before you switched to SQL, I'd expect your performance should not suffer.
            I'd pick some of your offending SQL statements, run/explain them through Run SQL Scripts, and see how they look. It sounds like your queries may be reading too many rows; you may need to either modify the query or add indexes, as others have suggested and as sketched below.
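            If the explain shows a table scan over a big file, the fix is often just the index the advisor suggests. Something like this, where the library, table, and columns are only an example:

                -- Example only: an index over the columns the slow query
                -- selects and orders on.
                CREATE INDEX MYLIB.ORDERS_IX1
                    ON MYLIB.ORDERS (CUSTNO, ORDER_DATE);

            Then rerun the statement through Run SQL Scripts / Visual Explain and confirm the new index actually gets picked up.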
            I'd rule this issue out before looking for hardware solutions.
