QSNDDTAQ API lock situation

  • QSNDDTAQ API lock situation

    Hello Scott Klement,
    I would like to request your assistance in resolving an issue on our IBM i production system.

    Multiple batch jobs write to a single data queue (DTAQ) using the QSNDDTAQ API. While they are doing so, a lock situation occurs that delays the process.
    Sometimes we receive an error (CPF9503 - Cannot lock data queue; another job currently has exclusive use of this data queue).

    We are not sure whether one of the batch jobs locks the DTAQ while adding an entry. If so, is there a way to write to the DTAQ without locking it?
    Also, is there a way to find out what is locking the DTAQ? So far we could not find anything with the WRKOBJLCK command – we suspect the lock is so quick that it is only held for a fraction of a second.

    Is there anything that needs to be taken into account when writing to a DTAQ from parallel jobs?

    Prototype used:
    D* QSNDDTAQ: data queue name, library, data length, data to send
    D SendData        PR                  ExtPgm('QSNDDTAQ')
    D  Dtaqnam_                     10a   const
    D  Dtaqlib_                     10a   const
    D  Dtaqlen_                      5p 0 const
    D  Data_                     32777a   const
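
    For context, a minimal free-format sketch of how each batch job calls this prototype is below; the queue name ORDERSQ, library MYLIB and the message text are illustrative placeholders, not our real names:

    // Sketch only: one entry sent per call; several jobs may run this
    // concurrently against the same DTAQ.
    dcl-s qData    char(32777);
    dcl-s qDataLen packed(5:0);

    qData = 'example payload for the consumer jobs';
    qDataLen = %len(%trimr(qData));

    SendData('ORDERSQ' : 'MYLIB' : qDataLen : qData);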

    Please advise. Thank you!

  • #2
    I would like to add a couple more things:

    In various batch jobs we also call two other APIs against the same DTAQ: QMHQRDQD to check the number of entries, and QRCVDTAQ to consume them. Could either of these cause a lock while we are writing with QSNDDTAQ?
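
    For reference, the consuming jobs use roughly the prototype below for QRCVDTAQ (free-format sketch; parameter names are ours, and only the required parameters are shown – the optional key and sender-ID groups are omitted):

    dcl-pr RcvData extpgm('QRCVDTAQ');
       dtaqNam  char(10)    const;      // data queue name
       dtaqLib  char(10)    const;      // library
       dataLen  packed(5:0);            // length of data received (output)
       data     char(32777);            // receiver for the entry (output)
       waitTime packed(5:0) const;      // seconds to wait; <0 waits forever
    end-pr;

    // Example: wait up to 30 seconds for the next entry
    dcl-s rcvLen packed(5:0);
    dcl-s rcvBuf char(32777);
    RcvData('ORDERSQ' : 'MYLIB' : rcvLen : rcvBuf : 30);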

    On the IBM website (https://www.ibm.com/support/knowledg...s/qmhqrdqd.htm) we noticed the information below about a cache. Where is this cache located? A replication process already excludes the DTAQ, but we are not sure whether the cache has to be excluded as well.
    Internally, when a job uses API QSNDDTAQ (Send Data Queue), QRCVDTAQ (Receive Data Queue), QMHQRDQD (Retrieve Data Queue Description), or QMHRDQM (Retrieve Data Queue Message), a cache is created to allow faster access to the data queue. An entry in the cache means a user is authorized to the data queue. An entry is added to the cache when a user calling one of the APIs has the required authority to the data queue. An entry is also added to the cache when QSNDDTAQ is called to handle a journal entry for a data queue created with the sender ID attribute set to *YES, and the user requesting the send function has the required authority to the current profile name in the sender ID information of the journal entry. The data in the cache is used until the job ends, so if you need to immediately change a user's authority to one of these objects, you may need to end that user's jobs.

    • #3
      We have a similar setup to yours, where batch processes wait on DTAQ entries that can come from various sources, and we've never experienced this issue. Any locking of the DTAQs would normally still allow updates, unless the system momentarily does something extra while making changes.
      I did a little test and, to my surprise, found that even if I allocate a DTAQ *EXCL, the send and receive still seem to work; it's only the retrieve-data-queue-description call that doesn't. Surprisingly, this doesn't seem to honour the usual LCKW timeouts either. The big question is: why do you call the QMHQRDQD API? You mention you call it to check the number of entries – why do you need to do that?
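
      For what it's worth, the test was roughly the sketch below (free-format; queue/library names are placeholders, and the prototypes show only the required parameters). With the lock held in another job, the send (and receive) went through, but the description retrieval failed straight away:

      // In a second job, hold the queue exclusively first, e.g.:
      //   ALCOBJ OBJ((MYLIB/ORDERSQ *DTAQ *EXCL)) WAIT(0)

      dcl-pr sndDtaq extpgm('QSNDDTAQ');
         qNam char(10)    const;
         qLib char(10)    const;
         qLen packed(5:0) const;
         qDta char(100)   const;
      end-pr;

      dcl-pr rtvDtaqD extpgm('QMHQRDQD');
         rcv    char(128);
         rcvLen int(10)  const;
         fmt    char(8)  const;
         qualNm char(20) const;   // queue name (10) + library (10)
      end-pr;

      dcl-s desc char(128);

      // Sending still works even while the *EXCL lock is held...
      sndDtaq('ORDERSQ' : 'MYLIB' : 10 : 'test entry');

      // ...but retrieving the description fails immediately,
      // without waiting for the usual LCKW timeout.
      rtvDtaqD(desc : %size(desc) : 'RDQD0100' : 'ORDERSQ   MYLIB');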

      As for the caching, the documentation seems to be very specifically about authorities, so this looks like an authority cache and would have nothing to do with locking the DTAQ.

      • #4
        Thank you, John, for your response. The QMHQRDQD API is called to check whether the number of entries in the DTAQ has reached a set limit (say 1000); if it has, we increase the number of jobs that consume data from the DTAQ. We noticed that the error occurs specifically while trying to send data to the queue. We are working on getting the details of the lock from the PSDS.
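
        For what it's worth, the check is roughly the sketch below (free-format; names are placeholders, and the field layout follows the RDQD0100 format on the page linked in #2 – worth double-checking the offsets on your release):

        dcl-ds rdqd0100_t qualified template;
           bytesRtn    int(10);
           bytesAvl    int(10);
           msgLen      int(10);
           keyLen      int(10);
           sequence    char(1);
           senderId    char(1);
           forceInd    char(1);
           text        char(50);
           dtaqType    char(1);
           autoReclaim char(1);
           reserved    char(1);
           numMsgs     int(10);         // entries currently on the queue
        end-ds;

        dcl-pr RtvDtaqDesc extpgm('QMHQRDQD');
           receiver    likeds(rdqd0100_t);
           receiverLen int(10) const;
           format      char(8) const;
           qualName    char(20) const;  // queue name (10) + library (10)
        end-pr;

        dcl-c ENTRY_LIMIT 1000;

        dcl-ds desc likeds(rdqd0100_t) inz;

        RtvDtaqDesc(desc : %size(desc) : 'RDQD0100' : 'ORDERSQ   MYLIB');

        if desc.numMsgs >= ENTRY_LIMIT;
           // threshold reached: submit additional consumer jobs here
        endif;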

        • #5
          FYI: We received clarification from IBM on this – the contention is caused by space location locks, which the operating system uses internally for synchronization. The suggested solution is to monitor for CPF9503 and retry, or to increase the job's default wait time.
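
          For anyone who wants to try the retry approach, a minimal free-format sketch is below (queue/library names and the retry count are placeholders; DLYJOB is issued through QCMDEXC just to pause between attempts):

          dcl-pr SendData extpgm('QSNDDTAQ');
             dtaqNam char(10)    const;
             dtaqLib char(10)    const;
             dtaqLen packed(5:0) const;
             data    char(32777) const;
          end-pr;

          dcl-pr RunCmd extpgm('QCMDEXC');
             cmd    char(100)    const;
             cmdLen packed(15:5) const;
          end-pr;

          dcl-s qData    char(32777);
          dcl-s qDataLen packed(5:0);
          dcl-s attempt  int(10);
          dcl-s sent     ind inz(*off);

          qData = 'example payload';
          qDataLen = %len(%trimr(qData));

          // Retry a few times if the send fails (e.g. CPF9503 from the
          // internal space location lock), pausing one second in between.
          dow not sent and attempt < 5;
             attempt += 1;
             monitor;
                SendData('ORDERSQ' : 'MYLIB' : qDataLen : qData);
                sent = *on;
             on-error;
                RunCmd('DLYJOB DLY(1)' : 13);
             endmon;
          enddo;

          if not sent;
             // still failing after the retries - escalate/log as appropriate
          endif;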

          I believe this information may help others in the group who run into a similar situation.
