Our daily expdp backup job suddenly became abnormally slow: it would run for more than one day, while in the past it took only about 40 minutes.
From the v$session view I found that the expdp-related sessions were waiting on the event 'Streams AQ: enqueue blocked on low memory', and the export speed was extremely slow even when the exported tables were empty.
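For reference, this is roughly the kind of check I ran against v$session (the column list is only illustrative):

    -- Sessions currently stuck on the low-memory enqueue wait
    SELECT sid, serial#, program, event, seconds_in_wait
      FROM v$session
     WHERE event = 'Streams AQ: enqueue blocked on low memory';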
I searched and found some relevant notes.
The key information I got was this: if the size of the streams_pool is being modified, this can result in excessive waits for 'Streams AQ: enqueue blocked on low memory'.
Every occurrence of this event causes a one-minute delay.
The suggested actions that could be executed easily were:
- Explicitly set streams_pool_size to a fixed (large enough) value, 150 MB, or 300 MB if needed. We had a 40 GB SGA, so I set it to 500 MB (see the sketch after this list).
- Alter system set "_disable_streams_pool_auto_tuning"=TRUE. We were going to modify it, but when I checked I found the value was already TRUE, so in fact there was no need to change it.
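A minimal sketch of what this looked like in our case; the SCOPE clause is illustrative, and the hidden-parameter check is the usual x$ksppi/x$ksppcv join, which assumes you are connected as SYS:

    -- Action 1: pin the streams pool to a fixed size (500M was our choice for a 40 GB SGA)
    ALTER SYSTEM SET streams_pool_size = 500M SCOPE = BOTH;

    -- Action 2: read the hidden parameter first; in our case it was already TRUE
    SELECT i.ksppinm AS param_name, v.ksppstvl AS param_value
      FROM x$ksppi i, x$ksppcv v
     WHERE i.indx = v.indx
       AND i.ksppinm = '_disable_streams_pool_auto_tuning';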
Three patches were mentioned, and I found that all of them were already included in the latest PSU, so maybe I only needed to restart the instance to see the result.
I could not restart it because it is a production database, so I kept searching and found more notes.
They contained two important pieces of information (both statements are shown together after this list):
- Select shrink_phase_knlasg from X$KNLASG; this returns 1 while the streams pool is still shrinking and 0 once the shrink has finished.
- Alter system set events 'immediate trace name mman_create_def_request level 6'; this forces the streams pool shrink to complete.
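Put together, the two statements from the notes look like this (X$KNLASG is only visible when connected as SYS):

    -- Step 1: 1 means the streams pool shrink is still in progress, 0 means it has finished
    SELECT shrink_phase_knlasg FROM x$knlasg;

    -- Step 2: force the pending streams pool shrink to complete
    ALTER SYSTEM SET EVENTS 'immediate trace name mman_create_def_request level 6';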
I ran the statement from step 1 and indeed got the value 1. I was surprised, since we had set the streams pool to 500 MB one day earlier and the shrink was still in progress.
So I ran the statement from step 2 to force it to complete, and after that the query began to return 0.
I checked the wait events after that and could not find that wait event any more, so I assumed the issue was fixed. I tailed the log of the expdp job and found it had already finished!
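If you want to double-check afterwards, something like the following should show no sessions stuck on the wait and no Data Pump job still executing (adjust for your environment):

    -- Should now return no rows
    SELECT sid, event
      FROM v$session
     WHERE event = 'Streams AQ: enqueue blocked on low memory';

    -- The expdp job should no longer appear as EXECUTING
    SELECT owner_name, job_name, state
      FROM dba_datapump_jobs;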