Oracle db file sequential read p1 p2 p3

That has got to hurt, no? It was executed hundreds of thousands of times, at roughly four IOs per execute: probably three IOs through the index and one IO to get to the table. Do that hundreds of thousands of times -- very, very slow. Alex, August 15, - am UTC. That little query ran like that because there were no index statistics on the table.

But I think I get the gist of it. I think I misled you; I'm not used to conversing with someone as precise and exact as you are. Perhaps I should have said "recursive", because the trouble queries do not appear in the overall, bigger query created by Crystal Reports. I wish I could post the actual monstrosity that Crystal creates for you; it's code-generated, so you cannot edit it. It doesn't even look like SQL, and it's impossible to troubleshoot.

I'm amazed at what a few missing indexes can do, though. I checked a couple of waits on sequential read and found that those indexes were all created in one index file, even though for each index I have a couple of files on different mount points. Will it help if the data is spread out across different files on different mount points? August 17, - pm UTC. We're running into some performance problems. The following is a partial output of the raw trace file.

The wait on "db file sequential read" actually continues for several more pages. I have a few questions on the trace file: 1. Oracle was parsing, binding, executing, and fetching for one cursor, then all of a sudden it started waiting on 'db file sequential read' for another cursor. Is this because the system switches from processing one SQL to another, or is the raw trace file not sorted by time? 2. Should it be a concern that there are so many waits on "db file sequential read" for that cursor?

That does not explain to me why there are waits. Am I interpreting the output correctly? How should I proceed with the troubleshooting? September 22, - pm UTC. This just means you are doing physical IO. If you can find a way to avoid it, great; but if you cannot, it is the price of admission: you have to read the data in at some point. Tom, I have a procedure which is taking a very long time to complete. How can I know what it is waiting for? The DBA says it means a full table scan and that I should create indexes to avoid it.
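To see what a session is actually waiting for, the usual approach (and the one Tom recommends throughout this thread) is to enable extended SQL trace with wait events. A minimal sketch; the sid and serial# values are placeholders for the session being traced:

```sql
-- In your own session (the event-based syntax also works on older releases):
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- On 10g and later, DBMS_MONITOR is the supported interface
-- (123 and 456 are placeholder sid / serial# values):
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => 123, serial_num => 456,
    waits => TRUE, binds => FALSE);
END;
/
```

The resulting trace file in user_dump_dest can then be summarized with tkprof, including the per-statement wait summaries discussed in this thread.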

Please give your insight into it. Thanks in advance, Praveen. October 22, - am UTC. Well, the procedure mentioned above is actually taking about 5 days to complete processing just 1. As you suggested, I enabled tracing for about 10 minutes. The relevant parts of the trace information are shown below. The above trace shows exactly the same. But the query is not a dynamic query, and bind variables are in place; we can see this from the low library cache misses. Why do so many soft parses happen?

If we look into the full trace file, this is exactly what happens in almost all of the other queries in the procedure. Is the query a badly written one?

(tkprof wait summary: 'db file sequential read' was waited on 5 times; the Max. Wait and Total Waited figures were lost in formatting.) How does the fetch count become so much greater than the rows fetched? Although no bulk operation is done here, this ratio should be nearer to 1 in the worst case, right? Tom, could you please shed some light on these issues? This will help us understand much more about how Oracle works. Best regards. October 24, - am UTC. Also, there is no way we can modify the source table data.

The source tables are actually materialized views. If we keep the data in them after trimming, it will become impossible to do a fast refresh from their source tables. Of course the bitmap indexes are created on the materialized views, which are the source tables here, and the refresh will be slow. In fact, the cardinality of the columns on which the bitmap indexes are created is so low that the explain plan output showed a much better cost than with normal indexes.

These explain plans were not obtained at runtime. The application is taking about 1 second to process just 3 or 4 records. Do you think that repeated soft parsing is the main reason for such terrible performance? If not, what else should I look for? October 25, - am UTC. I've never looked at the cost that way. Sorry, it's a mistake and ignorance on my side.

Actually, they are different tables. I am working on Oracle 10g R1 on Red Hat Linux. If the plan cost is not the way to measure the performance of a query, then how does one estimate it?

Execute it with timing on? Then how is explain plan useful? What other areas of the trace file should I look at for possible clues to the bad performance? Can the DBA be of any help in finding them? Best regards. October 26, - am UTC. You can ask yourself "why am I executing this SQL over and over again -- did I do something procedurally that I should have done in a single query?" and tune the algorithms. You can look for the "low-hanging fruit": the queries that take the most time.
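One quick way to find that low-hanging fruit is to let tkprof sort the statements by elapsed time. A sketch; the trace and report file names are hypothetical:

```
tkprof ora_1234.trc report.txt sort=exeela,fchela sys=no
```

The exeela and fchela sort keys order statements by elapsed time spent executing and fetching, so the most expensive SQL appears at the top of the report.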

Tom, I'm looking at something that is very strange, at least to me. (tkprof wait summary: 'db file scattered read' was waited on 18 times; the timing figures were lost in formatting.) Could it be due to db parameters making the index more attractive than a full scan? Thanks in advance. December 09, - pm UTC. Sorry, I don't follow you. I was using tkprof with a level 8 trace. I see a 'WAIT #50' line for db file scattered read once, and then 'WAIT #50' lines for db file sequential read the rest of the time.

Am I missing something? December 10, - am UTC. December 14, - pm UTC. This is what I am seeing when the job actually finishes: (wait summary: 'db file sequential read'; the figures were lost in formatting.) Will send the file to thomas. The size of the file zipped up is 35 MB, so if your email account will not handle that, do you have another way to send the file? Thanks, Brian. Tom, this was executed on my production box; no other jobs were running except my batch job. I turned on extended tracing on a SID and, among other things, found the slow timing on the select below.

Does this tell me I have a slow disk problem? A max wait that long -- is this strictly disk related, or could there be something else? Thank you. February 11, - pm UTC. Slow memory? A reader, March 10, - pm UTC. Tom, we're investigating the apparent slow-disk issue that seems to be causing the very high db file sequential read waits I reported in the previous message.

What if it turns out my server has slow memory? What wait events would crop up? March 10, - pm UTC. We don't really "wait" for memory; it just "happens". The CPU waits for memory; we use the CPU. So, since Oracle doesn't "wait" for memory: given two servers A and B with the same software and hardware configurations, except that B's memory is slower than A's, extended SQL trace doesn't help me identify why B is slower than A. How about Statspack?

If I run the same job on A and B, B should be slower due to the slow memory. In-memory sorts should be slower, right? Would the slowdown be reflected in Statspack? If not, where? March 13, - pm UTC. How is it possible? How can an index scan show waits on "db file scattered read"?

Normally, sessions wait on "db file scattered read" while doing a full table scan or an index fast full scan, so why in this case is it waiting on db file scattered read? The DB version is . Regards, VLS. March 17, - pm UTC. That is a rather nasty view, normally.

Otherwise the query just shows what an active process last waited on. There may not have been any sequential read waits for the current query because most of the blocks were already in the cache. March 19, - am UTC. Now we have two tables, Parent and Child. During the load process we concatenate the files containing the data and load the data pertaining to the parent table; then we load the child data, and each record is validated as having a parent record.
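For reference, the p1/p2/p3 columns this thread's title refers to decode, for 'db file sequential read', as file#, block#, and blocks read (1 for a single-block read). A sketch of reading the current wait and mapping it to a segment; note that the DBA_EXTENTS lookup can itself be quite slow on a large database:

```sql
-- What is the session waiting on right now?
-- For 'db file sequential read': p1 = file#, p2 = block#, p3 = blocks
SELECT sid, event, p1 AS file_id, p2 AS block_id, p3 AS blocks
FROM   v$session_wait
WHERE  event = 'db file sequential read';

-- Map that file# / block# pair to the object being read:
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = :p1
AND    :p2 BETWEEN block_id AND block_id + blocks - 1;
```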

The datafiles for tables and indexes are distributed across 9 devices shared by 3 device controllers, one controller managing 3 disks.

SGA buffer cache: 4. Still, your advice will be appreciated! Thanks in advance. April 11, - am UTC. Hi Tom, my point was that the expected wait on db file sequential read was just the same CPU cycles Oracle spent executing the task. If the CPU time spent is 'x' secs, then the elapsed time was '2x', and the waits listed below in the trace all contribute to a single wait event, 'db file sequential read'. Now, we are loading the data in conventional mode simply because we are not sure whether the child record will reach us first in a file via the application server's queue, or the parent record.

We need to maintain RI throughout the database. Hence we load the parent records for that hour and then load the child records for that hour. This process continues every hour.

All the child records for which we don't receive a parent get rejected, and we write them to another file to be loaded again at the end of the day. If we try to use direct path load, we won't have any time window in which to re-enable RI, since this process just keeps going on an hourly basis. Please suggest any other ideas you have that could improve the performance of the load.
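Under those constraints, one alternative to row-by-row validation is to handle each hourly child file set-wise: load it into a staging table, insert the rows that already have a parent, and park the orphans for the end-of-day retry. A sketch with hypothetical table and column names:

```sql
-- Children whose parent has already arrived:
INSERT INTO child (child_id, parent_id, payload)
SELECT s.child_id, s.parent_id, s.payload
FROM   child_stage s
WHERE  EXISTS (SELECT NULL FROM parent p
               WHERE  p.parent_id = s.parent_id);

-- Orphans, kept aside to be retried at the end of the day:
INSERT INTO child_rejects (child_id, parent_id, payload)
SELECT s.child_id, s.parent_id, s.payload
FROM   child_stage s
WHERE  NOT EXISTS (SELECT NULL FROM parent p
                   WHERE  p.parent_id = s.parent_id);
```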

April 12, - am UTC. Tom, this process, which runs once a month, is taking about 21 hours. I am including the part of the code which I have identified as the low-hanging fruit. Can you offer some suggestions here? (wait summary: 'db file sequential read'; the figures were lost in formatting.)

(tkprof output for the query against VISIT: the call / count / cpu / elapsed / disk / query / current / rows figures were lost in formatting.) May 04, - am UTC. You have written some slow-by-slow code (row-by-row processing); you will have to review your algorithm to see where you can optimize it. Tom, this is the SQL that gets executed within the bulk collect; how can I have it "joined", as you put it? May 04, - pm UTC. I cannot look at a single procedure in isolation and tell you how to fix it; the join probably needs to happen at the layer above this procedure, so that this procedure isn't really even called.
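The "slow by slow" pattern Tom describes, and the joined alternative, can be sketched like this (the table and column names are hypothetical, not the poster's actual code):

```sql
-- Slow by slow: one indexed lookup (one 'db file sequential read'
-- pattern) per row, repeated for every row of the driving table.
BEGIN
  FOR r IN (SELECT visit_id FROM visit) LOOP
    INSERT INTO visit_summary (visit_id, descr)
      SELECT d.visit_id, d.descr
      FROM   visit_detail d
      WHERE  d.visit_id = r.visit_id;
  END LOOP;
END;
/

-- The same work done once, as a single joined statement:
INSERT INTO visit_summary (visit_id, descr)
SELECT d.visit_id, d.descr
FROM   visit v, visit_detail d
WHERE  d.visit_id = v.visit_id;
```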

A reader, May 12, - pm UTC. How can I do a SELECT INTO when I have two variables? May 12, - pm UTC. Mark S. Wooldridge, May 12, - pm UTC. The initial question and response just show the depth of Tom's knowledge.
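For the SELECT INTO question just above: a single SELECT ... INTO can populate several variables at once by listing them after INTO in column order. A sketch against a hypothetical emp table:

```sql
DECLARE
  l_ename emp.ename%TYPE;
  l_sal   emp.sal%TYPE;
BEGIN
  SELECT ename, sal
  INTO   l_ename, l_sal   -- one INTO, variables comma-separated
  FROM   emp
  WHERE  empno = 7788;    -- the query must return exactly one row
END;
/
```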

Most know that IO is critical to performance and scalability. Furthermore, the SQL statement that is currently running may or may not be the one that is responsible for the waits. This is why interactive diagnosis without historical data is often unproductive.

Due to these limitations, you may have to identify and trace the session the next time around to nail down the offending SQL statement.

Once you have found it, the optimization goal is to reduce the amount of physical and logical reads. You should expect an average wait time of 4 to 8 ms (0.004 to 0.008 seconds) per single-block read. The higher the average wait time, the costlier it is to perform a single-block read, and the overall process response time will suffer. On the other hand, a lower average wait time is more forgiving and has less impact on the response times of processes that perform a lot of single-block reads. We are not encouraging you to improve the average wait time as a way of avoiding SQL optimization.
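The system-wide average for this event can be checked directly. Note that V$SYSTEM_EVENT reports its times in centiseconds, so an average_wait of 0.4 to 0.8 corresponds to the 4 to 8 ms figure:

```sql
-- time_waited and average_wait are in centiseconds (1/100 s)
SELECT event, total_waits, time_waited, average_wait
FROM   v$system_event
WHERE  event = 'db file sequential read';
```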

Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are really necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since blocks read by a full table scan are placed at the cold end of the LRU (Least Recently Used) list.
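Two common ways to keep such small, frequently scanned tables warm in the buffer cache; the table name is hypothetical, and the KEEP pool variant assumes a KEEP buffer pool has been configured:

```sql
-- Place the table's full-scan blocks at the hot end of the LRU list:
ALTER TABLE small_lookup CACHE;

-- Or assign the table to a dedicated KEEP buffer pool:
ALTER TABLE small_lookup STORAGE (BUFFER_POOL KEEP);
```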

(A diagnostic query was shown here; only the fragments "FROM sys." and "WHERE a." survived the formatting.) If an application that has been running fine suddenly starts showing the db file scattered read event, this could be an index issue. One or more indexes may have been dropped or become unusable. To determine which indexes have been dropped, the DBA can compare the development, test, and production databases.

Certain partitioning operations can also cause indexes to be marked unusable. This includes adding a new partition or coalescing partitions in a hash-partitioned table, dropping a partition from a partitioned table or global partitioned index, modifying partition attributes, and merging, moving, splitting or truncating table partitions.

A direct load operation that fails also leaves indexes in an unusable state. This can easily be fixed by rebuilding the indexes.
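Finding and fixing such indexes can be sketched as follows; the index name in the rebuild is hypothetical:

```sql
-- Indexes and index partitions left unusable:
SELECT owner, index_name, status
FROM   dba_indexes
WHERE  status = 'UNUSABLE';

SELECT index_owner, index_name, partition_name
FROM   dba_ind_partitions
WHERE  status = 'UNUSABLE';

-- Rebuild to make the index usable again:
ALTER INDEX app_owner.orders_ix REBUILD;
```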
