SQL Profiler, née Trace
SQL Profiler is the main tool I use for optimization. You can read about it in BOL and in almost any SQL Server book, so I won't go into much detail here. An article about SQL Profiler can also be found on TechNet.
I like to set up the trace to log to a table so it can be quickly queried and searched. You can put the table on a different server from the one you are tracing. You set this up by checking "Capture to Table" at the bottom of the General tab in the Trace setup dialog, as you can see below:
[Figure: SQL 7.0 Trace Setup dialog]
[Figure: SQL 2000 Trace Setup dialog]
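If you saved the trace to a file instead of a table, you can still pull it into a table afterwards. A minimal sketch, using SQL 2000's fn_trace_gettable function; the file path and table name here are just examples:

-- Load a saved trace file into a table for querying
SELECT * INTO TraceTable
FROM ::fn_trace_gettable('c:\traces\mytrace.trc', default)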
Also, you will probably want to limit your trace to capture just your own activity so other users don't clog up your data. If I'm testing in Query Analyzer, I limit the trace to my own spid (SELECT @@spid) on the Filters tab of the trace setup. Alternatively, filtering by database, username, program, or hostname (your client PC) can be useful. Combining trace filters can be quirky and often doesn't yield the expected results, so check your output early. Go figure...
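If you forget to filter up front, you can still narrow the logged rows after the fact. A sketch, assuming the trace was captured to TraceTable with the SPID and StartTime columns included; the spid value 52 is just an example:

-- Show only my own session's activity from an unfiltered trace
SELECT CPU, TextData
FROM TraceTable
WHERE SPID = 52   -- substitute your own SELECT @@spid value
ORDER BY StartTime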
If you have a very long process with many different stored procedures, you may need to trace it twice. The first time, trace only stored procedure completion; this tells you which procedure(s) are slow. The second time, trace statement completion for the particular slow stored procedures, either by running them manually in testing or by filtering for the procedure name pattern (procName%) in the TextData filter. You can also filter for CPU > 100 (milliseconds, or your threshold of choice) to keep the logs a manageable size. This is also the method for investigating a server where you have no idea what the slow process is.
If running a long trace on a server for a day or so, you may want to filter on CPU > 100 and occasionally remove rows with NULL in the CPU column to keep the log table from growing too large.
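A minimal sketch of that cleanup, assuming the trace is logging to TraceTable:

-- Prune rows that carry no CPU figure (event classes that never report CPU)
DELETE FROM TraceTable
WHERE CPU IS NULL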
Trace Stored Procedures
First, trace just stored procedures to find which ones are slow.
Trace Statements Within Procedures
Then, trace statements inside the stored procedures to find which statements need to be optimized.
The trace table will have more than enough information to find the slow parts of your SQL code. You can select your answer straight from the trace table:
SELECT CPU, TextData
FROM TraceTable
ORDER BY CPU DESC
If statements were traced, this will be your final answer. If it's a procedure-only trace, you will have to run the suspect procedure(s) and trace the statements inside them.
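CPU isn't the only useful metric. A variant worth trying, assuming the Duration, Reads, and Writes columns were included in the trace:

-- Long duration with low CPU often points at blocking or heavy I/O
SELECT CPU, Duration, Reads, Writes, TextData
FROM TraceTable
ORDER BY Duration DESC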
If the trace captured stored procedure calls, you can dig deeper with an aggregate query:
select sum(cpu) 'sum cpu',
avg(cpu) 'avg cpu',
max(cpu) 'max cpu',
count(cpu) 'times run',
left(cast(textdata as varchar(128)),
isnull(nullif(charindex(' ', cast(textdata as varchar(5000))), 0), 255)) 'proc name'
from TraceTable
where cpu is not null
and textdata is not null
group by left(cast(textdata as varchar(128)),
isnull(nullif(charindex(' ', cast(textdata as varchar(5000))), 0), 255))
order by avg(cpu) desc
This query does a good enough job of extracting the stored procedure names from the full text (taking everything up to the first space strips the parameters; the nullif/isnull pair handles calls that have no parameters at all) and shows which ones are slow, how often they run, and so on. Once I find the slow ones, I extract the full text from the trace table so I have examples to test with:
select distinct
left('exec ' + rtrim(cast(textdata as varchar(5000))), 145),
cpu
from TraceTable
where cpu is not null
and textdata like 'procName%'
order by cpu desc
A simple way to get the most benefit for the least cost is, of course, to fix the longest-running queries first.
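Keep in mind that a moderately slow procedure that runs thousands of times can cost more overall than a very slow one that runs once. Reusing the aggregate query above but ordering by total CPU surfaces those cases; a sketch, assuming the same TraceTable:

-- Rank by total CPU consumed rather than per-call CPU
select sum(cpu) 'sum cpu',
count(cpu) 'times run',
left(cast(textdata as varchar(128)),
isnull(nullif(charindex(' ', cast(textdata as varchar(5000))), 0), 255)) 'proc name'
from TraceTable
where cpu is not null
and textdata is not null
group by left(cast(textdata as varchar(128)),
isnull(nullif(charindex(' ', cast(textdata as varchar(5000))), 0), 255))
order by sum(cpu) desc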