Sunday, 27 September 2015

PERFORMANCE TUNING IN PLSQL


USE TRACE FACILITIES TO GATHER RAW DATA ABOUT PROGRAM PERFORMANCE:
1. PLSQL PROFILER:
It will calculate the time that your program spends on each line and in each subprogram. These runtime statistics are then written to a set of database tables.
The DBMS_PROFILER package is then used to extract meaningful information from those statistics.
Example Code:
DECLARE
   profile_id PLS_INTEGER;
BEGIN
   DBMS_PROFILER.START_PROFILER (run_comment => '<profiler description>',
                                 run_number  => profile_id);
   my_application_code;
   DBMS_PROFILER.STOP_PROFILER ();
END;
Note:
You do not have to make any changes to your code to use this profiler; you do, however, need to start and stop profiling with the DBMS_PROFILER package.
2. PLSQL TRACE (DBMS_TRACE):
  • Use the DBMS_TRACE package to control the tracing of subprogram execution in your application (a minimal sketch follows this list).
  • You can also trace exceptions, SQL execution, and individual lines.
  • Traced information is written to a set of tables owned by SYS and created by the tracetab.sql script located in the $ORACLE_HOME/rdbms/admin directory.
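A minimal sketch of turning tracing on and off around a test run; my_application_code is a placeholder for your own program, and the constants shown are the standard DBMS_TRACE trace levels.
BEGIN
   -- Trace every call and every exception for this session
   DBMS_TRACE.SET_PLSQL_TRACE (DBMS_TRACE.trace_all_calls + DBMS_TRACE.trace_all_exceptions);

   my_application_code;   -- placeholder for the code you want to trace

   -- Stop tracing; results are now in the SYS trace tables (e.g. PLSQL_TRACE_EVENTS)
   DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;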
3. PLSQL HIERARCHICAL PROFILER (DBMS_HPROF) – 11G
It profiles program execution, but stores the data in a way that clearly reveals the hierarchical call stack of the application.
Note:
  • DBMS_PROFILER tracks information down to the individual line, making it difficult for programmers to see the performance of distinct units of code (procedures and functions).
  • The new DBMS_HPROF package, on the other hand, lets you see the performance of these units at a higher level (a sketch follows).
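A minimal sketch, assuming a directory object named PROFILE_DIR already exists and the hierarchical profiler analysis tables have been created (dbmshptab.sql); the file name and the profiled call are illustrative.
DECLARE
   l_runid NUMBER;
BEGIN
   DBMS_HPROF.START_PROFILING (location => 'PROFILE_DIR', filename => 'hprof_test.trc');
   my_application_code;   -- placeholder for the code you want to profile
   DBMS_HPROF.STOP_PROFILING;

   -- Load the raw trace file into the DBMSHP_* analysis tables
   l_runid := DBMS_HPROF.ANALYZE (location => 'PROFILE_DIR', filename => 'hprof_test.trc');
   DBMS_OUTPUT.PUT_LINE ('Hierarchical profiler run id: ' || l_runid);
END;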
4. APPLICATION DATA PROFILER (DBMS_APPLICATION_INFO):
  • Use the DBMS_APPLICATION_INFO package to trace the execution of specific subprograms and/or portions of programs.
  • The output from this package is written to a variety of V$ views, which means that you can review and analyze trace information while the application is still running.
  • This package is especially helpful in analyzing long running programs.
Note:
You will have to make changes to your code to take advantage of this approach (a sketch follows).
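A minimal sketch of instrumenting a long-running job; the module/action names and the batch procedure are purely illustrative.
BEGIN
   DBMS_APPLICATION_INFO.SET_MODULE (module_name => 'NIGHTLY_LOAD', action_name => 'Starting');

   FOR indx IN 1 .. 10000
   LOOP
      process_next_batch;   -- placeholder for the real work
      DBMS_APPLICATION_INFO.SET_ACTION ('Batch ' || indx || ' of 10000');
   END LOOP;

   -- Progress is visible in V$SESSION.MODULE / V$SESSION.ACTION while the job is still running
   DBMS_APPLICATION_INFO.SET_MODULE (module_name => NULL, action_name => NULL);
END;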

BUILD OR FIND TOOLS TO CALCULATE ELAPSED TIME

You have to write (or modify) code to find out which implementation performs best.
Execute the program and check the elapsed time:
SET TIMING ON;
Another solution:
  • Use DBMS_UTILITY.GET_TIME and GET_CPU_TIME for very granular analysis.
  • DBMS_UTILITY.GET_TIME returns a point in time as an integer, with precision down to the hundredth of a second.
  • DBMS_UTILITY.GET_CPU_TIME (10g) returns the current CPU time as an integer, also with precision down to the hundredth of a second.
  • Use the GET_TIME function to calculate elapsed clock time.
  • Use the GET_CPU_TIME function to compute elapsed CPU time (the actual cycles used to perform work).
  • The way to use these utilities is to call the function once to get the start time, run the program you want to time, call the function a second time, and finally subtract the start time from the end time. The outcome is the number of hundredths of a second that elapsed while running the program (a minimal sketch follows the note below).
Note:
You still generally cannot run your program just once; it will usually need to be put inside a loop and run hundreds or thousands of times.
In some operating systems the GET_TIME and GET_CPU_TIME values "roll over" and start counting up from 0 again. If that happens, subtracting the start from the end can give a negative number, which is not a very useful elapsed-time computation.
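A minimal timing harness along those lines; my_program and the iteration count are placeholders for whatever you are measuring.
DECLARE
   c_iterations CONSTANT PLS_INTEGER := 1000;
   l_start      PLS_INTEGER;
BEGIN
   l_start := DBMS_UTILITY.GET_TIME;

   FOR indx IN 1 .. c_iterations
   LOOP
      my_program;   -- placeholder for the code being timed
   END LOOP;

   -- GET_TIME returns hundredths of a second
   DBMS_OUTPUT.PUT_LINE ('Elapsed: ' || (DBMS_UTILITY.GET_TIME - l_start)
      || ' hsecs for ' || c_iterations || ' runs');
END;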

BEST PRACTICES FOR HIGH-IMPACT TUNING:

Make sure your code is being optimized when compiled.
Since Oracle 10g the PLSQL compiler will also automatically rearrange your code to improve performance.
LEVEL 0:
Optimization is disabled. The code is compiled just as it would have been in Oracle9i Database.

LEVEL 1:
The compiler performs basic optimization on your code, but is not as aggressive as at Level 2. You will see limited performance improvements, but compile time will also not increase as dramatically.
LEVEL 2 (10G):
This is the default and the most aggressive form of optimization in 10g.
It applies the most rearrangements to your code and also has the most impact on compile time (it increases it).
LEVEL 3 (11 G):
The plsql compiler will automatically “in-line” all calls to local subprograms in the current program.
Note:
Make sure that when your code base is compiled you are using the highest possible optimization level.
We should generally just stick with the default. It would be very unusual to run into a situation that calls for a reduced optimization level.
Example:
SELECT owner, name, type, plsql_optimize_level FROM all_plsql_object_settings;
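To raise the level for a session or for a single unit, something like the following can be used; the procedure name is only an illustration.
ALTER SESSION SET plsql_optimize_level = 3;

-- Recompile one unit at a chosen level while keeping its other compiler settings
ALTER PROCEDURE my_proc COMPILE plsql_optimize_level = 3 REUSE SETTINGS;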

USE BULK COLLECT AND FORALL TO IMPROVE PERFORMANCE OF MULTIROW SQL OPERATIONS IN PLSQL.

Sometimes you have to help PLSQL integrate with SQL.
  • DBMS_SQL was very inefficient because anything you did in DBMS_SQL involved multiple context switches.
  • Oracle 8i introduced BULK COLLECT and FORALL.
  • BULK COLLECT requests multiple rows of data in a single request to the Oracle database. The SQL engine processes this request and returns all the data together to the PLSQL engine, which deposits it into one or more collections.
  • For DML operations, you load into collections all the data you want to push back to your tables, and then FORALL batches up all the DML statements that need to be run to cover all the data in the collections. These statements are transferred to the SQL engine in a single context switch.
Note that the SQL engine executes the same statements it would have run with row-by-row processing. Use of BULK COLLECT or FORALL does not change SQL behavior; you are only changing the way that PLSQL communicates with the SQL engine. A combined sketch follows.
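A minimal sketch, assuming an HR-style EMPLOYEES table; the department filter and the 10% raise are only illustrations.
DECLARE
   TYPE emp_ids_nt IS TABLE OF employees.employee_id%TYPE;
   l_ids emp_ids_nt;
BEGIN
   -- One context switch to fetch all the rows of interest
   SELECT employee_id
     BULK COLLECT INTO l_ids
     FROM employees
    WHERE department_id = 50;

   -- One context switch to send all the updates back to the SQL engine
   FORALL indx IN 1 .. l_ids.COUNT
      UPDATE employees
         SET salary = salary * 1.1
       WHERE employee_id = l_ids (indx);
END;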

RECOMMENDATIONS FOR BULK COLLECT
MORE MEMORY

The memory consumed by PLSQL collections comes out of the Program Global Area (PGA), not the SGA.
Example:
If your BULK COLLECT program requires 5MB of data and you have 1,000 simultaneous connections, then 5GB of memory is required (1000 * 5MB = 5GB).
CURSOR FOR LOOPS:
  • If the PLSQL optimization level is set to 2 or higher, a cursor FOR loop that only queries data is automatically optimized to fetch with BULK COLLECT-like speed.
  • But if you are performing DML inside the loop, you should explicitly rewrite the cursor FOR loop to use BULK COLLECT and FORALL, because the DML still executes on a row-by-row basis.

LIMIT clause:

  • Always use the LIMIT clause with BULK COLLECT queries in production code, because the data volume is likely to keep growing (see the sketch after the next recommendation).

NO CURSOR ATTRIBUTES:

  • When using a LIMIT clause with an explicit cursor and loop, never rely on %NOTFOUND or the NO_DATA_FOUND exception to terminate the loop.

  • If no rows are returned, the collection is simply empty (COUNT = 0); no NO_DATA_FOUND exception is raised. Exit when the collection's COUNT is 0 instead, as in the sketch below.
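A minimal sketch of the LIMIT pattern, again assuming an EMPLOYEES-style table; process_employee is a placeholder for whatever you do with each row.
DECLARE
   CURSOR emps_cur IS SELECT * FROM employees;

   TYPE emps_aat IS TABLE OF employees%ROWTYPE INDEX BY PLS_INTEGER;
   l_emps emps_aat;
BEGIN
   OPEN emps_cur;
   LOOP
      FETCH emps_cur BULK COLLECT INTO l_emps LIMIT 100;
      EXIT WHEN l_emps.COUNT = 0;   -- not emps_cur%NOTFOUND

      FOR indx IN 1 .. l_emps.COUNT
      LOOP
         process_employee (l_emps (indx));   -- placeholder
      END LOOP;
   END LOOP;
   CLOSE emps_cur;
END;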
NO STRING-INDEXED COLLECTIONS:

  • With BULK COLLECT, collections are always filled sequentially starting with index value 1. You cannot use string-indexed collections with BULK COLLECT.

RECOMMENDATIONS FOR FORALL:
EXCEPTIONS:

  • When the SQL engine raises an exception while executing one of the many DML statements batched together by FORALL, it passes that exception back to the PLSQL engine and the FORALL statement terminates.
  • You can tell the SQL engine to continue past exceptions, save them up, and return all of them when it is done by adding the SAVE EXCEPTIONS clause to your FORALL header (see the sketch below).
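A minimal sketch of the pattern; the EMPLOYEES table and the seeded IDs are illustrative, and ORA-24381 is the error FORALL raises once SAVE EXCEPTIONS has collected failures.
DECLARE
   TYPE emp_ids_nt IS TABLE OF employees.employee_id%TYPE;
   l_ids       emp_ids_nt := emp_ids_nt (100, 101, 102);
   bulk_errors EXCEPTION;
   PRAGMA EXCEPTION_INIT (bulk_errors, -24381);
BEGIN
   FORALL indx IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
      UPDATE employees
         SET salary = salary * 2
       WHERE employee_id = l_ids (indx);
EXCEPTION
   WHEN bulk_errors THEN
      FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
      LOOP
         DBMS_OUTPUT.PUT_LINE (
               'Row '      || SQL%BULK_EXCEPTIONS (i).ERROR_INDEX
            || ' failed: ' || SQLERRM (-SQL%BULK_EXCEPTIONS (i).ERROR_CODE));
      END LOOP;
END;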

COLLECTIONS:
Prior to Oracle Database 10g Release 2, collections bound into your DML statements had to be filled sequentially between the low and high index values specified in the FORALL range clause; otherwise Oracle raised an exception. In 10g Release 2 and above, you can use the INDICES OF or VALUES OF clauses to work with collections that have gaps (index values between low and high that are not defined), as sketched below.
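A minimal sketch of INDICES OF with a sparse collection (element 2 is deleted); the EMPLOYEES table is again just an assumption.
DECLARE
   TYPE emp_ids_aat IS TABLE OF employees.employee_id%TYPE INDEX BY PLS_INTEGER;
   l_ids emp_ids_aat;
BEGIN
   l_ids (1) := 100;
   l_ids (2) := 101;
   l_ids (3) := 102;
   l_ids.DELETE (2);   -- leave a gap

   -- A plain "FORALL indx IN 1 .. 3" would fail on the missing element
   FORALL indx IN INDICES OF l_ids
      UPDATE employees
         SET salary = salary * 1.05
       WHERE employee_id = l_ids (indx);
END;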
  • DML STATEMENTS

  • You can execute only one DML statement within each FORALL statement.

  • If you have multiple DML operations, you will need to write a separate FORALL for each one and you will need to write additional logic to synchronize activities between them.

ORACLE DATABASE 11G PLSQL FUNCTION RESULT CACHE:

In 10g, package-based caching has two major limitations:
  1. The memory used by the cache is duplicated in each session.
  2. The cache makes sense only for completely static datasets.
11g implements a new kind of caching that combines the speed of a package-based PGA cache with the memory optimization and automatic cache management of the SGA.
Example:
FUNCTION ref_code_description (category_in IN mfe_ref_codes.category%TYPE,
                               code_in     IN mfe_ref_codes.code%TYPE)
   RETURN mfe_ref_codes.description%TYPE
   RESULT_CACHE RELIES_ON (mfe_ref_codes)
IS
   l_description mfe_ref_codes.description%TYPE;
BEGIN
   SELECT description
     INTO l_description
     FROM mfe_ref_codes
    WHERE category = category_in AND code = code_in;

   RETURN l_description;
EXCEPTION
   WHEN NO_DATA_FOUND THEN RETURN NULL;
END;
  • RESULT_CACHE RELIES_ON (mfe_ref_codes) tells the Oracle database that it should store, in a special in-memory result cache, each record retrieved for a specific category/code combination.
  • When a session executes the function and passes in a combination that was previously stored, the PLSQL runtime engine will not execute the function body (which includes that query). Instead, it will simply retrieve the record from the cache and return that data immediately. The result is a much faster retrieval time.
  • This caching does differ slightly from the package implementation in that the manual cache preloaded all of the rows when the session started, while the function result cache loads the required data as needed.
  • In addition, by specifying RELIES_ON (mfe_ref_codes) we inform the Oracle database that if any session commits changes to that table, any data in the result cache drawn from that table must be invalidated.
  • The next call to the ref_code_description function would then have to execute the query and retrieve the data fresh from the table.
  • This cache is not per-session but is available to all sessions connected to the instance.
  • See the 11g_emplu.pke, 11g_emplu_compare.sp, and 11g_emplu.txt files for versions that include the result cache technique.
  • A per-session PGA-based cache is still the fastest (PGA access is more efficient than SGA access), but the PLSQL function result cache is dramatically faster than repeated querying.

USE PIPELINED TABLE FUNCTIONS TO RETURN DATA FASTER AND TO PARALLELIZE FUNCTION EXECUTION.

When we analyze the SQL statement that is being generated on the fly, it is completely optimized; that is not the problem.
Instead, the delay is simply that it takes a while to return all that data over the network, and the front-end process is blocked, waiting, until all of the data is returned.
The fix for this is pipelining:
design your function to return data while it is still running.
FUNCTION dynamic_query (query_in IN VARCHAR2)
   RETURN numbers_nt PIPELINED
IS
   l_cursor SYS_REFCURSOR;
   l_return numbers_nt;
BEGIN
   OPEN l_cursor FOR query_in;
   FETCH l_cursor BULK COLLECT INTO l_return;
   CLOSE l_cursor;

   FOR indx IN 1 .. l_return.COUNT
   LOOP
      manipulate_data (l_return (indx));
      PIPE ROW (l_return (indx));
   END LOOP;

   RETURN;
END dynamic_query;
Explanation of the above program:
PIPELINED (in the function header):
The addition of this keyword tells the Oracle database that you are defining a pipelined function, which allows you to use the constructs explained below.
PIPE ROW:
This statement says, in effect, "Send this data back from the function now, even though the function is still running."
RETURN (at the end):
Returns nothing but control; the PIPE ROW statements have already returned all the data. A RETURN without a value is otherwise allowed only in a procedure, unless the function is pipelined.
INSERT INTO tickertable
   SELECT * FROM TABLE (stockpivot (CURSOR (SELECT * FROM stocktable)))
    WHERE ROWNUM < 10;
  • With pipelining, once the ROWNUM < 10 condition is satisfied, the query terminates.
  • Without pipelining, the full pivot is performed first, and only then are the first 9 rows returned.
BEST PRACTICES FOR OTHER TUNING

Use NOCOPY with care to minimize overhead when collections and records are OUT or IN OUT parameters.
When a collection parameter is declared as below, the default behaviour is to pass a copy:
TYPE descriptions_aat IS TABLE OF ... INDEX BY PLS_INTEGER;
PROCEDURE change_descript (descriptions_io IN OUT descriptions_aat);
Parameter passing in PLSQL by default follows these rules:
  • IN arguments are passed by reference.
  • OUT and IN OUT arguments are passed by value (copied).
So the solution is to ask for pass by reference instead of pass by value when defining OUT or IN OUT parameters of collection types.
Example is:
PROCEDURE change_descript (descriptions_io IN OUT NOCOPY descriptions_aat);
There are two consequences of NOCOPY:
  1. Oracle will not copy in or copy out, so the overhead of running your program is reduced and performance improves.
  2. If your program makes changes to the IN OUT argument and then raises an exception, those changes will not be reversed. This means that the contents of your variable could be corrupted, or at least not be consistent.
The second consequence is rather severe. If you are using NOCOPY, you should not allow unhandled exceptions to propagate out of the program. You should trap errors and explicitly write the code necessary to ensure that the IN OUT argument's contents are OK.

PLSQL loops should not resemble hamsters running in circles. Move static expressions outside of loops and SQL statements.

Inefficient program:
LOOP
   DBMS_OUTPUT.PUT_LINE (
      TO_CHAR (SYSDATE, 'MM-DD-YYYY') || ' ' || UPPER (prefix_in) || ' ' || UPPER (state_in)
      || ' ' || customer_rec.first_name || ' ' || customer_rec.last_name);
END LOOP;
Solution is we have to avoid executing anything inside a loop that doesn’t change in that loop;
Efficient program:
c_output_prefix CONSTANT VARCHAR2(32767) :=
   TO_CHAR (SYSDATE, 'MM-DD-YYYY') || ' ' || UPPER (prefix_in) || ' ' || c_state;
...
LOOP
   DBMS_OUTPUT.PUT_LINE (c_output_prefix || ' ' || customer_rec.first_name || ' ' || customer_rec.last_name);
END LOOP;
So whenever you set out to tune your PLSQL programs, you should first take a look at your loops. Any inefficiency inside a loop's body will be magnified by the repeated execution of that code.

CHOOSE DATATYPES CAREFULLY TO MINIMIZE IMPLICIT CONVERSIONS OF DATA

Avoid implicit conversion as below:
DECLARE
   l_string VARCHAR2(10) := 1000;
   l_number NUMBER := '5005';
BEGIN

l_number := l_string + l_number;
DBMS_OUTPUT.PUT_LINE(l_number);
END;
Use the built-in functions Oracle provides to perform those conversions explicitly, as below:
DECLARE
   l_string VARCHAR2(10) := TO_CHAR (1000);
   l_number NUMBER := TO_NUMBER ('5005');
BEGIN
   l_number := TO_NUMBER (l_string) + l_number;
   DBMS_OUTPUT.PUT_LINE (l_number);
END;
BEST PRACTICES FOR PARAMETERS:

Make sure all new IN arguments have defaults or add an overloading.

Example:
FUNCTION usage_estimate (customer_id_in   IN customer.id%TYPE,
                         frequency_in     IN BOOLEAN,
                         daytime_usage_in IN BOOLEAN DEFAULT FALSE)
   RETURN PLS_INTEGER;

Use named notation to self-document subprogram calls and pass values more flexibly.
Example:
PROCEDURE business_as_usual (advt_budget_in      IN     NUMBER,
                             contributions_inout IN OUT NUMBER,
                             merge_and_purge_in  IN     DATE DEFAULT SYSDATE,
                             obsence             OUT    NUMBER,
                             act_corners_in      IN     VARCHAR2 DEFAULT 'WHENEVER POSSIBLE');
To call:
business_as_usual (advt_budget_in      => 5,
                   contributions_inout => l_dollars,
                   merge_and_purge_in  => SYSDATE + 20,
                   obsence             => l_cess_salary);
The pattern is:
formal_parameter => actual_parameter
  • Functions should return data only through the RETURN clause.

  • Return multiple values through a single composite structure or with a procedure.

Example :

FUNCTION excuse_info (id_in IN excuses.id%TYPE) RETURN excuses%ROWTYPE;

BEST PRACTICES FOR PROCEDURES AND FUNCTIONS:
  • Write Tiny chunks of code.

  • Limit execution section length to no more than 50 lines.

  • Use step-wise refinement and local subprograms to make code transparent in purpose and design.

Example:

PROCEDURE distribute_calls (dept_id IN mfe_emp.dept_id%TYPE)
IS
BEGIN
   WHILE (calls_still_unhandled ())
   LOOP
      FOR emp_rec IN emps_in_dept_cur (dept_id)
      LOOP
         IF current_caseload (emp_rec.emp_id) < avg_caseload_for_dept (dept_id)
         THEN
            assign_next_open_call_to (emp_rec.emp_id, l_case_id);
            notify_customer (l_case_id);
         END IF;
      END LOOP;
   END LOOP;
END;

Minimize side effects and maximize reuse by creating programs with narrowly defined purposes.
Write programs with very specific purposes and avoid hidden dependencies.
There are several specific consequences of putting DML inside a subprogram:
  • The subprogram cannot be called inside a SQL query (unless it is an autonomous transaction).
  • That DML becomes a part of the application transaction.
  • The performance profile of the program changes dramatically; it slows down.
So do not add DML statements to a program without careful consideration. You certainly should not "hide" DML operations inside a program with another stated purpose.
  • Hide business rules and formulas inside functions.
Wrap or hide all business rules and formulas inside functions.
The next time you have to change one of those rules, write a single function that implements it. Put it inside a special "customer rules" package if there isn't one already. Test it thoroughly. Then go back to each place in which you coded the rule and replace it with nothing more than a call to the function (a sketch follows).
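A minimal sketch of such a package; the rule, the package name, and the values are purely illustrative.
CREATE OR REPLACE PACKAGE customer_rules
IS
   FUNCTION min_balance_required (customer_type_in IN VARCHAR2) RETURN NUMBER;
END customer_rules;
/

CREATE OR REPLACE PACKAGE BODY customer_rules
IS
   FUNCTION min_balance_required (customer_type_in IN VARCHAR2) RETURN NUMBER
   IS
   BEGIN
      -- The single place this business rule lives; callers never hardcode the values
      RETURN CASE customer_type_in WHEN 'GOLD' THEN 100 ELSE 500 END;
   END min_balance_required;
END customer_rules;
/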
LIMIT functions to a single RETURN statement in the execution sections:

DON’T ALLOW MULTIPLE EXIT POINTS FROM THE FUNCTION.

Wrong program:
FUNCTION status_desc (cd_in IN VARCHAR2) RETURN VARCHAR2
IS
BEGIN
   IF cd_in = 'C' THEN
      RETURN 'closed';
   ELSIF cd_in = 'O' THEN
      RETURN 'open';
   ELSIF cd_in = 'I' THEN
      RETURN 'inactive';
   END IF;
END status_desc;
Solution:
FUNCTION status_desc (cd_in IN VARCHAR2) RETURN VARCHAR2
IS
   l_return VARCHAR2(32767);
BEGIN
   l_return := CASE UPPER (cd_in)
                  WHEN 'C' THEN 'closed'
                  WHEN 'O' THEN 'open'
                  WHEN 'I' THEN 'inactive'
               END;
   RETURN l_return;
END status_desc;
  • NEVER return null from Boolean functions.

  • Ensure that a Boolean function returns only TRUE or FALSE.

BEST PRACTICES FOR PACKAGES

Avoid schema-level programs: Instead, group related code into packages.
Put the dot in from the start: refer to every subprogram as package.subprogram.
General Recommendations:
  • Avoid writing schema-level procedures and functions.
  • Always start with a package, even if there is just one program in the package at the moment. It will grow later.
  • Use packages to group together related functionality:
  • A package gives a name to a set of program elements: procedures, functions, user-defined types, variable and constant declarations, cursors, and so on.
  • By creating a package for each distinct area of functionality, you create intuitive containers for that functionality. Programs will be easier to find and therefore less likely to be re-invented in different places in your application.
  • Keep your packages small and narrowly focused.
  • Anticipate programmer needs and simplify call interfaces with overloading.
  • Create multiple programs with the same name (overloading) that anticipate user needs.
  • Consider DBMS_OUTPUT.PUT_LINE.
  • The fundamental problem with this procedure is that it is not overloaded at all. There is just one declaration of PUT_LINE, accepting a single string as its sole argument.

  • Thus, PUT_LINE can only display the values of data structures that can be implicitly or explicitly converted to VARCHAR2 (a sketch of a friendlier overloaded wrapper follows).
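A minimal sketch of an overloaded wrapper that anticipates common needs; the package name "pl" and the formatting choices are purely illustrative.
CREATE OR REPLACE PACKAGE pl
IS
   PROCEDURE line (text_in IN VARCHAR2);
   PROCEDURE line (date_in IN DATE, mask_in IN VARCHAR2 DEFAULT 'YYYY-MM-DD');
   PROCEDURE line (boolean_in IN BOOLEAN);
END pl;
/

CREATE OR REPLACE PACKAGE BODY pl
IS
   PROCEDURE line (text_in IN VARCHAR2) IS
   BEGIN
      DBMS_OUTPUT.PUT_LINE (text_in);
   END line;

   PROCEDURE line (date_in IN DATE, mask_in IN VARCHAR2 DEFAULT 'YYYY-MM-DD') IS
   BEGIN
      DBMS_OUTPUT.PUT_LINE (TO_CHAR (date_in, mask_in));
   END line;

   PROCEDURE line (boolean_in IN BOOLEAN) IS
   BEGIN
      DBMS_OUTPUT.PUT_LINE (CASE WHEN boolean_in THEN 'TRUE'
                                 WHEN NOT boolean_in THEN 'FALSE'
                                 ELSE 'NULL' END);
   END line;
END pl;
/
Callers can now write pl.line (SYSDATE) or pl.line (l_flag) without worrying about conversions.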

BEST PRACTICES FOR TRIGGERS

  • Uncertainty in trigger execution is a most unsettling emotion.
  • Consolidate "overlapping" DML triggers to control execution order, or use the FOLLOWS syntax of Oracle Database 11g.
  • Consolidate "same-event" triggers or use the FOLLOWS clause (11g).
If you want to keep two separate triggers, use the FOLLOWS clause (11g) as below.

CREATE OR REPLACE TRIGGER mfe_excuse_2
   BEFORE UPDATE ON mfe_excuses
   FOR EACH ROW
   FOLLOWS mfe_excuse
BEGIN
   ... same as above ...
END;

Use Oracle Database 11g compound triggers to consolidate all related trigger logic on a table.

If the logic is spread across several separate triggers (for example, a BEFORE statement trigger that fills a package-level collection which the row-level triggers then use), and the BEFORE trigger raises an exception, the collection will not be cleaned up and can consume a lot of memory.

Consolidate all the logic into a compound trigger, and lose the package if it is present only for the trigger implementation.

Example:

CREATE TRIGGER full_mfe_excuse_transaction
   FOR UPDATE ON mfe_customers
   COMPOUND TRIGGER

   -- Variables declared here can be accessed anywhere inside the trigger body.

   BEFORE STATEMENT IS
   BEGIN
      ...
   END BEFORE STATEMENT;

   BEFORE EACH ROW IS
   BEGIN
      ...
   END BEFORE EACH ROW;

   AFTER EACH ROW IS
   BEGIN
      ...
   END AFTER EACH ROW;

   AFTER STATEMENT IS
   BEGIN
      ...
   END AFTER STATEMENT;
END;

  • Validate complex business rules with DML triggers.

  • Apply the business rule at the lowest level possible, to ensure that it cannot be avoided.
  • Populate columns of derived values with triggers.

Example (denormalized derived column):
CREATE TRIGGER denorm_mfe_excuse
   BEFORE UPDATE OR INSERT ON mfe_excuses
   FOR EACH ROW
BEGIN
   :new.upper_title := UPPER (:new.title);
END;

BEST PRACTICES FOR DEVELOPING AND USING STANDARDS.

Declaration type                 Naming convention
IN parameter                     <root>_in
OUT parameter                    <root>_out
IN OUT parameter                 <root>_io
Constant                         c_<root>
Global package variable          g_<root>
Local variable                   l_<root>
Cursor                           <root>_cur
Associative array type           <root>_aat
Nested table type                <root>_nt
VARRAY type                      <root>_vat
Record type                      <root>_rt
Object type                      <root>_ot
Procedures and functions         no fixed suffix, but the name should clearly describe what the program does

Note: Oracle restricts the names of all SQL and PLSQL identifiers to 30 characters.

BEST PRACTICES FOR DECLARING VARIABLES AND DATA STRUCTURES
  • Always anchor variable to database using %TYPE and %ROWTYPE.
  • Assume that everything will change and that any program you write could be around for decades.
  • So don't hardcode the length or datatype; use %TYPE or %ROWTYPE instead.
  • Use SUBTYPE to declare program specific and derived datatypes.
  • Create a new datatype with SUBTYPE and anchor to that

The SUBTYPE statement allows you to create "aliases" for existing types of information, in effect creating your own specially named datatypes.
Example:
PACKAGE excuser_rp
IS
   SUBTYPE full_name_t IS VARCHAR2(1000);

   FUNCTION full_name (last_name_in  IN excuser.last_name%TYPE,
                       first_name_in IN excuser.first_name%TYPE)
      RETURN full_name_t;
END excuser_rp;
Perform complex variable initialization in the execution section.
Don’t trust the declaration section to assign default values.
Wrong:
PACKAGE BODY re_config IS
   c_worst_excuse CONSTANT VARCHAR2(20) := 'The dog ate my homework. Really';
   ...
Right:
PACKAGE BODY re_config IS
   g_worst_excuse flimsy_excuse.title%TYPE;

   PROCEDURE initialize IS
   BEGIN
      g_worst_excuse := 'The dog ate my homework. Really';
   END initialize;
END re_config;
So follow these steps:
  • No more hardcoding of the VARCHAR2 length; anchor to a database column instead.
  • Move the assignment of the default value into a separate procedure, and call that procedure in the package initialization section.
BEST PRACTICES FOR USING VARIABLES AND DATA STRUCTURES

Clean up data structures when your program terminates (successfully or with an error).
Good example programs:
PROCEDURE busy_busy IS
   fileid UTL_FILE.FILE_TYPE;
   dyncur PLS_INTEGER;

   PROCEDURE cleanup IS
   BEGIN
      IF book_pkg.all_books_by%ISOPEN THEN
         CLOSE book_pkg.all_books_by;
      END IF;

      IF DBMS_SQL.IS_OPEN (dyncur) THEN
         DBMS_SQL.CLOSE_CURSOR (dyncur);
      END IF;

      IF UTL_FILE.IS_OPEN (fileid) THEN
         UTL_FILE.FCLOSE (fileid);
      END IF;
   END cleanup;
BEGIN
   dyncur := DBMS_SQL.OPEN_CURSOR;
   OPEN book_pkg.all_books_by ('FEUERSTEIN');
   fileid := UTL_FILE.FOPEN (filepath, filename, 'R');
   ...
   cleanup;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      err.log;
      cleanup;
      RAISE;
END busy_busy;

The above program puts the cleanup logic into its own subprogram. It also takes the time to enhance it so that it closes only those things that are actually open.
So it will consume less memory.
A common cleanup procedure offers several important advantages:
  • Your programs are less likely to have memory leaks (open cursors) and to cause problems in other programs by leaving data structures in an uncertain state.
  • Future developers can easily add new cleanup operations in one place and be certain that they will be run at all exit points.
  • When and if another WHEN clause is added, the developer is very likely to follow the "model" in WHEN OTHERS and perform cleanup there as well.
  • Beware of and avoid implicit datatype conversion.
Perform explicit conversion rather than relying on implicit conversion.
Wrong:
DECLARE
   bday DATE := '05-11-85';
Right:
DECLARE
   bday DATE := TO_DATE ('05-11-85', 'DD-MM-RR');

BEST PRACTICES FOR DECLARING AND USING PACKAGE VARIABLES
  • Use package globals sparingly, and only in package bodies.
  • Don't expose program data in package specifications, letting everyone see and change it.
  • So try to avoid declaring variables in package specifications; mostly declare them in package bodies.
  • Globals are dangerous and should be avoided because they create hidden dependencies and side effects.
Follow these guidelines:
  • Pass the global as a parameter to your procedures and functions.
  • Declare variables, cursors, functions and other objects as "deeply" (locally) as possible.
  • Hide your package data behind "gets and sets".
  • Scope declarations as locally as possible; avoid declaring in package specifications.
If you need to use the variable in many programs, declare it in the package body (a sketch of the get/set approach follows).
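A minimal sketch of hiding package state behind getters and setters; the package and variable names are purely illustrative.
CREATE OR REPLACE PACKAGE app_settings
IS
   PROCEDURE set_trace_level (level_in IN PLS_INTEGER);
   FUNCTION trace_level RETURN PLS_INTEGER;
END app_settings;
/

CREATE OR REPLACE PACKAGE BODY app_settings
IS
   -- The actual data is hidden in the body; callers can only use the get/set programs.
   g_trace_level PLS_INTEGER := 0;

   PROCEDURE set_trace_level (level_in IN PLS_INTEGER) IS
   BEGIN
      g_trace_level := GREATEST (0, level_in);   -- validation lives in one place
   END set_trace_level;

   FUNCTION trace_level RETURN PLS_INTEGER IS
   BEGIN
      RETURN g_trace_level;
   END trace_level;
END app_settings;
/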

BEST PRACTICES FOR CONDITIONS AND BOOLEAN LOGIC

The CASE statement was introduced to PLSQL in Oracle 9i.
1. Use IF...ELSE only to test a single, simple condition.
Wrong format:
IF cond_a AND NOT (cond_b OR cond_c) THEN
   proc1;
ELSIF cond_a AND (cond_b OR cond_c) THEN
   proc2;
END IF;
Right format:
IF cond_a THEN
   IF cond_b OR cond_c THEN
      proc2;
   ELSE
      proc1;
   END IF;
END IF;
2. Use CASE to avoid lengthy sequences of IF statements.
Don't forget the ELSE; without it, the CASE statement throws "ORA-06592: CASE not found while executing CASE statement" when no WHEN branch matches.
Example:
FUNCTION t1 (letter_in IN VARCHAR2) RETURN VARCHAR2
IS
   retval VARCHAR2(1000);
BEGIN
   CASE
      WHEN letter_in = 'A' THEN
         retval := 'Apple';
      WHEN letter_in = 'B' THEN
         retval := 'Banana';
      ELSE
         retval := NULL;
   END CASE;
   RETURN retval;
END t1;
BEST PRACTICES FOR LOOP PROCESSING

1. Never EXIT or RETURN from WHILE and FOR LOOPS:
Try to follow one way in and one way out.
Example:
DECLARE
   l_keep_going BOOLEAN DEFAULT TRUE;
BEGIN
   WHILE (l_index IS NOT NULL AND l_keep_going)
   LOOP
      ...
      l_keep_going := FALSE;   -- instead of EXIT, flip the loop condition
   END LOOP;
END;
2. Write "full collection scans" so that there is no assumption about how the collection is filled.
3. Use GOTO and CONTINUE only when structured code is not an option.

BEST PRACTICES FOR UNDERSTANDING ERROR HANDLING

1. Basic functionality and capabilities of error management in PLSQL.
i. Exception section.
ii. Propagation.
iii. SQLERRM:
  • Don't use SQLERRM to retrieve the message associated with the last error. Instead, call the DBMS_UTILITY.FORMAT_ERROR_STACK function.
  • SQLERRM may truncate the error message, but DBMS_UTILITY.FORMAT_ERROR_STACK will not.
iv. SAVE EXCEPTIONS and SQL%BULK_EXCEPTIONS:
  • Use SAVE EXCEPTIONS with FORALL to continue past exceptions in a bulk bind operation. Then iterate through SQL%BULK_EXCEPTIONS afterward to recover from or log the errors.
  • You should use this clause in just about every production usage of FORALL; otherwise you will not be able to manage errors effectively.
v. DBMS_UTILITY.FORMAT_ERROR_BACKTRACE:
  • Since Oracle 10g Release 2, use DBMS_UTILITY.FORMAT_ERROR_BACKTRACE to obtain the line number on which the most recent exception was raised.
vi. DBMS_ERRLOG:
  • Since Oracle 10g Release 2, use DBMS_ERRLOG to continue past DML errors without raising an exception.
  • Similar to SAVE EXCEPTIONS, the LOG ERRORS clause for individual DML statements causes error information to be written to the specified error table, and no exception is raised.
  • After executing your SQL statement, you check the contents of the error table to see whether there were any problems (a sketch follows).
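A minimal sketch against a hypothetical EMPLOYEES-style table; ERR$_EMPLOYEES is the default error-log table name that DBMS_ERRLOG generates.
BEGIN
   -- One-time step: create the error log table (defaults to ERR$_EMPLOYEES)
   DBMS_ERRLOG.CREATE_ERROR_LOG (dml_table_name => 'EMPLOYEES');
END;
/

UPDATE employees
   SET salary = salary * 10
 LOG ERRORS INTO err$_employees ('salary batch') REJECT LIMIT UNLIMITED;

-- Afterwards, inspect the problems instead of handling an exception
SELECT ora_err_number$, ora_err_mesg$ FROM err$_employees;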
vii. AFTER SERVERERROR:
  • This trigger allows you to define exception handling logic for an entire oracle DB instance.
  • The trigger fires only when an exception escapes unhandled from the outermost PLSQL block.
2. Avoid application logic in the exception section.
3. Transform the exception to a status indicator that can be interpreted by users of that code.
4. Handle those unexpected "hard" errors and then re-raise the exception (q$errormanager).

Example:
EXCEPTION
   WHEN TOO_MANY_ROWS THEN
      q$errormanager.raise_error (error_code_in => SQLCODE,
                                  name_in       => 'EMP_ID',
                                  value_in      => emp_id_in);

(q$errormanager offers both raise_error and register_error programs.)

5. Use the EXCEPTION_INIT pragma to name exceptions and make your code more accessible.

6. Never use WHEN OTHERS THEN NULL.

Add value in WHEN OTHERS: log the error information and then re-raise the exception (a sketch follows).
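A minimal sketch; log_error is a placeholder for whatever logging routine your application uses.
EXCEPTION
   WHEN OTHERS THEN
      -- Record everything we know about the error, then put it back on the stack
      log_error (DBMS_UTILITY.FORMAT_ERROR_STACK,
                 DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
      RAISE;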

BREAK YOUR ADDICTION TO SQL.

1. SQL IS BAD:
   1. HARDCODING IS BAD
Hardcoding variable values and hardcoding cursors is very bad, since the values or the requirements will change after some time.
Example:
DECLARE
   abc   a.id%TYPE;
   a_sal a.sal%TYPE;
   CURSOR c1_cur IS SELECT id, sal FROM a;
BEGIN
   OPEN c1_cur;
   FETCH c1_cur INTO abc, a_sal;
   ...
END;

In the above example, the cursor declaration and the fetch into a hardcoded list of variables are both hardcoding. After some time the requirement will change to fetch more columns from the table; at that point the entire code has to change, or serious problems will occur.
  • Every SQL statement you write is hardcoding.
2. Hide your sql statements behind a programmatic interface.
Never repeat a SQL statement; instead, implement the SQL behind procedures and functions (a sketch follows).
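A minimal sketch of a single point of definition for one query; the table and package names are purely illustrative.
CREATE OR REPLACE PACKAGE employee_qry
IS
   FUNCTION name_for_id (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees.last_name%TYPE;
END employee_qry;
/

CREATE OR REPLACE PACKAGE BODY employee_qry
IS
   FUNCTION name_for_id (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees.last_name%TYPE
   IS
      l_name employees.last_name%TYPE;
   BEGIN
      -- The only place in the application this SELECT is written
      SELECT last_name
        INTO l_name
        FROM employees
       WHERE employee_id = employee_id_in;

      RETURN l_name;
   END name_for_id;
END employee_qry;
/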
3. Use Autonomous transactions to isolate the effect of COMMITs and ROLLBACKs.
Save your error log information separately from your business transaction logic (see the sketch below).
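A minimal sketch of an autonomous error-logging procedure; the error_log table and its columns are assumptions.
CREATE OR REPLACE PROCEDURE log_error (message_in IN VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO error_log (logged_at, message)
   VALUES (SYSTIMESTAMP, message_in);

   -- This COMMIT saves only the log row; the business transaction is untouched.
   COMMIT;
END log_error;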

BEST PRACTICES FOR QUERYING DATA FROM PLSQL
  1. Fetch into cursor records, never into a hardcoded list of variables.
  2. Don't use a cursor FOR loop to fetch just one row.
BEST PRACTICES FOR CHANGING DATA FROM PLSQL
  1. Don’t forget exception handler for your DML statements.
  2. List columns explicitly in your INSERT statements.
Example:
Begin
Insert into emp(empid,ename) values(1001,’abcd’);
End;

  3. Remember that the SQL% attributes always refer to the most recently executed implicit cursor in your session.

Attribute          Functionality
SQL%ROWCOUNT       Number of rows affected by the DML statement.
SQL%ISOPEN         Always FALSE, since the implicit cursor is opened and then closed automatically.
SQL%FOUND          TRUE if the statement affects at least one row.
SQL%NOTFOUND       FALSE if the statement affects at least one row.

BEST PRACTICES FOR DYNAMIC SQL

Dynamic means that the SQL statement or PLSQL block you execute is constructed, parsed, and executed at runtime, not at the time the code is compiled.
Two types:
1. Native dynamic SQL (NDS)
By using EXECUTE IMMEDIATE and OPEN FOR.
2. DBMS_SQL (Built-in package).
A large and complex API exposing the low-level steps required to parse and execute SQL statements dynamically.
  • Dynamic SQL Method 4:
At the time you write your program, you don't know the number of elements in your select list or how many variables will need to be bound.
NDS is not well suited to this challenge.
  • Describe columns:
You need to get information about the values being returned by your dynamic SELECT.
The DBMS_SQL.DESCRIBE_COLUMNS procedure will provide you with this information (a sketch follows).
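A minimal sketch; the query against an assumed EMPLOYEES table stands in for a statement whose select list is unknown until runtime.
DECLARE
   l_cursor    PLS_INTEGER := DBMS_SQL.OPEN_CURSOR;
   l_col_count PLS_INTEGER;
   l_columns   DBMS_SQL.DESC_TAB;
BEGIN
   DBMS_SQL.PARSE (l_cursor, 'SELECT * FROM employees', DBMS_SQL.NATIVE);
   DBMS_SQL.DESCRIBE_COLUMNS (l_cursor, l_col_count, l_columns);

   FOR indx IN 1 .. l_col_count
   LOOP
      DBMS_OUTPUT.PUT_LINE (l_columns (indx).col_name
         || ' (type code ' || l_columns (indx).col_type || ')');
   END LOOP;

   DBMS_SQL.CLOSE_CURSOR (l_cursor);
END;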
  • Very large SQL statements:

If your statement has more than 32k characters, you will need to use the collection overloading of DBMS_SQL.PARSE to parse the statement.

DYNAMIC SQL USAGE:

Method 1:
DDL statements, or DML statements that contain no placeholders for bind variables.
Method 2:
DML statements that contain at least one placeholder, where the number of placeholders is known at the time you compile your code.
Method 3:
Queries that contain a fixed number of elements in the SELECT list.
Since Oracle 11g:
EXECUTE IMMEDIATE and OPEN FOR (NDS)
now accept a CLOB argument, allowing dynamic SQL statements larger than 32K.
DBMS_SQL.TO_REFCURSOR:
Converts a DBMS_SQL cursor handle to a cursor variable, based on the predefined weak SYS_REFCURSOR type.
DBMS_SQL.TO_CURSOR_NUMBER:
Converts a cursor variable to a DBMS_SQL cursor handle. You can start with native dynamic SQL and then shift over to DBMS_SQL to take advantage of DESCRIBE_COLUMNS or some other DBMS_SQL-specific functionality.
  1. Always parse a string variable; do not EXECUTE IMMEDIATE a literal.
Example:
l_query_string VARCHAR2(32767);
...
l_query_string := 'SELECT ' || select_list || ...;
DBMS_SQL.PARSE (g_cursor, l_query_string, DBMS_SQL.NATIVE);

Never pass a literal string directly to EXECUTE IMMEDIATE; build it in a variable first, so the statement can be examined or logged when something goes wrong.

  2. Avoid concatenation of variable values into dynamic SQL strings; use bind variables instead.
Example:
-- ap_in and job_in are NUMBER parameters of the enclosing procedure
DECLARE
   l_dml VARCHAR2(32767);
BEGIN
   FOR loc_rec IN (SELECT city FROM offices)
   LOOP
      l_dml := 'UPDATE wm_emp_' || loc_rec.city
            || ' SET sal = sal * (1 + (:per_change / 100)) WHERE job = :job_cat';
      EXECUTE IMMEDIATE l_dml USING ap_in, job_in;
   END LOOP;
END;

Memory

When a cursor executes, the results are held mainly in the UGA and partly in the CGA.

QUERY the V$SQL view to find the SQL statements

The V$SQL view contains extremely useful information on recently executed SQL statements
By querying the V$SQL view, the Oracle detective can discover very pertinent clues, such as:
  • The actual SQL statement processed.
  • Number of times executed.
  • Number of disk reads.
  • Number of logical reads.
  • First load time.
Best usage of V$SQL:
Retrieve only those SQL statements that have a large number of disk reads.
Next, the analyst might look for SQL statements with a specific number of executions.
Finding SQL statements with excessive disk reads:
SELECT executions, disk_reads, sql_text
  FROM v$sql
 WHERE disk_reads / (0.01 + executions) > 10000;
V$SQL limitations:
When querying the V$SQL view, remember that the statistics are not kept in memory forever; depending on the size of the shared pool, statistics may soon "age out".

SQL TRACE

Activating SQL TRACE:
ALTER SESSION SET SQL_TRACE = TRUE;
EXECUTE SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (sid, serial#, TRUE);
TKPROF (Transient Kernel Profiler):
One of the oldest utilities designed by Oracle.
The purpose of TKPROF is to read SQL trace files and format the contents for easier reading and analysis.
Starting TKPROF:
tkprof <trace file> <output file> [explain=<username/password>] [sys=n] [insert=<filename>] [record=<filename>] [sort=<keyword>]
ALTER SESSION /* TKPROF example */ SET sql_trace = TRUE;
ALTER SESSION SET timed_statistics = TRUE;
MONITORING INDEX USAGE:
Use the ALTER INDEX.. MONITORING USAGE
For Single Index: Use the below Query.
SQL> ALTER INDEX f_regs_idx1 MONITORING USAGE;
The first time the index is accessed, Oracle records this.
Then find the entry in V$OBJECT_USAGE:
SQL> SELECT index_name, table_name, monitoring, used FROM v$object_usage;
If you want to monitor a large number of indexes, use a script like the one below.
SET pagesize 0 heading off linesize 132
SPOOL enable_mon.sql
SELECT 'ALTER INDEX ' || index_name || ' MONITORING USAGE;' FROM user_indexes;
SPOOL OFF
To disable
SQL> ALTER INDEX f_regs_idx1 NOMONITORING USAGE;
If monitoring is enabled, you can use the query below (run with DBA credentials) to identify the status of the indexes.
SELECT io.name, t.name,
       DECODE (BITAND (i.flags, 65536), 0, 'NO', 'YES'),
       DECODE (BITAND (ou.flags, 1), 0, 'NO', 'YES'),
       ou.start_monitoring, ou.end_monitoring
  FROM sys.obj$ io, sys.obj$ t, sys.ind$ i, sys.object_usage ou
 WHERE i.obj# = ou.obj#
   AND io.obj# = ou.obj#
   AND t.obj# = i.bo#;

FREE UP THE INDEX SPACE:
The following methods will be used
  • Rebuilding Index
  • Shrinking Index
First check USER_SEGMENTS to verify the amount of space used:
SQL> SELECT bytes FROM user_segments WHERE segment_name = 'F_REGS_IDX1';

output:
BYTES:
166723584
Try rebuilding to reorganize and compact the space used:
SQL> ALTER INDEX f_regs_idx1 REBUILD;
Now try shrinking to free up unused space:
SQL> ALTER INDEX f_regs_idx1 SHRINK SPACE;
Now, query user_segments table again.
BYTES:
524288.
You should also consider the following additional reasons when rebuilding an index:
  • The index has become corrupt.
  • You want to modify storage characteristics (such as changing the tablespace).
  • An index that was previously marked as unusable needs to be rebuilt to make it usable again.
Note:
While rebuilding the index, the table will be locked by Oracle. If another transaction has not committed, the error below will be thrown:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired.
The solution is to wait for the lock to be released, or to set a DDL wait (11g):

SQL> ALTER SESSION SET ddl_lock_timeout = 15;
For shrink, you may instead see:
ORA-10635: Invalid segment or tablespace type.
Correlated Subquery:
Correlated means the inner query can reference the outer query.
Example: we want to see all the jobs each current employee has ever held in the company.
SELECT emp_id, job_id
  FROM job_hist h
 WHERE job_id IN (SELECT job_id FROM emp e WHERE e.job_id = h.job_id)
 ORDER BY 1;
Processing of a correlated subquery:
  1. Retrieve a row from the outer query.
  2. Execute the inner query for that row.
  3. The outer query compares its value with the values returned from the inner query.
  4. If there is a match in step 3, the row is returned to the user.
EXISTS and NOT EXISTS are also a kind of correlated subquery.
AUTOTRACE OPTIONS:

Option                            Execution plan shown   Statistics shown   Query executed
SET AUTOTRACE OFF                 NO                     NO                 YES
SET AUTOTRACE ON                  YES                    YES                YES
SET AUTOTRACE ON EXPLAIN          YES                    NO                 YES
SET AUTOTRACE ON STATISTICS       NO                     YES                YES
SET AUTOTRACE TRACEONLY           YES                    YES                YES (but query output is suppressed)
SET AUTOTRACE TRACEONLY EXPLAIN   YES                    NO                 NO
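For example, to see just the execution plan without running the statement (the query itself is only an illustration):
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM employees WHERE department_id = 50;
SET AUTOTRACE OFF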

EXPLAIN PLAN:

DBMS_XPLAN has extensive functionality for displaying execution plans.
The DISPLAY function can be used to quickly get the execution plan for a query.
Example:
SQL> EXPLAIN PLAN FOR <select query>;
Explained.
SQL> SELECT * FROM TABLE (dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 18355461541
The DBMS_XPLAN.DISPLAY function is very flexible in configuring how you would like to see the output.
Example:
SELECT * FROM TABLE (dbms_xplan.display (NULL, NULL, 'BASIC'));
Monitoring long-running SQL statements:
SELECT username, target, sofar blocks_read, totalwork total_blocks,
       ROUND (time_remaining / 60) minutes
  FROM v$session_longops
 WHERE sofar <> totalwork
   AND username = 'HR';

For a statement to show up in this view, it must meet the following criteria:
  • The query must run for six seconds or longer.
  • It must access more than 10,000 blocks.
