Sunday, 27 September 2015

Why and How SQL and PLSQL

Difference between TRUNCATE, DELETE, and DROP:
TRUNCATE is much faster than DELETE.
Reason: When you issue a DELETE, the before-images of the deleted rows are first written to undo (the rollback/undo tablespace) and only then are the rows removed. That is why, if you issue ROLLBACK after deleting from a table, you get the data back (the system restores it from undo). All of this takes time. TRUNCATE, by contrast, removes the data without generating row-level undo, which is why it is faster. Once you TRUNCATE, you cannot get the data back.
TRUNCATE removes all rows from a table. The operation cannot be rolled back and no DML triggers are fired. As such, TRUNCATE is faster and does not use as much undo space as a DELETE.
The DROP command removes a table from the database. All of the table's rows, indexes, and privileges are also removed. No DML triggers are fired. The operation cannot be rolled back.
DROP:
The DROP command removes the table from the data dictionary. It is a DDL statement. Before Oracle 10g the table could not be recovered; from Oracle 10g onwards it can be recovered with the FLASHBACK TABLE ... TO BEFORE DROP command.
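A minimal sketch of the three statements side by side (the table name emp_copy is hypothetical):
DELETE FROM emp_copy WHERE deptno = 10;   -- generates undo; can be rolled back
ROLLBACK;                                 -- the deleted rows come back

TRUNCATE TABLE emp_copy;                  -- DDL; no row-level undo, cannot be rolled back

DROP TABLE emp_copy;                      -- removes the table definition and its data
FLASHBACK TABLE emp_copy TO BEFORE DROP;  -- recovers it from the recycle bin (10g and later)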

Difference between Constraints and Triggers:
Both triggers and constraints can constrain data input, but they differ significantly.
A trigger always applies to new data only. For example, a trigger can prevent a DML statement from inserting a NULL value into a database column, but the column might contain NULL values that were inserted into the column before the trigger was defined or while the trigger was disabled.
A constraint can apply either to new data only (like a trigger) or to both new and existing data. Constraint behavior depends on constraint state.
Constraints are easier to write and less error-prone than triggers that enforce the same rules. However, triggers can enforce some complex business rules that constraints cannot. Oracle strongly recommends that you use triggers to constrain data input only in these situations:
  • To enforce referential integrity when child and parent tables are on different nodes of a distributed database
  • To enforce complex business or referential integrity rules that you cannot define with constraints
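As a hedged illustration of that recommendation, here is the same simple rule ("salary must be positive") written first as a constraint and then as a trigger; the table and column names are hypothetical:
ALTER TABLE emp ADD CONSTRAINT emp_sal_chk CHECK (sal > 0);

CREATE OR REPLACE TRIGGER emp_sal_chk_trg
BEFORE INSERT OR UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  IF :NEW.sal <= 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Salary must be positive');
  END IF;
END;
/
The constraint version is shorter, can be validated against existing rows, and cannot be bypassed simply by disabling a trigger.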
Index :
Blocks
First you need to understand a block. A block - or page for Microsoft boffins - is the smallest unit of disk that Oracle will read or write. All data in Oracle - tables, indexes, clusters - is stored in blocks. The block size is configurable for any given database but is usually one of 4Kb, 8Kb, 16Kb, or 32Kb. Rows in a table are usually much smaller than this, so many rows will generally fit into a single block. So you never read "just one row"; you will always read the entire block and ignore the rows you don't need. Minimising this wastage is one of the fundamentals of Oracle Performance Tuning.
Oracle uses two different index architectures: b-Tree indexes and bitmap indexes. Cluster indexes, bitmap join indexes, function-based indexes, reverse key indexes and text indexes are all just variations on the two main types. b-Tree is the "normal" index, so we will come back to Bitmap indexes another time.
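For reference, a hedged sketch of creating each kind (table and column names are hypothetical):
CREATE INDEX emp_ename_idx ON emp (ename);           -- b-Tree index, the default
CREATE BITMAP INDEX emp_gender_bix ON emp (gender);  -- bitmap index, suited to low-cardinality columns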


CLUSTER:

A cluster provides an optional method of storing table data. A cluster is made up of a group of tables that share the same data blocks. The tables are grouped together because they share common columns and are often used together. For example, the emp and dept table share the deptno column. When you cluster the emp and dept tables, Oracle physically stores all rows for each department from both the emp and dept tables in the same data blocks.
Because clusters store related rows of different tables together in the same data blocks, properly used clusters offer two primary benefits:
  • Disk I/O is reduced and access time improves for joins of clustered tables.
  • The cluster key is the column, or group of columns, that the clustered tables have in common. You specify the columns of the cluster key when creating the cluster. You subsequently specify the same columns when creating every table added to the cluster. Each cluster key value is stored only once each in the cluster and the cluster index, no matter how many rows of different tables contain the value.
Therefore, less storage might be required to store related table and index data in a cluster than is necessary in non-clustered table format. For example, in Figure 18-1, notice how each cluster key (each deptno) is stored just once for many rows that contain the same value in both the emp and dept tables.
After creating a cluster, you can create tables in the cluster. However, before any rows can be inserted into the clustered tables, a cluster index must be created. Using clusters does not affect the creation of additional indexes on the clustered tables; they can be created and dropped as usual.
You should not use clusters for tables that are frequently accessed individually.

Figure 18-1: Clustered Table Data
The following sections describe guidelines to consider when managing clusters, and cover the following topics:
  • Choose Appropriate Tables for the Cluster
  • Choose Appropriate Columns for the Cluster Key
  • Specify Data Block Space Use
  • Specify the Space Required by an Average Cluster Key and Its Associated Rows
  • Specify the Location of Each Cluster and Cluster Index Rows
  • Estimate Cluster Size and Set Storage Parameters

Choose Appropriate Tables for the Cluster

Use clusters for tables for which the following conditions are true:
  • The tables are primarily queried--that is, tables that are not predominantly inserted into or updated.
  • Records from the tables are frequently queried together or joined.

Choose Appropriate Columns for the Cluster Key

Choose cluster key columns carefully. If multiple columns are used in queries that join the tables, make the cluster key a composite key. In general, the characteristics that indicate a good cluster index are the same as those for any index.
A good cluster key has enough unique values so that the group of rows corresponding to each key value fills approximately one data block. Having too few rows for each cluster key value can waste space and result in negligible performance gains. Cluster keys that are so specific that only a few rows share a common value can cause wasted space in blocks, unless a small SIZE was specified at cluster creation time.
Too many rows for each cluster key value can cause extra searching to find rows for that key. Cluster keys on values that are too general (for example, male and female) result in excessive searching and can result in worse performance than with no clustering.
A cluster index cannot be unique or include a column defined as long.

Specify Data Block Space Use

By specifying the PCTFREE and PCTUSED parameters during the creation of a cluster, you can affect the space utilization and amount of space reserved for updates to the current rows in the data blocks of a cluster's data segment. PCTFREE and PCTUSED parameters specified for tables created in a cluster are ignored; clustered tables automatically use the settings specified for the cluster.

Specify the Space Required by an Average Cluster Key and Its Associated Rows

The CREATE CLUSTER statement has an optional argument, SIZE, which is the estimated number of bytes required by an average cluster key and its associated rows. Oracle uses the SIZE parameter when performing the following tasks:
  • Estimating the number of cluster keys (and associated rows) that can fit in a clustered data block
  • Limiting the number of cluster keys placed in a clustered data block. This maximizes the storage efficiency of keys within a cluster.
SIZE does not limit the space that can be used by a given cluster key. For example, if SIZE is set such that two cluster keys can fit in one data block, any amount of the available data block space can still be used by either of the cluster keys.
By default, Oracle stores only one cluster key and its associated rows in each data block of the cluster's data segment. Although block size can vary from one operating system to the next, the rule of one key for each block is maintained as clustered tables are imported to other databases on other machines.
If all the rows for a given cluster key value cannot fit in one block, the blocks are chained together to speed access to all the values with the given key. The cluster index points to the beginning of the chain of blocks, each of which contains the cluster key value and associated rows. If the cluster SIZE is such that more than one key fits in a block, blocks can belong to more than one chain.

Specify the Location of Each Cluster and Cluster Index Rows

If you have the proper privileges and tablespace quota, you can create a new cluster and the associated cluster index in any tablespace that is currently online. Always specify the TABLESPACE option in a CREATE CLUSTER/INDEX statement to identify the tablespace to store the new cluster or index.
The cluster and its cluster index can be created in different tablespaces. In fact, creating a cluster and its index in different tablespaces that are stored on different storage devices allows table data and index data to be retrieved simultaneously with minimal disk contention.

Estimate Cluster Size and Set Storage Parameters

The following are benefits of estimating a cluster's size before creating it:
  • You can use the combined estimated size of clusters, along with estimates for indexes, rollback segments, and redo log files, to determine the amount of disk space that is required to hold an intended database. From these estimates, you can make correct hardware purchases and other decisions.
  • You can use the estimated size of an individual cluster to better manage the disk space that the cluster will use. When a cluster is created, you can set appropriate storage parameters and improve I/O performance of applications that use the cluster.
Whether or not you estimate table size before creation, you can explicitly set storage parameters when creating each non-clustered table. Any storage parameter that you do not explicitly set when creating or subsequently altering a table automatically uses the corresponding default storage parameter set for the tablespace in which the table resides. Clustered tables also automatically use the storage parameters of the cluster.

Creating Clusters

You create a cluster using the CREATE CLUSTER statement. The following statement creates a cluster named emp_dept, which stores the emp and dept tables, clustered by the deptno column:
CREATE CLUSTER emp_dept (deptno NUMBER(3))
   PCTUSED 80
   PCTFREE 5
   SIZE 600
   TABLESPACE users
   STORAGE (INITIAL 200K
      NEXT 300K
      MINEXTENTS 2
      MAXEXTENTS 20
      PCTINCREASE 33);
If no INDEX keyword is specified, as is true in this example, an index cluster is created by default. You can also create a HASH cluster by specifying hash parameters (HASHKEYS, HASH IS, or SINGLE TABLE HASHKEYS).
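A hedged sketch of a hash cluster for comparison (the cluster name, key, and sizes are illustrative only):
CREATE CLUSTER trial_cluster (trialno NUMBER(5))
   SIZE 1000
   HASHKEYS 150;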

Creating Clustered Tables

You create a table in a cluster using the CREATE TABLE statement with the CLUSTER option. The emp and dept tables can be created in the emp_dept cluster using the following statements:
CREATE TABLE emp (
   empno NUMBER(5) PRIMARY KEY,
   ename VARCHAR2(15) NOT NULL,
   . . .
   deptno NUMBER(3) REFERENCES dept)
   CLUSTER emp_dept (deptno);  

CREATE TABLE dept (
   deptno NUMBER(3) PRIMARY KEY, . . . )
   CLUSTER emp_dept (deptno);

Note:
You can specify the schema for a clustered table in the CREATE TABLE statement. A clustered table can be in a different schema than the schema containing the cluster. Also, the names of the columns are not required to match, but their structure must match.

Creating Cluster Indexes

A cluster index must be created before any rows can be inserted into any clustered table. The following statement creates a cluster index for the emp_dept cluster:
CREATE INDEX emp_dept_index
   ON CLUSTER emp_dept
   INITRANS 2
   MAXTRANS 5
   TABLESPACE users
   STORAGE (INITIAL 50K
      NEXT 50K
      MINEXTENTS 2
      MAXEXTENTS 10
      PCTINCREASE 33)
   PCTFREE 5;
The cluster index clause (ON CLUSTER) identifies the cluster, emp_dept, for which the cluster index is being created. The statement also explicitly specifies several storage settings for the cluster and cluster index.

Altering Clusters

You can alter an existing cluster to change the following settings:
  • Physical attributes (PCTFREE, PCTUSED, INITRANS, MAXTRANS, and storage characteristics)
  • The average amount of space required to store all the rows for a cluster key value (SIZE)
  • The default degree of parallelism
ALTER CLUSTER emp_dept
   PCTFREE 30
   PCTUSED 60;

Altering Clustered Tables

You can alter clustered tables using the ALTER TABLE statement. However, any data block space parameters, transaction entry parameters, or storage parameters you set in an ALTER TABLE statement for a clustered table generate an error message (ORA-01771, illegal option for a clustered table). Oracle uses the parameters of the cluster for all clustered tables. Therefore, you can use the ALTER TABLE statement only to add or modify columns, drop non-cluster key columns, or add, drop, enable, or disable integrity constraints or triggers for a clustered table.
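For example (assuming the emp table created in the emp_dept cluster above), a column change is allowed but a block-space change is not:
ALTER TABLE emp ADD (hiredate DATE);   -- allowed for a clustered table
ALTER TABLE emp PCTFREE 20;            -- fails with ORA-01771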

Altering Cluster Indexes

You alter cluster indexes exactly as you do other indexes.



Note:
When estimating the size of cluster indexes, remember that the index is on each cluster key, not the actual rows. Therefore, each key appears only once in the index.



Dropping Clusters

To drop a cluster that contains no tables, and its cluster index, use the DROP CLUSTER statement. For example, the following statement drops the empty cluster named emp_dept:
DROP CLUSTER emp_dept;
If the cluster contains one or more clustered tables and you intend to drop the tables as well, add the INCLUDING TABLES option of the DROP CLUSTER statement, as follows:
DROP CLUSTER emp_dept INCLUDING TABLES;
If the INCLUDING TABLES option is not included and the cluster contains tables, an error is returned.

If one or more tables in a cluster contain primary or unique keys that are referenced by FOREIGN KEY constraints of tables outside the cluster, the cluster cannot be dropped unless the dependent FOREIGN KEY constraints are also dropped. This can be easily done using the CASCADE CONSTRAINTS option of the DROP CLUSTER statement, as shown in the following example:
DROP CLUSTER emp_dept INCLUDING TABLES CASCADE CONSTRAINTS;
Oracle returns an error if you do not use the CASCADE CONSTRAINTS option and constraints exist.

Dropping Clustered Tables

To drop a cluster, your schema must contain the cluster or you must have the DROP ANY CLUSTER system privilege. You do not need additional privileges to drop a cluster that contains tables, even if the clustered tables are not owned by the owner of the cluster.
Clustered tables can be dropped individually without affecting the table's cluster, other clustered tables, or the cluster index. A clustered table is dropped just as a non-clustered table is dropped--with the DROP TABLE statement.

Note:
When you drop a single table from a cluster, Oracle deletes each row of the table individually. To maximize efficiency when you intend to drop an entire cluster, drop the cluster including all tables by using the DROP CLUSTER statement with the INCLUDING TABLES option. Drop an individual table from a cluster (using the DROP TABLE statement) only if you want the rest of the cluster to remain.

Dropping Cluster Indexes

A cluster index can be dropped without affecting the cluster or its clustered tables. However, clustered tables cannot be used if there is no cluster index; you must re-create the cluster index to allow access to the cluster. Cluster indexes are sometimes dropped as part of the procedure to rebuild a fragmented cluster index.

Triggers:
Like a stored procedure, a trigger is a named PL/SQL unit that is stored in the database and can be invoked repeatedly. Unlike a stored procedure, you can enable and disable a trigger, but you cannot explicitly invoke it. While a trigger is enabled, the database automatically invokes it—that is, the trigger fires.

A trigger is a PL/SQL block that fires automatically when an associated DML statement (INSERT, UPDATE, or DELETE) is executed against a database table.
  • Up to 12 combinations of DML triggers can be defined on a single table (INSERT, UPDATE, DELETE x BEFORE, AFTER x statement level, row level).
Reasons to Use Triggers
Triggers let you customize your database management system. For example, you can use triggers to:

•Automatically generate virtual column values
•Log events
•Gather statistics on table access
•Modify table data when DML statements are issued against views
•Enforce referential integrity when child and parent tables are on different nodes of a distributed database
•Publish information about database events, user events, and SQL statements to subscribing applications
•Prevent DML operations on a table after regular business hours
•Prevent invalid transactions
•Enforce complex business or referential integrity rules that you cannot define with constraints


Caution:
Triggers are not reliable security mechanisms, because they are programmatic and easy to disable. For high-assurance security, use Oracle Database Vault.
Order in Which Triggers Fire
If two or more triggers with different timing points are defined for the same statement on the same table, then they fire in this order:

1.All BEFORE STATEMENT triggers
2.All BEFORE EACH ROW triggers
3.All AFTER EACH ROW triggers
4.All AFTER STATEMENT triggers
Note: If you are creating two or more triggers with the same timing point, and the order in which they fire is important, then you can control their firing order using the FOLLOWS and PRECEDES clauses, as sketched below.
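A hedged sketch of FOLLOWS (both trigger names are hypothetical; check_sal_trg is assumed to be an existing BEFORE UPDATE row trigger on emp):
CREATE OR REPLACE TRIGGER audit_sal_trg
BEFORE UPDATE OF sal ON emp
FOR EACH ROW
FOLLOWS check_sal_trg
BEGIN
  NULL;  -- this body runs after check_sal_trg at the same timing point
END;
/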
Trigger Types:
  1. DML triggers: fired by INSERT, UPDATE, or DELETE
  2. DDL triggers: fired by CREATE, ALTER, and other DDL statements
  3. System triggers: fired by database events such as startup, shutdown, or logon
  • DDL triggers and system triggers are mostly used by the DBA.
DML Trigger:
If the trigger is created on a table or view, then the triggering event is composed of DML statements, and the trigger is called a DML trigger.
DML Trigger types:
  1. Statement Level
  2. Row Level.
Statement Level:
  • BEFORE: insert, update, delete
  • AFTER: insert, update, delete
Row Level:
  • BEFORE: insert, update, delete
  • AFTER: insert, update, delete
Statement Level:
Statement-level triggers execute once for each triggering statement. For example, if a single statement inserted 500 rows into the Customer table, a statement-level trigger on that table would execute only once.
Statement-level triggers therefore are not often used for data-related activities; they are normally used to enforce additional security measures on the types of transactions that may be performed on a table.
Statement-level triggers are the default type of triggers created and are identified by omitting the FOR EACH ROW clause in the CREATE TRIGGER command.
  • It executes the trigger body only once per triggering statement.
  • We cannot track row-by-row changes (OLD and NEW values) with a statement-level trigger.
  • It is the default trigger type.
Example:
CREATE OR REPLACE TRIGGER T3
BEFORE INSERT ON mouse
BEGIN
DBMS_OUTPUT.PUT_LINE('New employees are about to be added');
END;
Whenever rows are inserted into the mouse table, the DBMS_OUTPUT message is printed once per INSERT statement.

Row Level:
Row-level triggers for data-related activities

Row-level triggers execute once for each row in a transaction.
Row-level triggers are the most common type of triggers; they are often used in data auditing applications.
Row-level trigger is identified by the FOR EACH ROW clause in the CREATE TRIGGER command.
  • It executes the trigger body once for each affected row.
  • We can access the old and new data using the OLD and NEW correlation names, for example :NEW.empid, :NEW.empname, :OLD.sal.
Example:
CREATE or REPLACE TRIGGER T2
BEFORE Update OF unit_price
ON tempp2
FOR EACH ROW
BEGIN
INSERT INTO tempp1 VALUES (:old.product_id, :old.product_name, :old.supplier_name, :old.unit_price);
END;
Whenever the unit_price column of tempp2 is updated, the old values of the affected rows are inserted into the tempp1 table.
  • Triggers can be looked up in the USER_TRIGGERS data dictionary view.
Advantages:
  • Replication (reflecting the data into another table).
  • We can track the history of data using the OLD and NEW correlation names.
  • Data validation beyond what constraints can express.
Mutating tables:
A mutating table is a table that is currently being modified by an UPDATE, DELETE, or INSERT statement (possibly through the effects of a DELETE CASCADE constraint). (A view being modified by an INSTEAD OF trigger is not considered to be mutating.) When a trigger tries to reference a table that is in a state of flux (being changed), the table is considered "mutating" and an error is raised, because Oracle must never return inconsistent data.
Mutating errors:
  • The error is raised by a row-level trigger that queries or modifies the table on which it is defined.
  • The error can also occur if the trigger has statements that change the primary, foreign, or unique key columns of the table on which it fires.
Caution:
Oracle Database does not enforce the mutating-table restriction for a trigger that accesses remote nodes, because the database does not support declarative referential constraints between tables on different nodes of a distributed database.
Similarly, the database does not enforce the mutating-table restriction for tables in the same database that are connected by loop-back database links. A loop-back database link makes a local table appear remote by defining an Oracle Net path back to the database that contains the link.
Example Trigger Causes Mutating-Table Error
-- Create log table
DROP TABLE log;
CREATE TABLE log ( emp_id NUMBER(6),
l_name VARCHAR2(25),
f_name VARCHAR2(20));

-- Create trigger that updates log and then reads employees
CREATE OR REPLACE TRIGGER log_deletions
AFTER DELETE ON employees
FOR EACH ROW
DECLARE
n INTEGER;
BEGIN
INSERT INTO log VALUES ( :OLD.employee_id, :OLD.last_name,:OLD.first_name);
SELECT COUNT(*) INTO n FROM employees;
DBMS_OUTPUT.PUT_LINE('There are now ' || n || ' employees.');
END;
-- Issue triggering statement:
DELETE FROM employees WHERE employee_id = 197;
Result:

DELETE FROM employees WHERE employee_id = 197
*
ERROR at line 1:
ORA-04091: table HR.EMPLOYEES is mutating, trigger/function might not see it
ORA-06512: at "HR.LOG_DELETIONS", line 10
ORA-04088: error during execution of trigger 'HR.LOG_DELETIONS'


DDL triggers:
If the trigger is created on a schema or the database, then the triggering event is composed of either DDL or database operation statements, and the trigger is called a system trigger.
Autonomous transaction
An autonomous transaction is an independent transaction that is initiated by another transaction, and executes without interfering with the parent transaction. When an autonomous transaction is called, the originating transaction gets suspended. Control is returned when the autonomous transaction does a COMMIT or ROLLBACK.
A trigger or procedure can be marked as autonomous by declaring it as PRAGMA AUTONOMOUS_TRANSACTION;. You may need to increase the TRANSACTIONS parameter to allow for the extra concurrent transactions.
Example
CREATE OR REPLACE TRIGGER tab1_trig
AFTER INSERT ON tab1
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO log VALUES (SYSDATE, 'Insert on TAB1');
  COMMIT; -- only allowed in autonomous triggers
END;
/
Instead of triggers:
  • When a trigger needs to be created on a view, an INSTEAD OF trigger should be used.
  • An INSTEAD OF DML trigger is a DML trigger created on a noneditioning view, or on a nested table column of a noneditioning view. The database fires the INSTEAD OF trigger instead of running the triggering DML statement. An INSTEAD OF trigger cannot be conditional.
  • An INSTEAD OF trigger is the only way to update a view that is not inherently updatable.
  • An INSTEAD OF trigger is always a row-level trigger. An INSTEAD OF trigger can read OLD and NEW values, but cannot change them.
Examples:
CREATE OR REPLACE TRIGGER Dept_emplist_tr
INSTEAD OF INSERT ON Dept_view
REFERENCING NEW AS Employee
PARENT AS Department
FOR EACH ROW
BEGIN
-- The insert on the nested table is translated to an insert on the base table:
INSERT INTO Emp_tab VALUES (
   :Employee.Empno, :Employee.Empname, :Employee.Salary, :Department.Deptno);
END;
Compound DML Triggers:
A compound DML trigger created on a table or editioning view can fire at multiple timing points. Each timing point section has its own executable part and optional exception-handling part, but all of these parts can access a common PL/SQL state. The common state is established when the triggering statement starts and is destroyed when the triggering statement completes, even when the triggering statement causes an error.
A compound DML trigger created on a noneditioning view is not really compound, because it has only one timing point section.
A compound trigger can be conditional, but not autonomous.
Two common uses of compound triggers are:
• To accumulate rows destined for a second table so that you can periodically bulk-insert them
• To avoid the mutating-table error (ORA-04091)
Compound DML Trigger Structure
The optional declarative part of a compound trigger declares variables and subprograms that all of its timing-point sections can use. When the trigger fires, the declarative part runs before any timing-point sections run. The variables and subprograms exist for the duration of the triggering statement.
A compound DML trigger created on a noneditioning view is not really compound, because it has only one timing point section. The syntax for creating the simplest compound DML trigger on a noneditioning view is:
CREATE trigger FOR dml_event_clause ON view
COMPOUND TRIGGER
INSTEAD OF EACH ROW IS BEGIN
  statement;
END INSTEAD OF EACH ROW;
A compound DML trigger created on a table or editioning view has at least one timing-point section in Table 9-2. If the trigger has multiple timing-point sections, they can be in any order, but no timing-point section can be repeated. If a timing-point section is absent, then nothing happens at its timing point.
Table 9-2 Compound Trigger Timing-Point Sections
  • BEFORE STATEMENT: before the triggering statement runs
  • AFTER STATEMENT: after the triggering statement runs
  • BEFORE EACH ROW: before each row that the triggering statement affects
  • AFTER EACH ROW: after each row that the triggering statement affects

A compound DML trigger does not have an initialization section, but the BEFORE STATEMENT section, which runs before any other timing-point section, can do any necessary initializations.


If a compound DML trigger has neither a BEFORE STATEMENT section nor an AFTER STATEMENT section, and its triggering statement affects no rows, then the trigger never fires.
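A minimal skeleton showing the declarative part and all four timing-point sections (the table name emp is hypothetical):
CREATE OR REPLACE TRIGGER emp_compound_trg
  FOR INSERT OR UPDATE OR DELETE ON emp
  COMPOUND TRIGGER
  -- declarative part: state shared by all timing-point sections
  g_rowcount PLS_INTEGER := 0;

  BEFORE STATEMENT IS
  BEGIN
    g_rowcount := 0;                 -- per-statement initialization
  END BEFORE STATEMENT;

  BEFORE EACH ROW IS
  BEGIN
    NULL;
  END BEFORE EACH ROW;

  AFTER EACH ROW IS
  BEGIN
    g_rowcount := g_rowcount + 1;    -- runs once per affected row
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE(g_rowcount || ' row(s) affected');
  END AFTER STATEMENT;
END emp_compound_trg;
/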
Compound DML Trigger Restrictions
OLD, NEW, and PARENT cannot appear in the declarative part, the BEFORE STATEMENT section, or the AFTER STATEMENT section.
Only the BEFORE EACH ROW section can change the value of NEW.
A timing-point section cannot handle exceptions raised in another timing-point section.
If a timing-point section includes a GOTO statement, the target of the GOTO statement must be in the same timing-point section.
Performance Benefit of Compound DML Triggers
A compound DML trigger has a performance benefit when the triggering statement affects many rows.
For example, suppose that this statement triggers a compound DML trigger that has all four timing-point sections.

INSERT INTO Target
  SELECT c1, c2, c3
  FROM Source
  WHERE Source.c1 > 0
Although the BEFORE EACH ROW and AFTER EACH ROW sections of the trigger run for each row of Source whose column c1 is greater than zero, the BEFORE STATEMENT section runs only before the INSERT statement runs and the AFTER STATEMENT section runs only after the INSERT statement runs.
A compound DML trigger has a greater performance benefit when it uses bulk SQL.

A compound DML trigger is useful for accumulating rows destined for a second table so that you can periodically bulk-insert them. To get the performance benefit from the compound trigger, you must specify BULK COLLECT INTO in the FORALL statement (otherwise, the FORALL statement does a single-row DML operation multiple times).
Using Compound DML Triggers to Avoid Mutating-Table Error
A compound DML trigger is useful for avoiding the mutating-table error (ORA-04091).
Scenario: A business rule states that an employee's salary increase must not exceed 10% of the average salary for the employee's department. This rule must be enforced by a trigger.
Solution: Define a compound trigger on updates of the table hr.employees, as sketched below.
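A hedged sketch of that solution (simplified; it assumes the hr.employees columns salary and department_id, and that department_id is not null for the updated rows):
CREATE OR REPLACE TRIGGER check_salary_raise
  FOR UPDATE OF salary ON hr.employees
  COMPOUND TRIGGER
  TYPE avg_sal_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  dept_avg_sal avg_sal_t;   -- department averages, captured once per statement

  BEFORE STATEMENT IS
  BEGIN
    -- Query hr.employees here, before any rows change, so no ORA-04091 is raised.
    FOR r IN (SELECT department_id, AVG(salary) avg_sal
                FROM hr.employees
               WHERE department_id IS NOT NULL
               GROUP BY department_id) LOOP
      dept_avg_sal(r.department_id) := r.avg_sal;
    END LOOP;
  END BEFORE STATEMENT;

  AFTER EACH ROW IS
  BEGIN
    IF :NEW.salary - :OLD.salary > 0.10 * dept_avg_sal(:NEW.department_id) THEN
      RAISE_APPLICATION_ERROR(-20000, 'Raise exceeds 10% of the department average salary');
    END IF;
  END AFTER EACH ROW;
END check_salary_raise;
/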
Triggers for Ensuring Referential Integrity
You can use triggers and constraints together to maintain referential integrity between parent and child tables, as summarized below:
Parent table:
  • Constraint to declare: PRIMARY KEY or UNIQUE.
  • Triggers to create: one or more triggers that ensure that when PRIMARY KEY or UNIQUE values are updated or deleted, the desired action (RESTRICT, CASCADE, or SET NULL) occurs on corresponding FOREIGN KEY values. No action is required for inserts into the parent table, because no dependent foreign keys exist.
Child table:
  • Constraint to declare: FOREIGN KEY, if parent and child are in the same database. (The database does not support declarative referential constraints between tables on different nodes of a distributed database.) Disable this foreign key constraint to prevent the corresponding PRIMARY KEY or UNIQUE constraint from being dropped (except explicitly with the CASCADE option).
  • Triggers to create: one trigger that ensures that values inserted or updated in the FOREIGN KEY correspond to PRIMARY KEY or UNIQUE values in the parent table.




System Triggers
A system trigger is created on either a schema or the database. Its triggering event is composed of either DDL statements (listed in "ddl_event") or database operation statements (listed in "database_event").
A system trigger fires at exactly one of these timing points:
  • Before the triggering statement runs (the trigger is called a BEFORE statement trigger or statement-level BEFORE trigger).
  • After the triggering statement runs (the trigger is called an AFTER statement trigger or statement-level AFTER trigger).
  • Instead of the triggering CREATE statement (the trigger is called an INSTEAD OF CREATE trigger).
Types:
  • SCHEMA triggers
  • DATABASE triggers
  • INSTEAD OF CREATE triggers
SCHEMA Triggers
A SCHEMA trigger is created on a schema and fires whenever the user who owns it is the current user and initiates the triggering event.
Example: BEFORE Statement Trigger on Sample Schema HR
CREATE OR REPLACE TRIGGER drop_trigger
  BEFORE DROP ON hr.SCHEMA
  BEGIN
    RAISE_APPLICATION_ERROR (
      num => -20000,
      msg => 'Cannot drop object');
  END;
/
DATABASE Triggers:
A DATABASE trigger is created on the database and fires whenever any database user initiates the triggering event.
Note:
An AFTER SERVERERROR trigger fires only if Oracle relational database management system (RDBMS) determines that it is safe to fire error triggers.
Example: Trigger that monitors logons.


CREATE OR REPLACE TRIGGER check_user
  AFTER LOGON ON DATABASE
  BEGIN
    check_user;
  EXCEPTION
    WHEN OTHERS THEN
      RAISE_APPLICATION_ERROR
        (-20000, 'Unexpected error: '|| DBMS_Utility.Format_Error_Stack);
 END;
/
INSTEAD OF CREATE Triggers:
An INSTEAD OF CREATE trigger is a SCHEMA trigger whose triggering event is a CREATE statement. The database fires the trigger instead of executing its triggering statement.
Example INSTEAD OF CREATE Trigger on Schema


CREATE OR REPLACE TRIGGER t
  INSTEAD OF CREATE ON SCHEMA
  BEGIN
    EXECUTE IMMEDIATE 'CREATE TABLE T (n NUMBER, m NUMBER)';
  END;
/
Trigger Design Guidelines
  • Use triggers to ensure that whenever a specific event occurs, any necessary actions are done (regardless of which user or application issues the triggering statement).
For example, use a trigger to ensure that whenever anyone updates a table, its log file is updated.
  • Do not create triggers that duplicate database features.
For example, do not create a trigger to reject invalid data if you can do the same with constraints.
  • Do not create triggers that depend on the order in which a SQL statement processes rows (which can vary).
For example, do not assign a value to a global package variable in a row trigger if the current value of the variable depends on the row being processed by the row trigger. If a trigger updates global package variables, initialize those variables in a BEFORE statement trigger.
  • Use BEFORE row triggers to modify the row before writing the row data to disk.
  • Use AFTER row triggers to obtain the row ID and use it in operations.
An AFTER row trigger fires when the triggering statement results in ORA-2292.
Note:
AFTER row triggers are slightly more efficient than BEFORE row triggers. With BEFORE row triggers, affected data blocks are read first for the trigger and then for the triggering statement. With AFTER row triggers, affected data blocks are read only for the trigger.
  • If the triggering statement of a BEFORE statement trigger is an UPDATE or DELETE statement that conflicts with an UPDATE statement that is running, then the database does a transparent ROLLBACK to SAVEPOINT and restarts the triggering statement. The database can do this many times before the triggering statement completes successfully. Each time the database restarts the triggering statement, the trigger fires. The ROLLBACK to SAVEPOINT does not undo changes to package variables that the trigger references. To detect this situation, include a counter variable in the package.
  • Do not create recursive triggers.
For example, do not create an AFTER UPDATE trigger that issues an UPDATE statement on the table on which the trigger is defined. The trigger fires recursively until it runs out of memory.
  • If you create a trigger that includes a statement that accesses a remote database, then put the exception handler for that statement in a stored subprogram and invoke the subprogram from the trigger.
  • Use DATABASE triggers judiciously. They fire every time any database user initiates a triggering event.
  • If a trigger runs the following statement, the statement returns the owner of the trigger, not the user who is updating the table:
SELECT Username FROM USER_USERS;
  • Only committed triggers fire.
A trigger is committed, implicitly, after the CREATE TRIGGER statement that creates it succeeds. Therefore, the following statement cannot fire the trigger that it creates:
CREATE OR REPLACE TRIGGER my_trigger
AFTER CREATE ON DATABASE
BEGIN
NULL;
END;
/
  • To allow the modular installation of applications that have triggers on the same tables, create multiple triggers of the same type, rather than a single trigger that runs a sequence of operations.
Each trigger sees the changes made by the previously fired triggers. Each trigger can see OLD and NEW values.

    Trigger Restrictions
In addition to the restrictions that apply to all PL/SQL units, triggers have these restrictions:
  • Trigger size restriction: the size of a trigger cannot exceed 32K. If the logic for your trigger requires much more than 60 lines of PL/SQL source text, put most of the source text in a stored subprogram and invoke the subprogram from the trigger (see the sketch after this list).
  • Trigger LONG and LONG RAW data type restrictions (see below).
  • Mutating-table restriction (see the Mutating tables section above).
  • Only an autonomous trigger can run TCL or DDL statements.
  • A trigger cannot invoke a subprogram that runs transaction control statements, because the subprogram runs in the context of the trigger body.
  • A trigger cannot access a SERIALLY_REUSABLE package.
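A hedged sketch of that pattern (procedure, trigger, and column names are hypothetical):
CREATE OR REPLACE PROCEDURE process_new_emp (p_empno NUMBER, p_sal NUMBER) IS
BEGIN
  NULL;  -- the bulk of the logic lives here, outside the 32K trigger limit
END;
/

CREATE OR REPLACE TRIGGER emp_ins_trg
AFTER INSERT ON emp
FOR EACH ROW
BEGIN
  process_new_emp(:NEW.empno, :NEW.sal);  -- the trigger itself stays small
END;
/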
Trigger LONG and LONG RAW Data Type Restrictions
Note:
Oracle supports the LONG and LONG RAW data types only for backward compatibility with existing applications.
In addition to the restrictions that apply to all PL/SQL units, triggers have these restrictions:
  • A trigger cannot declare a variable of the LONG or LONG RAW data type.
  • A SQL statement in a trigger can reference a LONG or LONG RAW column only if the column data can be converted to the data type CHAR or VARCHAR2.
  • A trigger cannot use the correlation name NEW or PARENT with a LONG or LONG RAW column.


Collection:
A collection is an ordered group of elements all of the same type. It is a general concept that encompasses lists, arrays, and other familiar data types.
Each element has a unique subscript that determines its position in the collection.
The data stored in the collection may be accessed more rapidly by the database than if you were to use two tables instead.
Advantages:
  • Bulk data can be processed in a single operation.
  • Better performance.
  • Fewer fetches and context switches between the SQL and PL/SQL engines.
Types of collections: 
  1. Associative arrays (formerly known as index-by tables)
  2. Varrays
  3. Nested tables
List of Collection methods (a short sketch follows this list):
  1. EXISTS(row) returns TRUE if the row specified exists.
  2. COUNT returns the number of rows.
  3. FIRST returns the row number of the first populated row.
  4. LAST returns the row number of the last populated row.
  5. PRIOR(row) returns the row number of the last row populated before the row specified.
  6. NEXT(row) returns the row number of the next row populated after the row specified.
  7. DELETE removes all rows.
  8. DELETE(row) removes the specified row.
  9. DELETE(start_row,end_row) removes all rows between and including the start_row and end_row.
  10. TRIM removes the last row.
  11. TRIM(n) removes the last n rows.
  12. EXTEND adds one row.
  13. EXTEND(n) adds n rows.
  14. EXTEND(n,m) adds n copies of row m.
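A short hedged sketch exercising several of these methods on a nested table:
DECLARE
  TYPE name_tab_t IS TABLE OF VARCHAR2(25);
  names name_tab_t := name_tab_t('Anu', 'Babu');
BEGIN
  names.EXTEND(2);                                     -- add two empty rows
  names(3) := 'Chitra';
  names(4) := 'Devi';
  DBMS_OUTPUT.PUT_LINE('COUNT = '   || names.COUNT);   -- 4
  DBMS_OUTPUT.PUT_LINE('FIRST = '   || names.FIRST);   -- 1
  DBMS_OUTPUT.PUT_LINE('LAST  = '   || names.LAST);    -- 4
  names.DELETE(2);                                     -- remove row 2
  DBMS_OUTPUT.PUT_LINE('NEXT(1) = ' || names.NEXT(1)); -- 3, because row 2 was deleted
  names.TRIM;                                          -- remove the last row
  DBMS_OUTPUT.PUT_LINE('COUNT = '   || names.COUNT);   -- 2
END;
/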
Index-by-tables:
An index-by table (associative array) is indexed by BINARY_INTEGER, which can hold positive and negative values, or by a VARCHAR2 key.
  • An associative array is a set of key-value pairs.
  • You can get the value from the array using the key, which may be a string or an integer.
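The program below uses a BINARY_INTEGER index; for contrast, here is a hedged sketch of a string-keyed associative array (all names are hypothetical):
DECLARE
  TYPE capital_t IS TABLE OF VARCHAR2(30) INDEX BY VARCHAR2(30);
  capital capital_t;
  k VARCHAR2(30);
BEGIN
  capital('Tamil Nadu') := 'Chennai';
  capital('Kerala')     := 'Thiruvananthapuram';
  k := capital.FIRST;                    -- keys are returned in string order
  WHILE k IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(k || ' -> ' || capital(k));
    k := capital.NEXT(k);
  END LOOP;
END;
/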
Program:
declare
Type typ1 is table of varchar2(25) index by binary_integer;
name1 typ1;
cursor c1 is select name from mouse;
begin
open c1;
loop
fetch c1 bulk collect into name1 limit 5;
if(name1.count>0) then
dbms_output.put_line(name1.first);
dbms_output.put_line(name1.last);
dbms_output.put_line(name1.count);
for i in name1.first..name1.last
loop
dbms_output.put_line('Name is : ' || name1(i));
end loop;
end if;
exit when c1%notfound;
end loop;
close c1;
end;


Nested Tables:
  • A nested table is a table that is embedded within another table.
  • You can insert, update, and delete individual elements in a nested table.
  • Because you can modify individual elements in a nested table, this makes them more flexible than a varray.
  • The elements for nested tables are stored in separate tables.
Program:
declare
Type typ1 is table of varchar2(25);
name1 typ1;
cursor c1 is select name from mouse;
begin
open c1;
loop
fetch c1 bulk collect into name1 limit 5;
if(name1.count>0) then
dbms_output.put_line(name1.first);
dbms_output.put_line(name1.last);
dbms_output.put_line(name1.count);
for i in name1.first..name1.last
loop
dbms_output.put_line('Name is : ' || name1(i));
end loop;
end if;
exit when c1%notfound;
end loop;
close c1;
end;
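Since nested table elements are stored in a separate storage table, using a nested table as a column type needs a STORE AS clause; a hedged DDL sketch (type, table, and storage-table names are hypothetical):
CREATE OR REPLACE TYPE phone_list_t AS TABLE OF VARCHAR2(20);
/
CREATE TABLE customers (
  cust_id NUMBER PRIMARY KEY,
  phones  phone_list_t
) NESTED TABLE phones STORE AS customers_phones_nt;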
Varrays:
  • You can use a varray to store an ordered set of elements having an index associated with it.
  • The elements in a varray are of the same type.
  • A varray has a maximum size that you set when creating it.
  • Elements in a varray can only be modified as a whole, not individually.
  • The elements stored in a varray are stored with the table when the size of the varray is 4KB or less, otherwise the varray is stored outside of the table.
  • When a varray is stored with the table, accessing its elements is faster than accessing elements in a nested table.
Program:
declare
Type typ1 is VARRAY(5) of varchar2(25);
name1 typ1;
cursor c1 is select name from mouse;
begin
open c1;
loop
fetch c1 bulk collect into name1 limit 5;
if(name1.count>0) then
dbms_output.put_line(name1.first);
dbms_output.put_line(name1.last);
dbms_output.put_line(name1.count);
for i in name1.first..name1.last
loop
dbms_output.put_line('Name is : ' || name1(i));
end loop;
end if;
exit when c1%notfound;
end loop;
close c1;
end;
Creating Collection Data type:
CREATE Or Replace TYPE Address AS VARRAY(2) OF VARCHAR2(100);
Assigning in Table:
Create table table_name (Addressdata Address);
Insert the value:
Insert into table_name (Addressdata) values (Address('102, Nehru nagar', 'Chennai-118'));
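To read the stored elements back, the TABLE operator can unnest the collection column; a hedged sketch against the hypothetical table above:
SELECT a.COLUMN_VALUE AS address_line
  FROM table_name t, TABLE(t.Addressdata) a;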
Avoiding Collection Exceptions
In most cases, if you reference a nonexistent collection element, PL/SQL raises a predefined exception. Consider the following example:
DECLARE
TYPE NumList IS TABLE OF NUMBER;
nums NumList; -- atomically null
BEGIN
/* Assume execution continues despite the raised exceptions. */
nums(1) := 1; -- raises COLLECTION_IS_NULL (1)
nums := NumList(1,2); -- initialize table
nums(NULL) := 3; -- raises VALUE_ERROR (2)
nums(0) := 3; -- raises SUBSCRIPT_OUTSIDE_LIMIT (3)
nums(3) := 3; -- raises SUBSCRIPT_BEYOND_COUNT (4)
nums.DELETE(1); -- delete element 1
IF nums(1) = 1 THEN ... -- raises NO_DATA_FOUND (5)
The following list shows when a given exception is raised:
  • COLLECTION_IS_NULL: you try to operate on an atomically null collection.
  • NO_DATA_FOUND: a subscript designates an element that was deleted, or a nonexistent element of an associative array.
  • SUBSCRIPT_BEYOND_COUNT: a subscript exceeds the number of elements in a collection.
  • SUBSCRIPT_OUTSIDE_LIMIT: a subscript is outside the allowed range.
  • VALUE_ERROR: a subscript is null or not convertible to the key type. This exception might occur if the key is defined as a PLS_INTEGER range, and the subscript is outside this range.
Bulk Binds :
The assigning of values to PL/SQL variables in SQL statements is called binding. PL/SQL binding operations fall into three categories:
  1. In-bind: when a PL/SQL variable or host variable is stored in the database by an INSERT or UPDATE statement.
  2. Out-bind: when a database value is assigned to a PL/SQL variable or a host variable by the RETURNING clause of an INSERT, UPDATE, or DELETE statement.
  3. Define: when a database value is assigned to a PL/SQL variable or a host variable by a SELECT or FETCH statement.
A DML statement can transfer all the elements of a collection in a single operation, a process known as bulk binding. If the collection has 20 elements, bulk binding lets you perform the equivalent of 20 SELECT, INSERT, UPDATE, or DELETE statements using a single operation. This technique improves performance by minimizing the number of context switches between the PL/SQL and SQL engines. With bulk binds, entire collections, not just individual elements, are passed back and forth.
To do bulk binds with INSERT, UPDATE, and DELETE statements, you enclose the SQL statement within a PL/SQL FORALL statement.
To do bulk binds with SELECT statements, you include the BULK COLLECT clause in the SELECT statement instead of using INTO.
Example: Performing a Bulk Bind with DELETE
The following DELETE statement is sent to the SQL engine just once, even though it performs three DELETE operations:
DECLARE
TYPE NumList IS VARRAY(20) OF NUMBER;
depts NumList := NumList(10, 30, 70); -- department numbers
BEGIN
FORALL i IN depts.FIRST..depts.LAST
DELETE FROM emp WHERE deptno = depts(i);
END;
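A companion sketch for SELECT with BULK COLLECT (the emp table is assumed, as in the example above):
DECLARE
  TYPE EnameList IS TABLE OF emp.ename%TYPE;
  names EnameList;
BEGIN
  SELECT ename BULK COLLECT INTO names
    FROM emp
   WHERE deptno = 10;
  DBMS_OUTPUT.PUT_LINE('Fetched ' || names.COUNT || ' names in a single operation');
END;
/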
Handling FORALL Exceptions with the %BULK_EXCEPTIONS Attribute
The syntax follows:
FORALL index IN lower_bound..upper_bound SAVE EXCEPTIONS
{insert_stmt | update_stmt | delete_stmt}
  • All exceptions raised during the execution are saved in the new cursor attribute %BULK_EXCEPTIONS, which stores a collection of records. Each record has two fields. The first field, %BULK_EXCEPTIONS(i).ERROR_INDEX, holds the "iteration" of the FORALL statement during which the exception was raised. The second field, %BULK_EXCEPTIONS(i).ERROR_CODE, holds the corresponding Oracle error code.
  • The values stored by %BULK_EXCEPTIONS always refer to the most recently executed FORALL statement. The number of exceptions is saved in the count attribute of %BULK_EXCEPTIONS, that is, %BULK_EXCEPTIONS.COUNT. Its subscripts range from 1 to COUNT.
  • If you omit the keywords SAVE EXCEPTIONS, execution of the FORALL statement stops when an exception is raised. In that case, SQL%BULK_EXCEPTIONS.COUNT returns 1, and SQL%BULK_EXCEPTIONS contains just one record. If no exception is raised during execution, SQL%BULK_EXCEPTIONS.COUNT returns 0.
The following example shows how useful the cursor attribute %BULK_EXCEPTIONS can be:
DECLARE
TYPE NumList IS TABLE OF NUMBER;
num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
errors NUMBER;
dml_errors EXCEPTION;
PRAGMA exception_init(dml_errors, -24381);
BEGIN
FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
DELETE FROM emp WHERE sal > 500000/num_tab(i);
EXCEPTION
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
dbms_output.put_line('Number of errors is ' || errors);
FOR i IN 1..errors LOOP
dbms_output.put_line('Error ' || i || ' occurred during '||
'iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
dbms_output.put_line('Oracle error is ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
Output:
Number of errors is 3
Error 1 occurred during iteration 2
Oracle error is ORA-01476: divisor is equal to zero
Error 2 occurred during iteration 6
Oracle error is ORA-01476: divisor is equal to zero
Error 3 occurred during iteration 10
Oracle error is ORA-01476: divisor is equal to zero

Restrictions on BULK COLLECT
The following restrictions apply to the BULK COLLECT clause:
  • You cannot bulk collect into an associative array that has a string type for the key.
  • You can use the BULK COLLECT clause only in server-side programs (not in client-side programs). Otherwise, you get the error this feature is not supported in client-side programs.
  • All targets in a BULK COLLECT INTO clause must be collections, as the following example shows:
DECLARE
TYPE NameList IS TABLE OF emp.ename%TYPE;
names NameList;
salary emp.sal%TYPE;
BEGIN
SELECT ename, sal BULK COLLECT INTO names, salary -- illegal target
FROM emp WHERE ROWNUM < 50;

END;
  • Composite targets (such as objects) cannot be used in the RETURNING INTO clause. Otherwise, you get the error unsupported feature with RETURNING clause.
  • When implicit datatype conversions are needed, multiple composite targets cannot be used in the BULK COLLECT INTO clause.
  • When an implicit datatype conversion is needed, a collection of a composite target (such as a collection of objects) cannot be used in the BULK COLLECT INTO clause.

