Wednesday, 14 September 2011

Various Cursor Attributes

SQL%ROWCOUNT: Number of rows affected by most recent SQL statement.
SQL%FOUND: Boolean attribute that evaluates to TRUE if most recent SQL statement affects one or more rows.
SQL%NOTFOUND: Boolean attribute that evaluates to TRUE if most recent SQL statement does not affect any row.
SQL%ISOPEN: Always evaluates to FALSE because PL/SQL closes implicit cursors immediately after they are executed.
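For example, the attributes can be checked immediately after a DML statement (table and column names here are illustrative):

```sql
BEGIN
   UPDATE emp
      SET sal = sal * 1.1
    WHERE deptno = 10;

   -- The SQL% attributes describe the most recent implicit cursor
   DBMS_OUTPUT.put_line ('Rows updated: ' || SQL%ROWCOUNT);

   IF SQL%NOTFOUND
   THEN
      DBMS_OUTPUT.put_line ('No rows were affected');
   END IF;
END;
/
```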

Various Block Co-ordination Properties

The various Block Coordination Properties are
a) Immediate
Default setting. The detail records are fetched when the master record is displayed.
b) Deferred with Auto Query
Oracle Forms defers fetching the detail records until the operator navigates to the detail block.
c) Deferred with No Auto Query
The operator must navigate to the detail block and explicitly execute a query.

Types of Triggers in Forms

Block-processing triggers: - Block processing triggers fire in response to events related to record management in a block. E.g. When-Create-Record, When-Clear-Block, When-Database-Record, When-Remove-Record

Interface event triggers: - Interface event triggers fire in response to events that occur in the form interface. Some of these triggers, such as When-Button-Pressed, fire only in response to operator input or manipulation. Others, like When-Window-Activated, can fire in response to both operator input and programmatic control. E.g. When-Button-Pressed, When-Checkbox-Changed, Key- [all], When-Radio-Changed, When-Timer-Expired, When-Window-Activated, When-Window-Resized

Master-detail triggers: - Form Builder generates master-detail triggers automatically when you define a master-detail relation between blocks. The default master-detail triggers enforce coordination between records in a detail block and the master record in a master block. Unless you are developing your own custom block-coordination scheme, you do not need to define these triggers yourself. Instead, simply create a relation object, and let Form Builder generate the triggers required to manage coordination between the master and detail blocks in the relation. E.g. On-Check-Delete-Master, On-Clear-Details, On-Populate-Details

Message-handling triggers: - Form Builder automatically issues appropriate error and informational messages in response to runtime events. Message handling triggers fire in response to these default messaging events. E.g. On-Error, On-Message

Navigational triggers: - Navigational triggers fire in response to navigational events. For instance, when the operator clicks on a text item in another block, navigational events occur as Form Builder moves the input focus from the current item to the target item. Navigational events occur at different levels of the Form Builder object hierarchy (Form, Block, Record, Item). Navigational triggers can be further sub-divided into two categories: Pre- and Post- triggers, and When-New-Instance triggers. Pre- and Post- triggers fire as Form Builder navigates internally through different levels of the object hierarchy. As you might expect, these triggers fire in response to navigation initiated by an operator, such as pressing the [Next Item] key. However, be aware that these triggers also fire in response to internal navigation that Form Builder performs during default processing. To avoid unexpected results, you must consider such internal navigation when you use these triggers. E.g. Pre-Form, Pre-Block, Pre-Text-Item, Post-Text-Item, Post-Record, Post-Block, Post-Form
When-New-Instance triggers fire at the end of a navigational sequence that places the input focus in a different item. Specifically, these triggers fire just after Form Builder moves the input focus to a different item, when the form returns to a quiet state to wait for operator input. Unlike the Pre- and Post- navigational triggers, the When-New-Instance triggers do not fire in response to internal navigational events that occur during default form processing. E.g. When-New-Form-Instance, When-New-Block-Instance, When-New-Record-Instance, When-New-Item-Instance

Query-time triggers: - Query-time triggers fire just before and just after the operator or the application executes a query in a block. E.g. Pre-Query, Post-Query

Transactional triggers: - Transactional triggers fire in response to a wide variety of events that occur as a form interacts with the data source. E.g. On-Delete, On-Insert, On-Lock, On-Logon, On-Update, Post-Database-Commit, Post-Delete, Post-Forms-Commit, Post-Insert, Post-Update, Pre-Commit, Pre-Delete, Pre-Insert, Pre-Update

Validation triggers: - Validation triggers fire when Form Builder validates data in an item or record. Form Builder performs validation checks during navigation that occurs in response to operator input, programmatic control, or default processing, such as a Commit operation. E.g. When-Validate-Item, When-Validate-Record


-- Delete duplicate rows, keeping the one with the highest ROWID:
delete abc a where a.rowid < (Select Max(b.rowid) from abc b where a.a=b.a and a.b=b.b);

-- Cumulative (running) total of salaries using the analytic SUM:
select sal, sum(sal) over (order by sal) CumTot from salary;

Types of Canvases

Content Canvas (Default Canvas) [A content canvas is required on each window you create]
Stacked Canvas [You can display more than one stacked canvas in a window at the same time]
Tab Canvas [A tab canvas has one or more tab pages]
Toolbar Canvas [A toolbar canvas is often used to create toolbar windows. There are two types of toolbar canvas:
Horizontal Toolbar Canvas: - Horizontal toolbar canvases are displayed at the top of the window, just under the main menu bar.
Vertical Toolbar Canvas: - Vertical toolbar canvases are displayed along the left edge of the window.]

Static & Dynamic LOV

The static LOV contains the predetermined values while the dynamic LOV contains values that come at run time
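A dynamic LOV is usually driven by a record group that is (re)populated at runtime. A sketch, assuming a record group named emp_group behind the LOV (names and query are illustrative):

```sql
-- e.g. in a WHEN-BUTTON-PRESSED trigger, before showing the LOV
DECLARE
   v_status NUMBER;
BEGIN
   v_status := POPULATE_GROUP_WITH_QUERY ('emp_group',
                  'SELECT empno, ename FROM emp WHERE deptno = :dept.deptno');

   IF v_status <> 0
   THEN
      MESSAGE ('Could not populate the LOV record group');
   END IF;
END;
```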

Sequence of Trigger Fire while Committing

• Key-Commit
• Pre-Commit
• Pre/On/Post-Delete
• Pre/On/Post-Update
• Pre/On/Post-Insert
• On-Commit
• Post-Database-Commit

ROWCOUNT in a cursor FOR loop (Sure, for all the good it will do you)

Question:
How do I access the %ROWCOUNT for an implicit cursor in a cursor FOR loop, as in (select * from dept) loop?
Answer:
Raymond, if you want to get information about the status of your cursor inside a cursor FOR loop, you should NOT use an implicit cursor within the loop statement, as you suggest in your question. In other words, if I worked with a table defined as follows:
CREATE TABLE question (
title VARCHAR2(100),
description VARCHAR2(2000));

INSERT INTO question VALUES ('Use implicit?',
'How can I access SQL% cursor attributes inside a cursor FOR loop
with an implicit cursor');

INSERT INTO question VALUES ('Use explicit?',
'How can I access cursor attributes inside a cursor FOR loop
with an explicit cursor?');
then I would write code like this:
DECLARE
CURSOR question_cur
IS
SELECT *
FROM question;
BEGIN
FOR rec IN question_cur
LOOP
DBMS_OUTPUT.put_line (question_cur%ROWCOUNT);
END LOOP;
END;
/
You can, in fact, reference SQL%ROWCOUNT inside a cursor FOR loop built around an implicit cursor, but it turns out that the SQL% cursor attributes are not changed by the actions of an implicit cursor-based FOR loop. Check out the rowcount.sql script to see a demonstration of this behavior.
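A minimal sketch of that behavior, in the spirit of the rowcount.sql demonstration:

```sql
BEGIN
   FOR rec IN (SELECT * FROM question)
   LOOP
      NULL;  -- rows are fetched here, but SQL%ROWCOUNT is not maintained
   END LOOP;

   -- Prints NULL, not the number of rows fetched by the loop
   DBMS_OUTPUT.put_line (NVL (TO_CHAR (SQL%ROWCOUNT), 'NULL'));
END;
/
```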
And if you don't really need to obtain attribute information for each row fetched, you are even better off using a BULK COLLECT, as in:
DECLARE
TYPE question_tc IS TABLE OF question%ROWTYPE
INDEX BY PLS_INTEGER;

l_questions question_tc;
BEGIN
SELECT *
BULK COLLECT INTO l_questions FROM question;

DBMS_OUTPUT.put_line (SQL%ROWCOUNT);
DBMS_OUTPUT.put_line (l_questions.COUNT);
END;
/

Report Level Triggers (Sequence)

• Before parameter form
• After parameter form
• Before Report
• Between Pages
. After Report

Property Class & Visual Attributes

A property class is a named object that contains a list of properties and their settings. Once you create a property class you can base other objects on it. An object based on a property class can inherit the setting of any property in the class that makes sense for that object.
Property class inheritance is an instance of subclassing. Conceptually, you can consider a property class as a universal subclassing parent. Property classes are separate objects and, as such, can be copied between modules as needed. Perhaps more importantly, property classes can be subclassed in any number of modules.
Property class inheritance is a powerful feature that allows you to quickly define objects that conform to your own interface and functionality standards. Property classes also allow you to make global changes to applications quickly: by simply changing the definition of a property class, you can change the definition of all objects that inherit properties from that class.
Visual attributes are the font, color, and pattern properties that you set for form and menu objects that appear in your application's interface. Every interface object has a Visual Attribute Group property that determines how the object's individual visual attribute settings (Font Size, Foreground Color, etc.) are derived.
Named visual attributes define only font, color, and pattern attributes; property classes can contain these and any other properties.
You can change the appearance of objects at runtime by changing the named visual attribute programmatically; property class assignment cannot be changed programmatically.
When an object is inheriting from both a property class and a named visual attribute, the named visual attribute settings take precedence, and any visual attribute properties in the class are ignored.
A property class can contain triggers; a named visual attribute cannot.

PL/SQL Tables (Arrays)

PL/SQL tables are declared in the declaration portion of the block. A table is a composite datatype in PL/SQL. A PL/SQL table has one column and a primary key, neither of which can be named. The column can be of any scalar type, but the primary key must be of type BINARY_INTEGER.
Rules for PL/SQL Tables:
A loop must be used to insert values into a PL/SQL table.
You cannot use the DELETE command to delete the contents of a PL/SQL table. You must assign an empty table of the same type to the PL/SQL table being deleted.
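The rules above can be sketched as follows (type and variable names are illustrative):

```sql
DECLARE
   TYPE name_tab IS TABLE OF VARCHAR2 (30)
      INDEX BY BINARY_INTEGER;

   l_names name_tab;
BEGIN
   -- Values are inserted one at a time, typically in a loop
   FOR i IN 1 .. 5
   LOOP
      l_names (i) := 'Name ' || i;
   END LOOP;

   -- "Delete" the contents by assigning an empty table of the same type
   DECLARE
      l_empty name_tab;
   BEGIN
      l_names := l_empty;
   END;

   DBMS_OUTPUT.put_line (l_names.COUNT);  -- now 0
END;
/
```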

Object Libraries

The Object Library provides an easy method of reusing objects and enforcing standards across the entire development organization.
Object Library can be used to:

Create, store, maintain, and distribute standard and reusable objects.
Rapidly create applications by dragging and dropping predefined objects to your form.

There are several advantages to using object libraries to develop applications:
Object libraries are automatically re-opened when you start up Form Builder, making your reusable objects immediately accessible.
You can associate multiple object libraries with an application. For example, you can create an object library specifically for corporate standards, and you can create an object library to satisfy project-specific requirements.
Object libraries feature Smart Classes: objects that you define as being the standard. You use Smart Classes to convert objects to standard objects.

Object Groups

An object group is a container for a group of objects. You define an object group when you want to package related objects so you can copy or subclass them in another module.
Object groups provide a way to bundle objects into higher-level building blocks that can be used in other parts of an application and in subsequent development projects. For example, you might build an appointment scheduler in a form and then decide to make it available from other forms in your applications. The scheduler would probably be built from several types of objects, including a window and canvas, blocks, and items that display dates and appointments, and triggers that contain the logic for scheduling and other functionality. If you packaged these objects into an object group, you could then copy them to any number of other forms in one simple operation.
You can create object groups in form and menu modules. Once you create an object group, you can add objects to it and remove objects from it as desired.

Normalization & De-Normalization

Normalization: It's the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminate redundant data (for example, storing the same data in more than one table) and ensure data dependencies make sense (only storing related data in a table). Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.
1. Eliminate Repeating Groups - Make a separate table for each set of related attributes, and give each table a primary key.
2. Eliminate Redundant Data - If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3. Eliminate Columns Not Dependent On Key - If attributes do not contribute to a description of the key, remove them to a separate table.
4. Isolate Independent Multiple Relationships - No table may contain two or more 1:n or n:m relationships that are not directly related.
5. Isolate Semantically Related Multiple Relationships - There may be practical constraints on information that justify separating logically related many-to-many relationships.
1st Normal Form (1NF)
Def: A table (relation) is in 1NF if
1. There are no duplicated rows in the table.
2. Each cell is single-valued (i.e., there are no repeating groups or arrays).
3. Entries in a column (attribute, field) are of the same kind.
Note: The order of the rows is immaterial; the order of the columns is immaterial.
Note: The requirement that there be no duplicated rows in the table means that the table has a key (although the key might be made up of more than one column—even, possibly, of all the columns).
2nd Normal Form (2NF)
Def: A table is in 2NF if it is in 1NF and if all non-key attributes are dependent on all of the key.
Note: Since a partial dependency occurs when a non-key attribute is dependent on only a part of the (composite) key, the definition of 2NF is sometimes phrased as, "A table is in 2NF if it is in 1NF and if it has no partial dependencies."
3rd Normal Form (3NF)
Def: A table is in 3NF if it is in 2NF and if it has no transitive dependencies.
Boyce-Codd Normal Form (BCNF)
Def: A table is in BCNF if it is in 3NF and if every determinant is a candidate key.
4th Normal Form (4NF)
Def: A table is in 4NF if it is in BCNF and if it has no multi-valued dependencies.
5th Normal Form (5NF)
Def: A table is in 5NF, also called "Projection-Join Normal Form" (PJNF), if it is in 4NF and if every join dependency in the table is a consequence of the candidate keys of the table.
Domain-Key Normal Form (DKNF)
Def: A table is in DKNF if every constraint on the table is a logical consequence of the definition of keys and domains.

De-Normalization:
De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access. You may apply de-normalization in the process of deriving a physical data model from a logical one.

Example for Normalization

The Daisy Hill Puppy farm currently has the following data items for puppies:
Puppy Number
+ Puppy Name
+ Kennel Code
+ Kennel Name
+ Kennel Location
+ { Trick ID
+ Trick Name
+ Trick Where Learned
+ Skill Level
}1...n
When the file was first set up, it was thought that no puppy could possibly learn more than 10 tricks before graduation, but one puppy, Snoopy, learned more than 40 tricks before his adoption by Charles Brown. This forced a restructuring of the entire file, but it also left most of the file filled with meaningless spaces, since the average puppy learned only three of the forty possible tricks. In addition, more and more people are asking for puppies that know specific tricks. Right now the entire file of 25,693 puppies needs to be searched to find a match.

1NF -- Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. How?

Split the table into two tables:
Puppy Table = Puppy Number
+ Puppy Name
+ Kennel Code
+ Kennel Name
+ Kennel Location

Trick Table = Puppy Number + Trick ID
+ Trick Name
+ Trick Where Learned
+ Skill Level
It would help to have all the trick data in a separate table. This would decrease the table size, since at present every puppy that knows trick 43 also stores its name, "Roll Over". And if the last puppy that knew trick 43 were run over by a car, its record would be removed and we would lose all knowledge of trick 43.

2NF -- Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table. How?

Split the Trick Table into two: Tricks and Puppy Tricks.
Tricks = Trick ID
+ Trick Name

Puppy Tricks = Puppy Number + Trick ID
+ Trick Where Learned
+ Skill Level

Puppy Table = Puppy Number
+ Puppy Name
+ Kennel Code
+ Kennel Name
+ Kennel Location
Now suppose no puppies from the K9-Kennel were currently stored in the database. There then remains no record of K9-Kennel's existence!

3NF -- Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. How?

Split Puppy Table into two tables: Puppies and Kennels
Tricks = Trick ID
+ Trick Name

Puppy Tricks = Puppy Number + Trick ID
+ Trick Where Learned
+ Skill Level
Puppies = Puppy Number
+ Puppy Name
+ Kennel Code

Kennels = Kennel Code
+ Kennel Name
+ Kennel Location
Suppose we want to add a new attribute to the Puppy Tricks table, "Costume." This way we can find puppies who can "sit-up-and-beg" while wearing a Ted Turner mask. The two need not relate; for example, a puppy may be able to walk upright while wearing a wet suit, but may not be able to do both at the same time. The new Puppy Tricks table will look like this:
Puppy Tricks = Puppy Number + Trick ID + Costume
+ Trick Where Learned
+ Skill Level

4NF -- Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related. Here Puppy Number specifies a well-defined set of Trick IDs and a well-defined set of Costumes; thus there is a multi-valued dependency. How do we prevent anomalies in this case?

Split Puppy Tricks in two, Puppy Tricks and Puppy Costumes.
Puppy Tricks = Puppy Number + Trick ID
+ Trick Where Learned
+ Skill Level

Puppy Costumes = Puppy Number + Costume
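The final design above can be sketched as DDL (datatypes and constraint details are illustrative):

```sql
CREATE TABLE kennels (
   kennel_code     NUMBER PRIMARY KEY,
   kennel_name     VARCHAR2 (50),
   kennel_location VARCHAR2 (50));

CREATE TABLE puppies (
   puppy_number NUMBER PRIMARY KEY,
   puppy_name   VARCHAR2 (30),
   kennel_code  NUMBER REFERENCES kennels);

CREATE TABLE tricks (
   trick_id   NUMBER PRIMARY KEY,
   trick_name VARCHAR2 (30));

CREATE TABLE puppy_tricks (
   puppy_number        NUMBER REFERENCES puppies,
   trick_id            NUMBER REFERENCES tricks,
   trick_where_learned VARCHAR2 (50),
   skill_level         NUMBER,
   PRIMARY KEY (puppy_number, trick_id));

CREATE TABLE puppy_costumes (
   puppy_number NUMBER REFERENCES puppies,
   costume      VARCHAR2 (30),
   PRIMARY KEY (puppy_number, costume));
```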

Normalization and Normal Forms Example

Example table "Company", not normalized:

Company
  Name            John Walker                  Jim Bim
  Pers. ID        1                            1
  Dept. No.       1                            2
  Dept. Name      Europe                       USA
  Project No.     5, 7, 8                      5, 8
  Project Name    Soap, Pasta, OliveOil        Soap, OliveOil
  Job in Project  Analysis, Leader, Marketing  Leader, Leader
  Salary/h        13, 18, 15                   18, 18

First NF (1NF): A relation cannot have repeating fields or groups (no field may hold more than one value):

Employees
  Name        John Walker  Jim Bim
  Pers. ID    1            1
  Dept. No.   1            2
  Dept. Name  Europe       USA

Projects
  Project No.   5     7      8
  Project Name  Soap  Pasta  OliveOil

Project Assignment
  Project No.     5         5       7       8          8
  Pers. ID        1         1       1       1          1
  Dept. No.       1         2       1       1          2
  Job in Project  Analysis  Leader  Leader  Marketing  Leader
  Salary/h        13        18      18      15         18

Second NF (2NF): 1NF + every non-key field must be functionally dependent on all of the key.
Here: In the relation "Employees", the "Dept. Name" depends only on the "Dept. No." but not on "Pers. ID", thus not on all parts of the key:

Employees
  Name       John Walker  Jim Bim
  Pers. ID   1            1
  Dept. No.  1            2

Departments
  Dept. No.   1       2
  Dept. Name  Europe  USA

The relations "Projects" and "Project Assignment" can remain as they are:

Projects
  Project No.   5     7      8
  Project Name  Soap  Pasta  OliveOil

Project Assignment
  Project No.     5         5       7       8          8
  Pers. ID        1         1       1       1          1
  Dept. No.       1         2       1       1          2
  Job in Project  Analysis  Leader  Leader  Marketing  Leader
  Salary/h        13        18      18      15         18
Third NF (3NF): There must be no transitive dependency, that is, no attribute may depend on one or more other non-key attributes.
Here: The relations "Employees", "Departments" and "Projects" can remain as they are.
But in "Project Assignment" the salary depends only on the job, and thus only indirectly on the key. This is a transitive dependency. We have to add a "Payment" relation.

Employees
  Name       John Walker  Jim Bim
  Pers. ID   1            1
  Dept. No.  1            2

Departments
  Dept. No.   1       2
  Dept. Name  Europe  USA

Projects
  Project No.   5     7      8
  Project Name  Soap  Pasta  OliveOil

Payment
  Job       Analysis  Marketing  Leader
  Salary/h  13        15         18

Project Assignment
  Project No.     5         5       7       8          8
  Pers. ID        1         1       1       1          1
  Dept. No.       1         2       1       1          2
  Job in Project  Analysis  Leader  Leader  Marketing  Leader

Note: We could add an "Assignment No." in the "Project Assignment" relation as a new key, to avoid having only key attributes.
Now the DB is in 3NF: because there are no transitive dependencies, no data-field modification should necessitate a "follow-up" adjustment in any other data-field.

Modal & Modeless Windows (Difference)

A window can be either modeless or modal. A modal window (often a dialog) requires the end user to respond before continuing to work in the current application. A modeless window requires no such response.
You can display multiple modeless windows at the same time, and end users can navigate freely among them. Modeless windows remain displayed until they are dismissed by the end user or hidden programmatically. You can set the Hide on Exit property for a modeless window to specify whether it should remain displayed when the end user navigates to another window. Modal windows are usually used as dialogs, and have restricted functionality compared to modeless windows. On some platforms, modal windows are "always-on-top" windows that cannot be layered behind modeless windows. The Hide on Exit property does not apply to modal windows. Modal dialog windows cannot have scroll bars

Matrix Reports (Matrix By Groups)

A matrix (cross tab) report contains one row of labels, one column of labels, and information in a grid format that is related to the row and column labels. A distinguishing feature of matrix reports is that the number of columns is not known until the data is fetched from the database.
To create a matrix report, you need at least four groups: one group must be a cross-product group, two of the groups must be within the cross-product group to furnish the "labels," and at least one group must provide the information to fill the cells. The groups can belong to a single query or to multiple queries.

A matrix with group report is a group above report with a separate matrix for each value of the master group.

A nested matrix (crosstab) report is a matrix report in which at least one parent/child relationship appears within the matrix grid.
The new Child Dimension property of the nested group enables you to eliminate empty rows and/or columns in your single-query nested matrix.

Types of Matrix Reports
Simple Matrix Report: Is a matrix with only two dimensions
Nested Matrix Report: Has multiple dimensions going across and/or down the page
Multi-Query Matrix with Break: Is similar to a nested matrix report in that it has more than two dimensions. Does not display records that do not contain data
Matrix Break Reports: Contains a new matrix for each master record

Master-Detail Relation (Triggers/Procedures/Properties)

On-Check-Delete-Master: - Fires when Form Builder attempts to delete a record in a block that is a master block in a master-detail relation.
On-Clear-Details: - Fires when Form Builder needs to clear records in a block that is a detail block in a master-detail relation because those records no longer correspond to the current record in the master block.
On-Populate-Details: - Fires when Form Builder needs to fetch records into a block that is the detail block in a master-detail relation so that detail records are synchronized with the current record in the master block.

(i) Isolated: - The master record can be deleted even when child records exist.
Triggers: - On-Populate-Details (block), On-Clear-Details (form)
Procedures: - Check_Package_Failure, Clear_All_Master_Details, Query_Master_Details
(ii) Non-Isolated: - The master record cannot be deleted while child records exist.
Triggers: - On-Populate-Details (block), On-Check-Delete-Master (block), On-Clear-Details (form)
Procedures: - Check_Package_Failure, Clear_All_Master_Details, Query_Master_Details
(iii) Cascading: - Child records are automatically deleted when the master record is deleted.
Triggers: - On-Populate-Details (block), Pre-Delete (block), On-Clear-Details (form)
Procedures: - Check_Package_Failure, Clear_All_Master_Details, Query_Master_Details

List Item Types

Poplist: The poplist style list item appears initially as a single field (similar to a text item field). When the end user selects the list icon, a list of available choices appears.
Tlist: The Tlist style list item appears as a rectangular box, which displays a fixed number of values. When the Tlist contains values that cannot be displayed (due to the displayable area of the item), a vertical scroll bar appears, allowing the end user to view and select undisplayed values.
Combo Box: The combo box style list item combines the features found in poplists and text items. It displays fixed values and can accept a user-entered value.

Lexical & Bind Parameters in Reports

Lexical Parameters: Lexical references are placeholders for text that you embed in a SELECT statement. You can use lexical references to replace the clauses appearing after SELECT, FROM, WHERE, GROUP BY, ORDER BY, HAVING, CONNECT BY, and START WITH.
You cannot make lexical references in a PL/SQL statement. You can, however, use a bind reference in PL/SQL to set the value of a parameter that is then referenced lexically in SQL. Look at the example below.
You create a lexical reference by entering an ampersand (&) followed immediately by the column or parameter name. A default definition is not provided for lexical references. Therefore, you must do the following:
• Before you create your query, define a column or parameter in the data model for each lexical reference in the query. For columns, you must enter Value if Null, and, for parameters, you must enter Initial Value. Report Builder uses these values to validate a query with a lexical reference.
• Create your query containing lexical references.

Bind Parameters: Bind references (or bind variables) are used to replace a single value in SQL or PL/SQL, such as a character string, number, or date. Specifically, bind references may be used to replace expressions in SELECT, WHERE, GROUP BY, ORDER BY, HAVING, CONNECT BY, and START WITH clauses of queries. Bind references may not be referenced in FROM clauses or in place of reserved words or clauses.
You create a bind reference by entering a colon (:) followed immediately by the column or parameter name. If you do not create a column or parameter before making a bind reference to it in a SELECT statement, Report Builder will create a parameter for you by default.
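For example (parameter names here are illustrative):

```sql
-- Bind reference: replaces a single value at run time
SELECT ename, sal
  FROM emp
 WHERE deptno = :p_deptno

-- Lexical reference: replaces a whole clause; the parameter
-- p_where_clause might be set to 'WHERE deptno = 10' before the
-- query runs, e.g. in an After Parameter Form trigger
SELECT ename, sal
  FROM emp
&p_where_clause
```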

Key-Next-Item & Post-Text-Item

Post-Text-Item: Fires during the Leave the Item process for a text item. Specifically, this trigger fires when the input focus moves from a text item to any other item.

Key-Next-Item: Fires as a result of a key action. It will not fire unless there is a key event, e.g. the operator pressing the [Next Item] key.

Inline Views & Top N Analysis

The inline view: It is a construct in Oracle SQL where you can place a query in the SQL FROM clause, just as if the query were a table name.
A common use for inline views in Oracle SQL is to simplify complex queries by removing join operations and condensing several separate queries into a single query.

Top N Analysis: The task of retrieving the top or bottom N rows from a database table. You can do so either by using the ROWNUM pseudocolumn available in several versions of Oracle or by utilizing new analytic functions available in Oracle 8i: RANK () and DENSE_RANK ().

Using the ROWNUM Pseudocolumn
One way to solve this problem is by using the Oracle pseudocolumn ROWNUM. For each row returned by a query, the ROWNUM pseudocolumn returns a number indicating the order in which Oracle selects the row from a table or set of joined rows.
E.g. To select top 5 rows
SELECT Empno, Ename, Job, Mgr, Hiredate, Sal
FROM
(SELECT Empno, Ename, Job, Mgr, Hiredate, Sal
FROM Emp
ORDER BY NVL (Sal, 0) DESC)
WHERE ROWNUM < 6;

Utilizing Oracle 8i's Ranking Functions
Another way to perform a top-N query uses the Oracle 8i feature called "analytic functions."
SELECT Empno, Ename, Job, Mgr, Sal,
RANK () OVER
(ORDER BY SAL Desc NULLS LAST) AS Rank,
DENSE_RANK () OVER
(ORDER BY SAL Desc NULLS LAST) AS Drank
FROM Emp
ORDER BY SAL Desc NULLS LAST;

The difference between RANK () and DENSE_RANK () is that RANK () leaves gaps in the ranking sequence when there are ties. In our case, Scott and Ford tie for second place with a $3,000 salary; Jones' $2,975 salary brings him in third place using DENSE_RANK () but only fourth place using RANK (). The NULLS FIRST | NULLS LAST clause determines the position of rows with NULL values in the ordered query.

How to attach same LOV to multiple items

We can use the same LOV for two items by passing the return value into a global variable and then using that global variable in the code.
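A sketch of this approach, assuming the LOV's column mapping returns into GLOBAL.lov_value (LOV and variable names are illustrative):

```sql
-- e.g. in the KEY-LISTVAL trigger of each item that shares the LOV
DECLARE
   v_ok BOOLEAN;
BEGIN
   v_ok := SHOW_LOV ('shared_lov');

   IF v_ok THEN
      -- Copy the returned value into whichever item invoked the LOV
      COPY (:GLOBAL.lov_value, NAME_IN ('SYSTEM.CURSOR_ITEM'));
   END IF;
END;
```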

Format Triggers (What are they)

A format trigger is a PL/SQL function executed before an object is formatted. A trigger can be used to dynamically change the formatting attributes of the object.
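For example, a format trigger on a salary field might look like the following sketch (the color and threshold are illustrative; returning FALSE would suppress the object instead of displaying it):

```sql
-- Format trigger on a field displaying :sal
FUNCTION F_salFormatTrigger RETURN BOOLEAN
IS
BEGIN
   IF :sal > 2500 THEN
      SRW.SET_TEXT_COLOR ('red');  -- highlight high salaries
   END IF;

   RETURN TRUE;  -- display the object
END;
```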

Flex & Confine Mode in Reports

Confine mode:
Switched on by default; change via View → View Options → Layout...
It prevents operations that would cause a report not to work, e.g. moving a field outside its parent repeating frame.
Flex mode:
Moves the object, its enclosing objects, and objects in their push path simultaneously, to maintain the same overall relationship in the report. E.g. if you try to move a field outside its repeating frame, the repeating frame will grow to accommodate the field, and so will any objects around the repeating frame.
Only one object can be moved/resized at a time in flex mode - if you select more than one, only the object whose control point is clicked on is moved, and the other objects are de-selected.
Objects can be moved/resized horizontally or vertically, not diagonally.

Dynamically create LOVList Item

You can also add list elements individually at runtime by using the ADD_LIST_ELEMENT built-in subprogram, or you can populate the list from a record group at runtime using the POPULATE_LIST built-in. If you populate the list from a record group, be sure that the record group you are using to populate the list contains the relevant values before you call POPULATE_LIST. If the record group is a static record group, it will already contain the appropriate values. Otherwise, you should populate the group at runtime using one of the record group subprograms.
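A sketch of populating a list item from a record group created at runtime (group, block, and item names are illustrative; note the query must return two character columns, label and value):

```sql
-- e.g. in the WHEN-NEW-FORM-INSTANCE trigger
DECLARE
   v_group  RECORDGROUP;
   v_status NUMBER;
BEGIN
   v_group := CREATE_GROUP_FROM_QUERY ('dept_group',
                 'SELECT dname, TO_CHAR (deptno) FROM dept ORDER BY dname');

   v_status := POPULATE_GROUP (v_group);

   IF v_status = 0 THEN
      POPULATE_LIST ('control.dept_list', 'dept_group');
   END IF;
END;
```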

Different Database Triggers

Database triggers are PL/SQL, Java, or C procedures that run implicitly whenever a table or view is modified or when some user actions or database system actions occur. Database triggers can be used in a variety of ways for managing your database. For example, they can automate data generation, audit data modifications, enforce complex integrity constraints, and customize complex security authorizations.
Row Triggers
A row trigger is fired once for each row affected by the triggering statement.
For example, if an UPDATE statement updates multiple rows of a table, a row trigger is fired once for each row affected by the UPDATE statement.
If a triggering statement affects no rows, a row trigger is not executed at all.
Row triggers are useful if the code in the trigger action depends on data provided by the triggering statement or rows that are affected.
Statement Triggers
A statement trigger is fired once on behalf of the triggering statement, regardless of the number of rows in the table that the triggering statement affects (even if no rows are affected).
For example, if a DELETE statement deletes several rows from a table, a statement-level DELETE trigger is fired only once, regardless of how many rows are deleted from the table.
Statement triggers are useful if the code in the trigger action does not depend on the data provided by the triggering statement or the rows affected.
For example, if a trigger makes a complex security check on the current time or user, or if a trigger generates a single audit record based on the type of triggering statement, a statement trigger is used.
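The distinction can be seen in a pair of triggers on a hypothetical employees table (the audit tables and columns below are assumptions for illustration, not from the original text):

```sql
-- Row trigger: fires once for each row changed by the UPDATE.
CREATE OR REPLACE TRIGGER emp_salary_aur
   AFTER UPDATE OF salary ON employees
   FOR EACH ROW
BEGIN
   INSERT INTO salary_audit (employee_id, old_salary, new_salary)
   VALUES (:OLD.employee_id, :OLD.salary, :NEW.salary);
END;
/

-- Statement trigger: fires once per UPDATE statement,
-- even if that statement affects no rows at all.
CREATE OR REPLACE TRIGGER emp_salary_aus
   AFTER UPDATE OF salary ON employees
BEGIN
   INSERT INTO change_log (table_name, changed_by, changed_on)
   VALUES ('EMPLOYEES', USER, SYSDATE);
END;
/
```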

BEFORE vs. AFTER Triggers
When defining a trigger, you can specify the trigger timing.
That is, you can specify whether the trigger action is to be executed before or after the triggering statement.
BEFORE and AFTER apply to both statement and row triggers.
BEFORE Triggers
BEFORE triggers execute the trigger action before the triggering statement. This type of trigger is commonly used in the following situations:
BEFORE triggers are used when the trigger action should determine whether the triggering statement should be allowed to complete. By using a BEFORE trigger for this purpose, you can eliminate unnecessary processing of the triggering statement and its eventual rollback in cases where an exception is raised in the trigger action.
BEFORE triggers are used to derive specific column values before completing a triggering INSERT or UPDATE statement.

AFTER Triggers
AFTER triggers execute the trigger action after the triggering statement is executed. AFTER triggers are used in the following situations:
AFTER triggers are used when you want the triggering statement to complete before executing the trigger action.
If a BEFORE trigger is already present, an AFTER trigger can perform different actions on the same triggering statement.

Combinations
Using the options listed in the previous two sections, you can create four types of triggers:
BEFORE statement trigger: The trigger action is executed before the triggering statement is executed.
BEFORE row trigger: The trigger action is executed before each row affected by the triggering statement is modified and before the appropriate integrity constraints are checked, provided that the trigger restriction was not violated.
AFTER statement trigger: The trigger action is executed after the triggering statement is executed and any deferred integrity constraints are applied.
AFTER row trigger: The trigger action is executed for the current row after that row is modified and possibly after the appropriate integrity constraints are applied, provided that the trigger restriction was not violated. Unlike BEFORE row triggers, AFTER row triggers lock rows.
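As an illustration of a common combination, here is a hedged sketch of a BEFORE row trigger that derives column values before an INSERT completes (the orders table and its columns are hypothetical):

```sql
CREATE OR REPLACE TRIGGER orders_bir
   BEFORE INSERT ON orders
   FOR EACH ROW
BEGIN
   -- Derive column values before the row is actually inserted.
   IF :NEW.order_date IS NULL
   THEN
      :NEW.order_date := SYSDATE;
   END IF;

   :NEW.created_by := USER;
END;
/
```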

New Database Triggers
The database event and DDL triggers: Startup, Shutdown, Logon, Logoff, Alter, Create, Drop

Diff bw Package Spec & Body

Packages provide a method of encapsulating and storing related procedures, functions, and other package constructs as a unit in the database. They offer increased functionality (for example, global package variables can be declared and used by any procedure in the package). They also improve performance (for example, all objects of the package are parsed, compiled, and loaded into memory once).
The package specification contains declarations of public constructs, whereas the package body contains definitions of all those public constructs plus declarations and definitions of private constructs.
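A minimal sketch of the distinction (the package name and contents are hypothetical): the specification publishes a global variable and one function, while the body defines that function plus a private helper that is invisible to callers.

```sql
CREATE OR REPLACE PACKAGE emp_pkg
IS
   g_limit   PLS_INTEGER := 100;            -- public (global) variable

   FUNCTION emp_count RETURN PLS_INTEGER;   -- public: declared in the spec
END emp_pkg;
/

CREATE OR REPLACE PACKAGE BODY emp_pkg
IS
   -- Private: declared and defined only in the body, invisible to callers.
   FUNCTION headcount_query RETURN VARCHAR2
   IS
   BEGIN
      RETURN 'SELECT COUNT(*) FROM employees';
   END headcount_query;

   -- Public: the definition matching the declaration in the spec.
   FUNCTION emp_count RETURN PLS_INTEGER
   IS
      l_count PLS_INTEGER;
   BEGIN
      EXECUTE IMMEDIATE headcount_query () INTO l_count;
      RETURN l_count;
   END emp_count;
END emp_pkg;
/
```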

Column Mode Property in Reports

The Column Mode property controls how Report Builder fetches and formats data for instances of repeating frames. With Column Mode set to Yes, the next instance of a repeating frame can begin formatting before the previous instance is completed. With Column Mode set to No, the next instance cannot begin formatting before the previous instance is completed. Column Mode is used mainly for master repeating frames or repeating frames that contain fields that may expand vertically or horizontally (i.e., elasticity is Variable or Expand).

Can pipelined functions help me load data into tables

Question:

I am trying to get a handle on understanding and using pipelining with table functions. We have very high data flow into the tables and need to improve processing speed. I've read the documentation but it is not really making sense to me. Will pipelining return the entire row of a table queried, or just a collection of objects and values?

Answer:

James, pipelined functions can be very handy for improving the performance of queries that need to run in parallel, but I am not sure how useful they are in speeding up the performance of data loads into tables.
Here is the skinny on pipelined functions:
• Prior to Oracle9i Release 2 and support for pipelined functions (example below), whenever you called a PL/SQL function inside a query, the execution of that query would serialize. In other words, only one instance of that query could run on one processor. Therefore if you were utilizing Parallel Query, which many Oracle customers do when working with data warehouses, parallelization would be disabled for that query. Now, you can define a pipelined function so that it, too, executes in parallel, thereby potentially speeding up the execution of your query that contains the function.
• Pipelined functions must take as their incoming parameter a cursor result set, in the form of a REF CURSOR. They must return a nested table of object types. Each row in a nested table could conform to a row in a relational table, but it doesn't have to; the whole idea here is that you can now apply arbitrarily complex PL/SQL functions to result sets.
This question intersects nicely with the answer to the "Performance Makeover" puzzle. In that solution, I talk about transformative table functions and offer the pipelined stockpivot function as another way to speed up the pivot operation.
You know you've got a pipelined function when the header of the function includes the PIPELINED clause and the function uses PIPE ROW to "pipe" data out from the function, asynchronous to termination of the function.
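Putting those pieces together, here is a hedged sketch of a pipelined function: it accepts a REF CURSOR, returns a nested table of object types, is declared PIPELINED, and uses PIPE ROW to send rows out as they are produced. The types and the transformation are hypothetical, and you would add a PARALLEL_ENABLE clause if you want the function to participate in parallel query.

```sql
CREATE TYPE name_ot AS OBJECT (last_name VARCHAR2 (100));
/
CREATE TYPE name_ntt AS TABLE OF name_ot;
/

CREATE OR REPLACE FUNCTION upper_names (emps_in IN SYS_REFCURSOR)
   RETURN name_ntt
   PIPELINED
IS
   l_name   VARCHAR2 (100);
BEGIN
   LOOP
      FETCH emps_in INTO l_name;
      EXIT WHEN emps_in%NOTFOUND;

      -- Pipe each transformed row out immediately, before the
      -- function terminates.
      PIPE ROW (name_ot (UPPER (l_name)));
   END LOOP;

   CLOSE emps_in;
   RETURN;
END upper_names;
/

-- Usage: stream the cursor through the function inside a query.
SELECT *
  FROM TABLE (upper_names (CURSOR (SELECT last_name FROM employees)));
```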

Can one use synonyms with PLSQL packages

Question:

Steven, I would like to know if you can use synonyms with PL/SQL packages—such as synonyms named the same as a table in another schema so you don't have to copy data for testing purposes.

Answer:

Richard, you certainly can reference objects inside PL/SQL programs, including packages, that are actually synonyms pointing to underlying database objects.
And if you define your program to use "invoker rights" with the AUTHID CURRENT_USER clause, then when the program is run, that synonym will be resolved according to the privileges of the caller of the program.
In this manner, a single program can be run against tables in different schemas, without making any changes to your program. I used AUTHID CURRENT_USER in the tabcount function presented elsewhere in this Q&A:
CREATE OR REPLACE FUNCTION tabcount (
tab IN VARCHAR2, whr IN VARCHAR2 := NULL)
RETURN PLS_INTEGER
AUTHID CURRENT_USER
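The header above is only an excerpt; a minimal body consistent with it (my own assumption, not necessarily the implementation presented elsewhere in the Q&A) would use dynamic SQL, so that the table name or synonym passed in is resolved with the caller's privileges:

```sql
CREATE OR REPLACE FUNCTION tabcount (
   tab IN VARCHAR2, whr IN VARCHAR2 := NULL)
   RETURN PLS_INTEGER
   AUTHID CURRENT_USER
IS
   l_count   PLS_INTEGER;
BEGIN
   -- Resolve the table (or synonym) name at runtime, as the caller.
   EXECUTE IMMEDIATE
         'SELECT COUNT(*) FROM ' || tab
      || CASE WHEN whr IS NULL THEN NULL ELSE ' WHERE ' || whr END
      INTO l_count;

   RETURN l_count;
END tabcount;
/
```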

Can I start a timer inside PLSQL

Question:

I have a requirement to have a timer start after an insert into a table. After a short time, I would initiate another activity. How can I program that requirement in PL/SQL?

Answer:

Pio, it's not hard to start a timer in your session. Download the Timer or Stopwatch Object Type Definition here, which allows you to create an object type-based "stopwatch" so you can start and stop a timer from within PL/SQL. Here is an example of using such a timer:
DECLARE
c_iterations CONSTANT PLS_INTEGER := 100;

stopwatch_tmr tmr_t :=
NEW tmr_t ('Run My Test', c_iterations);
BEGIN
stopwatch_tmr.go;
FOR indx IN 1 .. c_iterations
LOOP
run_my_test;
END LOOP;
stopwatch_tmr.stop;
END;
/

The tricky part to your question has to do with being able to detect that a certain amount of time has passed, which then triggers a new activity. PL/SQL executes in a single thread. If you want to "kick off" new activities (and then continue on with your current activity), you essentially need to send a message to another session, which then gets moving on your request.
You can use either Oracle's Advanced Queuing facility or the simpler DBMS_PIPE package to implement messaging between sessions.
You might also consider DBMS_JOB or Oracle 10g's new DBMS_SCHEDULER. If you simply want to start a new activity in, say, 30 seconds, you could submit a job with a start time of 30 seconds in the future, and no repeat execution of the job. The job manager would then wake up, check for the job and kick it off, independently of the session from which the job was scheduled.
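With DBMS_SCHEDULER, that one-off job might be submitted as follows (a hedged sketch; the job name and the procedure my_followup_proc are hypothetical):

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name   => 'after_insert_followup',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN my_followup_proc; END;',
      start_date => SYSTIMESTAMP + INTERVAL '30' SECOND,
      enabled    => TRUE);   -- no repeat_interval, so the job runs only once
END;
/
```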

Can a dynamic cursor be BULK COLLECTed into variables

Question:

Can a dynamic cursor be BULK COLLECTed into variables?

Answer:

If you are living in the world of Oracle9i and later, I have some very good news for you: not only can a dynamic query be BULK COLLECTed into collections, but you can even transfer all your data into a single collection of records! This approach leads to blissful code along the lines of:
DECLARE
TYPE employee_aat IS TABLE OF employee%ROWTYPE
INDEX BY PLS_INTEGER;
l_employee employee_aat;
l_row PLS_INTEGER;
BEGIN
SELECT *
BULK COLLECT INTO l_employee
FROM employee
ORDER BY last_name
;

-- Now iterate through each row in the collection.
l_row := l_employee.FIRST;
WHILE (l_row IS NOT NULL)
LOOP
-- Now work with the row, for example:
DBMS_OUTPUT.put_line (l_employee (l_row).last_name);
l_row := l_employee.NEXT (l_row);
END LOOP;
END;

Calling an operating system command from within PLSQL

Question:
For years, the only way to execute an operating system command was to use DBMS_PIPE to "pipe" out the command to an operating system shell script. Is there now an easier way to do this in PL/SQL?
Answer:
Oh, yes, executing OS commands has gotten much easier in PL/SQL, with the introduction (back in Oracle8 Database) of external procedures! Having said that, it's still not exactly a piece of cake, and this answer cannot address all the issues that are likely to come up. Instead, I offer below an extract (written by my co-author, Bill Pribyl) from Oracle PL/SQL Programming, Fourth Edition (O'Reilly), Chapter 27 ("External Procedures"), to give you an idea of what is involved.
(Start Excerpt)
This example consists of a very simple C function, extprocsh(), which accepts a string and passes it to the system function for execution:
int extprocsh(char *cmd)
{
return system(cmd);
}
The function returns the result code as provided by system, a function normally found in the C runtime library (libc) on Unix, or in msvcrt.dll on Microsoft platforms.
After saving the source code in a file named extprocsh.c, I can use the GNU C compiler to generate a shared library. On my 64-bit Solaris machine running GCC 3.4.2 and Oracle Database 10g Release 2, I used the following compiler command:
gcc -m64 extprocsh.c -fPIC -G -o extprocsh.so
Similarly, on Microsoft Windows XP Pro running GCC 3.2.3 from Minimal GNU for Windows (MinGW), also with Oracle Database 10g Release 2, this works:
c:\MinGW\bin\gcc extprocsh.c -shared -o extprocsh.dll
These commands generate a shared library file, extprocsh.so or extprocsh.dll. Now I need to put the library file somewhere that Oracle can find it. Depending on your Oracle version, that may be easier said than done!

After copying the file and/or making adjustments to the listener, I also need to define a "library" inside Oracle to point to the DLL:

CREATE OR REPLACE LIBRARY extprocshell_lib
AS '/u01/app/oracle/local/lib/extprocsh.so'; -- Unix

CREATE OR REPLACE LIBRARY extprocshell_lib
AS 'c:\oracle\local\lib\extprocsh.dll'; -- Microsoft
Don't be confused by the term "library" here; it's really just a file name alias that can be used in Oracle's namespace. Also note that performing this step requires Oracle's CREATE LIBRARY privilege, which is one of the security hoops I mentioned earlier.
Now I can create a PL/SQL call specification that uses the newly created library:
CREATE OR REPLACE FUNCTION shell(cmd IN VARCHAR2)
RETURN PLS_INTEGER
AS
LANGUAGE C
LIBRARY extprocshell_lib
NAME "extprocsh"
PARAMETERS (cmd STRING, RETURN INT);
That's all there is to it! Assuming that the DBA has set up the system environment to support external procedures, shell() is now usable anywhere you can invoke a PL/SQL function—SQL*Plus, Perl, Pro*C, etc. From an application programming perspective, calling an external procedure is indistinguishable from calling a conventional procedure.
For example:
DECLARE
result PLS_INTEGER;
BEGIN
result := shell('cmd');
END;
Or even:

SQL> SELECT shell('cmd') FROM DUAL;
If successful, this will return zero:
SHELL('cmd')
-------------
0
Keep in mind that if the operating system command would normally display output to stdout or stderr, that output will go to the bit bucket unless you modify your program to return it to PL/SQL. You can, subject to OS-level permissions, redirect that output to a file; here is a trivial example of saving a file containing a directory listing:

result := shell('ls / > /tmp/extproc.out'); -- Unix
result := shell('cmd /c "dir c:\ > c:\temp\extproc.out"'); -- Microsoft
These operating system commands will execute with the same privileges as the Oracle Net listener that spawns the extproc process.
(End Excerpt)
That should give you an idea of what you need to do. For more information about external procedures, check out the Application Developer's Guide - Fundamentals book.
In addition to relying on external procedures, if you are running Oracle Database 10g, then you can also take advantage of DBMS_SCHEDULER. According to chapter 93 of the Oracle Database PL/SQL Packages and Types Reference, when you submit a job, you can specify "that the job is a job external to the database. External jobs are anything that can be executed from the operating system's command line. Anydata arguments are not supported with a job or program type of EXECUTABLE. The job owner must have the CREATE EXTERNAL JOB system privilege before the job can be enabled or run."

Call From-New Form-Open Form

Call Form: Runs an indicated form while keeping the parent form active. Form Builder runs the called form with the same Runform preferences as the parent form. When the called form is exited, Form Builder processing resumes in the calling form at the point from which you initiated the call to CALL_FORM.
PROCEDURE CALL_FORM (formmodule_name VARCHAR2, display NUMBER, switch_menu NUMBER, query_mode NUMBER, data_mode NUMBER, paramlist_name/id VARCHAR2);

New Form: Exits the current form and enters the indicated form. The calling form is terminated as the parent form. If the calling form had been called by a higher form, Form Builder keeps the higher call active and treats it as a call to the new form. Form Builder releases memory (such as database cursors) that the terminated form was using.
Form Builder runs the new form with the same Runform options as the parent form. If the parent form was a called form, Form Builder runs the new form with the same options as the parent form.
PROCEDURE NEW_FORM (formmodule_name VARCHAR2, rollback_mode NUMBER, query_mode NUMBER, data_mode NUMBER, paramlist_name/id VARCHAR2);

Open Form: Opens the indicated form. Use OPEN_FORM to create multiple-form applications, that is, applications that open more than one form at the same time.
PROCEDURE OPEN_FORM (form_name VARCHAR2, activate_mode NUMBER, session_mode NUMBER, data_mode NUMBER, paramlist_id/name PARAMLIST);
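Typical invocations, using the Form Builder constants for each mode (the form names are hypothetical):

```sql
-- Run a called form, hiding the caller and replacing its menu:
CALL_FORM ('order_detail', HIDE, DO_REPLACE, NO_QUERY_ONLY);

-- Exit the current form and start another, rolling back to the savepoint:
NEW_FORM ('main_menu', TO_SAVEPOINT);

-- Open a second form alongside the current one, giving it focus,
-- in the same database session:
OPEN_FORM ('customer_lookup', ACTIVATE, NO_SESSION);
```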

Calculate elapsed time Use an interval

Question:
I want to write a function that tells me the gap or period of elapsed time between the end of one timesheet record and the beginning of another.
For example:
Start End
05-DEC-05 08:30 05-DEC-05 16:29
06-DEC-05 08:30 06-DEC-05 16:29
Is it better to return a date or a period of elapsed time?
Answer:
The best way to calculate the amount of time between two dates is to take advantage of the INTERVAL and TIMESTAMP datatypes, introduced in Oracle9i Database. The following function takes advantage of these datatypes to provide a function that accepts two dates and returns the interval of time between them:
CREATE OR REPLACE FUNCTION date_diff (
start_date_in IN DATE
, end_date_in IN DATE
)
RETURN INTERVAL DAY TO SECOND
IS
BEGIN
RETURN CAST ( end_date_in AS TIMESTAMP WITH TIME ZONE )
- CAST ( start_date_in AS TIMESTAMP WITH TIME ZONE );
END date_diff;
/
In the rest of this answer, I provide some other examples of working with these types. For more information on intervals and timestamps, check out Chapter 10 of Oracle PL/SQL Programming (O'Reilly) or Oracle's PL/SQL User Guide and Reference, under "PL/SQL Date, Time, and Interval Types."
In the first block below, I have declared two timestamps: boss_free contains the timestamp indicating when my manager is available to talk; steven_leaves contains the date/time when I must depart for my next trip.
To determine the amount of time we have to talk, I declare an INTERVAL datatype, window. I assign the difference between my two timestamps to the interval variable. I then display the interval value (two hours and forty-five minutes) with a call to DBMS_OUTPUT.PUT_LINE.
SQL> DECLARE
2 boss_free TIMESTAMP WITH TIME ZONE;
3 steven_leaves TIMESTAMP WITH TIME ZONE;
4 --
5 window INTERVAL DAY TO SECOND;
6 BEGIN
7 boss_free := TO_TIMESTAMP_TZ (
8 '29-JAN-2002 12:00:00.0 PST',
9 'DD-MON-YYYY HH24:MI:SSXFF TZD' );
10
11 steven_leaves := TO_TIMESTAMP_TZ (
12 '29-JAN-2002 16:45:00.0 CST',
13 'DD-MON-YYYY HH24:MI:SSXFF TZD' );
14
15 window := steven_leaves - boss_free;
16
17 DBMS_OUTPUT.PUT_LINE ( window );
18 END;
19 /
+00 02:45:00.000000
If you are starting with data declared as DATEs, you cannot directly assign a difference of two dates to an interval. Instead, you will need to either:
• Convert the dates to timestamps with the CAST operator
• Convert the difference of the two dates to an interval with the NUMTODSINTERVAL or NUMTOYMINTERVAL built-in functions.
You can also compare the results of a difference between two dates to an interval.
All of these variations are shown in the script below:
SQL> DECLARE
2 boss_free TIMESTAMP WITH TIME ZONE;
3 steven_leaves TIMESTAMP WITH TIME ZONE;
4 --
5 d_boss_free DATE := SYSDATE;
6 d_steven_leaves DATE := SYSDATE + 1.25;
7 --
8 window INTERVAL DAY TO SECOND;
9 BEGIN
10 boss_free := CAST ( d_boss_free AS TIMESTAMP WITH TIME ZONE );
11 steven_leaves := CAST ( d_steven_leaves AS TIMESTAMP WITH TIME ZONE );
12 --
13 window := NUMTODSINTERVAL ( d_boss_free - d_steven_leaves, 'DAY' );
14 DBMS_OUTPUT.put_line ( 'DATE diff = ' || TO_CHAR ( window ));
15 --
16 window := boss_free - steven_leaves;
17 DBMS_OUTPUT.put_line ( 'TS diff = ' || TO_CHAR ( window ));
18
19 --
20 IF ( d_boss_free - d_steven_leaves ) > window
21 THEN
22 DBMS_OUTPUT.put_line ( 'Compare difference of dates to interval!' );
23 END IF;
24
25 --
26 DBMS_OUTPUT.put_line ( 'date_diff = '
27 || TO_CHAR ( date_diff ( steven_leaves, boss_free ))
28 );
29 END;
30 /
DATE diff = -01 06:00:00.000000
TS diff = -01 06:00:00.000000
Compare difference of dates to interval!
date_diff = -01 06:00:00.000000

BULK COLLECT and FOR UPDATE OF A Fine Match

Question:

Can I bulk collect using the FOR UPDATE OF clause in a SELECT statement? For example:
SELECT *
BULK COLLECT INTO l_employee
FROM employee
FOR UPDATE OF employee
ORDER BY last_name
Or in other words: I want to ensure that nobody updates the selected records while I run the BULK COLLECT process. Without BULK COLLECT, the FOR UPDATE OF clause will do the job.
I read somewhere that FOR UPDATE was not allowed in BULK COLLECT. Is that accurate?

Answer:

Luis, you certainly can include a FOR UPDATE OF clause in a BULK COLLECT query. The following script demonstrates. If you run all the code up through the first anonymous block in one session, and then run the UPDATE statement at the end in a second session, you will find that this UPDATE is blocked until the first session completes its changes and COMMITs.
Note: you will need EXECUTE privileges on DBMS_LOCK, as I use its SLEEP procedure to pause the block long enough to give you the opportunity to run the last UPDATE statement in the second session.
REM Run in first session...

DROP TABLE test_table;
CREATE TABLE test_table (name VARCHAR2(100), description VARCHAR2(1000));
INSERT INTO test_table VALUES ('GEORGIE', 'Handsome fellow');
INSERT INTO test_table VALUES ('PAULA', 'Sharp as a tack');
COMMIT;

DECLARE
TYPE test_table_tc IS TABLE OF test_table.description%TYPE
INDEX BY PLS_INTEGER;

l_list test_table_tc;
BEGIN
SELECT description
BULK COLLECT INTO l_list
FROM test_table
FOR UPDATE OF name;

DBMS_LOCK.SLEEP (30);

FORALL indx IN l_list.FIRST .. l_list.LAST
UPDATE test_table SET name = l_list(indx);

COMMIT;
END;
/

REM Run in second session within 30 seconds....

BEGIN
UPDATE test_table SET name = 'PORGIE'
WHERE name = 'GEORGIE';
END;
/

Bi-directional cursors

Question:
I want to move back and forth within a cursor result set, comparing the contents of distinct rows. I also want to read from the end of the result set to the beginning. How can I do that inside a PL/SQL program?
Answer:
I assume that your question refers to server-side-only code that needs to traverse a cursor's result set. An entirely different set of considerations comes into play if you are talking about transferring data from the server to a user interface, such as a Web page, and allowing that front-end environment to flexibly move through result sets.
Sadly, Oracle does not yet support bi-directional access to cursor result sets (a.k.a. "scrollable cursors") through a PL/SQL interface. You might well find, however, that you can achieve the desired effect with a combination of the following:
• Multiple queries (each with different ORDER BY clauses that correspond to the different ways you need to traverse the result set)
• Analytic functions: As the SQL Reference states, "Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined. The window determines the range of rows used to perform the calculations for the current row..."
For tables with relatively small numbers of rows, the use of multiple queries might yield a satisfactory implementation. If, however, your result set is very large, you might run into some performance issues. In addition, you may still not be able to reference arbitrary rows within the result set as desired.
Fortunately, you can achieve the desired effect of a bi-directional cursor rather easily by caching the result in a PL/SQL collection. Once the data has been moved into the cache, you can move back and forth through the result set, compare rows, etc., with complete freedom and a high degree of efficiency. Here I will demonstrate how you can build and move through such a cache.
Recall that PL/SQL program data consumes Program Global Area (PGA) memory, distinct from the System Global Area (SGA), and that there is a separate PGA for each session connected to an Oracle instance. With large result sets, you are going to be manipulating lots of data, so the PGA will require lots of memory for the collection.
This technique will make the most sense under these circumstances:
• You are running this program for a small number of simultaneous sessions or it is a single batch process. You must have sufficient memory to hold the cache(s) you will create to emulate bi-directional cursors.
• The data in the result set is static (or you want to ignore any changes that occur once your program starts). Once you have copied your result set to your collection-based cache, any changes to the tables that contributed to your result set will not be reflected in the cache—even if those changes are committed in some other session. This is a "one off", static copy of the table (or whatever result set you have defined with your query).
Listing 1 offers an example of bi-directional cursor processing built around a collection of records with the same structure (and data) as the jokes table defined below:
CREATE TABLE jokes (
joke_id INTEGER,
title VARCHAR2(100),
text VARCHAR2(4000)
)
/
Listing 1
1 CREATE OR REPLACE PROCEDURE bidir_example
2 IS
3 TYPE joke_tt IS TABLE OF jokes%ROWTYPE
4 INDEX BY PLS_INTEGER;
5
6 joke_cache joke_tt;
7 l_row PLS_INTEGER;
8 BEGIN
9 SELECT *
10 BULK COLLECT INTO joke_cache
11 FROM jokes;
12
13 DBMS_OUTPUT.put_line ('From first to last...');
14 l_row := joke_cache.FIRST;
15
16 WHILE (l_row IS NOT NULL)
17 LOOP
18 DBMS_OUTPUT.put_line (' ' || joke_cache (l_row).title);
19 l_row := joke_cache.NEXT (l_row);
20 END LOOP;
21
22 DBMS_OUTPUT.put_line ('From last to first...');
23 l_row := joke_cache.LAST;
24
25 WHILE (l_row IS NOT NULL)
26 LOOP
27 DBMS_OUTPUT.put_line (' ' || joke_cache (l_row).title);
28 l_row := joke_cache.PRIOR (l_row);
29 END LOOP;
30
31 DBMS_OUTPUT.put_line ('Compare fifth row to twelfth row...');
32
33 IF LENGTH (joke_cache (5).title) > LENGTH (joke_cache (12).title)
34 THEN
35 DBMS_OUTPUT.put_line (' Fifth row title longer than twelfth.');
36 ELSE
37 DBMS_OUTPUT.put_line (' Fifth row title is not longer than twelfth.');
38 END IF;
39
40 joke_cache.DELETE;
41* END bidir_example;
In lines 3-6, I declare a collection type that mimics the structure of the jokes relational table, and then instantiate a variable based on that type (joke_cache).
In lines 9-11, I copy the contents of the jokes table into the cache, using the BULK COLLECT query syntax, the most efficient way to retrieve multiple rows of data into a PL/SQL program. Notice that I am retrieving all the rows in the jokes table. I do not use the LIMIT clause to fetch a subset of the result set's rows, because I have already established that I have sufficient memory to create the larger cache. As you explore places to apply this caching technique in your applications, you should strongly consider using the FETCH...BULK COLLECT...LIMIT syntax so that you can control the size of your PGA memory.
Lines 13-20 demonstrate the use of the FIRST and NEXT built-in collection methods to iterate through the collection (result set) from the first row to the last. Lines 22-29 show how to go in reverse, from the last row to the first using the LAST and PRIOR methods.
Finally, lines 31-38 show how you can compare values in different rows, simply by referencing the index value of the rows in which you are interested.

Best way to tell if row exists

Question:
What is the best approach to determining whether or not a row exists in a table? Currently I have implemented two different methods:
1. Do a SELECT COUNT(*) INTO lv_tester FROM table, then IF lv_tester > 0 THEN Some Logic.... ELSE RAISE error END IF;
2. Use a cursor loop and a local variable to see if a row was fetched inside the loop.
I was curious if there is a better approach that I am missing.
Answer:
Your first approach might be fine, depending on your usage. The second approach is generally something to be avoided. Let's take a closer look.
One of the wonderful things about PL/SQL (and pretty much any robust programming language) is that there are so many different ways to implement the functionality you need.
One of the terrible things about PL/SQL (and pretty much any robust programming language) is that there are so many different ways to implement the functionality you need.
What is the chance, a developer might ponder, that he or she has found the optimal implementation? It's enough to keep you up late at night, pondering the paradoxes of life as a programmer!
The general question you have asked is: "How do I best check to see if a row exists?"
To provide the most comprehensive answer, I would also need to hear the answers to these questions:
• What do you want to do after you get the answer to your question?
• Are you checking for existence of a row based on unique criteria (for example, a primary key) or criteria that might return multiple rows?
• Do you expect the check for existence to mostly fail or mostly succeed?
I will address these questions in my answer below.
What do you want to do after you get the answer to your question?
Suppose that if the row you are interested in does exist, you then need to fetch that record, change the values of the columns in that row, and more. In this case, you should simply execute a query or function (containing a query) to fetch the row of interest. Otherwise you will query the database to check for existence, and then query again to retrieve the row.
If you are going to use a query directly in your application code, you can rely on the fact that the implicit SELECT INTO will raise NO_DATA_FOUND if no rows are found, or TOO_MANY_ROWS if more than one is found.
My preference, however, is to hide the query inside a function that will generally trap the NO_DATA_FOUND and quietly return a record with null for every field of the return record. After calling this function, you can examine the contents of the row (now in the form of a PL/SQL record) to see if you fetched successfully (as in: is the record's primary key field null? This would be impossible unless no row was found). In this case, the answer to "Does the row exist?" comes "free" with the fetching of the row. Here is an example:

CREATE OR REPLACE FUNCTION one_employee (
employee_id_in IN employees.employee_id%TYPE
)
RETURN employees%ROWTYPE
IS
l_return employees%ROWTYPE;
BEGIN
SELECT *
INTO l_return
FROM employees
WHERE employee_id = employee_id_in;

RETURN l_return;
EXCEPTION
-- The record will be empty...
WHEN NO_DATA_FOUND THEN RETURN l_return;
END one_employee;
Now, if you only need to know if the row exists, but not actually manipulate the contents of that row, you are best off calling a function that returns a Boolean (TRUE = row found, FALSE = row not found), hiding the implementation details.
Multiple rows or single row for your criteria?
Now, suppose you only need to know if the row exists, but you don't need to manipulate its contents. In this case, we can then focus our attention on construction of a function to return a Boolean value: does the row exist? The way we implement the query inside the function depends in part on whether or not you expect to find more than one row matching your criteria. If you are checking for existence of a primary key or some other unique value, it makes perfect sense to use SELECT COUNT(*), as you mentioned in your question. It will either return 0 or 1, and you can return the Boolean value based on that result. Here is an example:

CREATE OR REPLACE FUNCTION employee_exists (
employee_id_in IN employees.employee_id%TYPE
)
RETURN BOOLEAN
IS
l_count PLS_INTEGER;
BEGIN
SELECT COUNT ( * )
INTO l_count
FROM employees
WHERE employee_id = employee_id_in;

RETURN l_count = 1;
END employee_exists;
With this function defined, I can then see if an employee exists as follows:

BEGIN
IF employee_exists (l_emp_id)
THEN
...
END IF;
END;
/
Now I do not have to write a query each time I need to get this information; I simply call my pre-defined subprogram, as though it were a service offered to me by my PL/SQL development environment.
By taking this approach (encapsulating queries behind functions), I:
• reuse logic, rather than write the same code again and again. This leads to a big boost in my productivity and in the maintainability of my code base.
• make it easier to optimize, fix or evolve my lookup query, if needed. It is in just one place, so I make the changes just once and the impact is felt across the entire application.
If your query might return multiple rows then I suggest you avoid SELECT COUNT(*) (unnecessary overhead; you don't really care how many rows there are; you just want to know if there is at least one) and instead write a function that executes a SELECT INTO based on the appropriate WHERE clause. The following program tells us, for example, if there are any employees in a given department.

CREATE OR REPLACE FUNCTION employee_in_dept_exists (
department_id_in IN employees.department_id%TYPE
)
RETURN BOOLEAN
IS
l_count PLS_INTEGER;
BEGIN
SELECT 1
INTO l_count
FROM employees
WHERE department_id = department_id_in;

RETURN TRUE;
EXCEPTION
WHEN NO_DATA_FOUND THEN RETURN FALSE;
WHEN TOO_MANY_ROWS THEN RETURN TRUE;
END employee_in_dept_exists;
Notice that I select the literal value 1 (one). It doesn't really matter what I select, since that value is not relevant here; the only issue is whether or not the query returns a row.
I have avoided COUNT(*) against the table of interest since it would be inefficient, and also answers the wrong question. You should only use COUNT(*) if you need to know the number of rows that match a certain criteria (or if your criteria will identify at most one row, as shown earlier).
There is also no reason to use a cursor FOR loop, since I am only retrieving a single piece of information: a Boolean, TRUE or FALSE, does the row exist? You should only use a cursor FOR loop if you intend to iterate through all the rows in a result set.
No, in this case the best approach is to rely on that old standby and efficient querying form: the SELECT INTO statement (aka the implicit one-row query). SELECT INTO will return data if a single row is found (which means the row exists). SELECT INTO will raise NO_DATA_FOUND if no row is found (the row does not exist). And it will raise TOO_MANY_ROWS if more than one row is found (in which case the row or rows do exist).
Is this approach the most efficient? Hard to say. There are many ways to write a query to check for a single row. Certainly, there is overhead associated with raising and handling exceptions, which would make this approach less desirable if you know that in many cases, your function will return FALSE (no row found).
For example, Bryn Llewellyn, PL/SQL Product Manager, offered this alternative when he reviewed my answer:

CREATE OR REPLACE FUNCTION employee_in_dept_exists (
department_id_in IN employees.department_id%TYPE
)
RETURN BOOLEAN
IS
l_count PLS_INTEGER;
BEGIN
SELECT COUNT ( * )
INTO l_count
FROM DUAL
WHERE EXISTS ( SELECT 1
FROM employees
WHERE department_id = department_id_in );

RETURN l_count = 1;
END employee_in_dept_exists;
I am sure you will have your own ideas about how to implement your query. However you write that SELECT statement, though, be sure to put it inside a function so that it can be easily reused, the details of the implementation are hidden, and you can easily change that implementation as future releases of Oracle Database make new approaches possible.

Best Practices for String Procedures and Tracing

Question:
What is the best way to write generic procedures to work on strings and/or lists of strings? For example, what is the best way to write a function to parse a delimited string into a collection, with a specific separator? I assume that function parameters should be declared as VARCHAR2 (with no length) and "internal" strings should be declared as VARCHAR2(32767). What is the best way of representing a list of strings in the most generic way possible? Also, is there a good way to extend the code to support CLOBs (character large objects) as well, without duplicating the code (to cover every possible combination of parameter types)?
Answer:
Sounds as if you like to write generic, reusable code. So do I. It is intellectually stimulating and can save me lots of time, because I can avoid writing the same things over and over again.
On the other hand, it is also important to recognize that a generic program is usually more complex and harder to test than more-specific variations on an algorithm. Working out requirements for something that should be truly generic is never as easy as it seems at first glance. Furthermore, we often venture unknowingly into the dangerous terrain of overdesign. "Gee, why don't I make this program really flexible by adding five other parameters to modify the default behavior of the program?"
Before I answer your questions, I'd like to offer some high-level advice on your next generic utility creation adventure:
• Don't overdesign. Be very practical about what you need now and what you can clearly see you will need in the near future. Don't daydream about possible requirements as you design your program.
• Don't overcode. Make sure that you hew closely to the design you have settled on. Again, when you are writing generic code, you will find yourself thinking about more and more possibilities, even as you write the program.
• Cut your losses. You might find as you start to build your generic program that it is tougher than you expected. Listen to your intuition. If the little voice inside your head is saying, "Yes, I know I can do this . . . but is it really all that important?" you should think seriously about abandoning your generic approach and instead write a more specific program from scratch that meets your current requirements.
Having said all that, your question covers three areas:
1. How should you declare "internal" strings—local variables inside your generic functions? When declaring a local variable of type VARCHAR2, you must specify a maximum length. The question then becomes how best to do this. Should you declare it as VARCHAR2(32767), because that is the largest possible value? Will that use excessive amounts of memory, if only small string values are manipulated?
2. What is the best way to declare and work with lists of strings?
3. How can you best write programs to work with both VARCHAR2 variables and CLOBs?
Here are my answers, followed by examples and additional explanations:
1. The best way to declare your local variables is to define a subtype that is based on the biggest possible string size in a shared utility package. All of your string functions can then reference that subtype when declaring local variables. If you create a single package containing all your string functions, you can declare the subtype in that package. And you will not use up any more memory than needed, because Oracle Database allocates memory as needed for string variables declared with a length of 32,767 characters.
2. To work with a list of strings, you need to declare the list based on an already-defined collection type. You can use a type offered by Oracle Database, such as DBMS_SQL.VARCHAR2S or DBMS_SQL.VARCHAR2A. You can also define your own type and then reference that type.
3. You should use CLOBs only when you need to manipulate strings longer than 32,767 bytes or characters in length. Assuming, then, that you are running Oracle9i Database Release 2, you should generally be able to use the same code for many of your functions, because the newer versions of Oracle Database allow you to execute built-in string functions natively against CLOBs, as well as VARCHAR2 string variables.
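As a quick illustration of that third point, in Oracle9i Database Release 2 and later many of the familiar string built-ins can be applied directly to a CLOB. Here is a minimal sketch (my own example, not from the original column):

```sql
DECLARE
   l_doc CLOB := TO_CLOB ('parsing CLOBs is easy');
   l_pos PLS_INTEGER;
BEGIN
   -- INSTR and SUBSTR accept CLOB arguments natively, so the same
   -- code shape works for both VARCHAR2 and CLOB values.
   l_pos := INSTR (l_doc, 'CLOB');
   DBMS_OUTPUT.put_line (SUBSTR (l_doc, l_pos, 5));  -- prints CLOBs
END;
/
```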
Let's take a look at each of these answers in more detail.
Avoiding hard-coded VARCHAR2 constraints. A program I often use to demonstrate various best-practices techniques is a simple variant on SUBSTR, which I call BETWNSTR. The built-in SUBSTR function takes a string, the starting location, and the number of characters, and returns that portion of the string. You can get more information about SUBSTR from the Oracle Database SQL Reference.
SUBSTR is dandy, but I often have the starting and ending locations in a string and want the substring between those two positions. To use SUBSTR, I must calculate the number of characters between start and end. I find that I always forget the relevant formula (end - start + 1). So I put together an itty-bitty function to remember the formula, as follows:
FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN PLS_INTEGER
, end_in IN PLS_INTEGER
)
RETURN VARCHAR2
IS
l_return VARCHAR2(32767);
BEGIN
l_return := SUBSTR ( string_in, start_in, end_in - start_in + 1 );
RETURN l_return;
END betwnstr;
Here are two notes to make about this implementation:
• I have declared a local variable, but I don't really need it here. It is present to demonstrate the best practice of how to best declare such variables.
• This is a simplified implementation of BETWNSTR to demonstrate best practices. It does not handle all the cases a real "between string" utility should.
Code Listing 1: string_pkg with BETWNSTR
CREATE OR REPLACE PACKAGE string_pkg
IS
SUBTYPE maxvarchar2_t IS VARCHAR2 ( 32767 );

FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN PLS_INTEGER
, end_in IN PLS_INTEGER
)
RETURN VARCHAR2;
END string_pkg;
/

CREATE OR REPLACE PACKAGE BODY string_pkg
IS
FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN PLS_INTEGER
, end_in IN PLS_INTEGER
)
RETURN VARCHAR2
IS
l_return maxvarchar2_t;
BEGIN
l_return := SUBSTR ( string_in
, start_in
, end_in - start_in + 1 );
RETURN l_return;
END betwnstr;
END string_pkg;
/
Now I am going to move the BETWNSTR function into a package and remove the hard-coding from the local variable declaration, as shown in Listing 1. Notice that I have now declared the following subtype:
SUBTYPE maxvarchar2_t IS
VARCHAR2 ( 32767 );
I use that subtype in my declaration of the local variable.
It is true that I still have the number 32767 hard-coded in my package, but it appears only once and serves as the single point of definition for the new, biggest-allowed VARCHAR2 datatype.
If Oracle Database increases the maximum size for a VARCHAR2, I can make a change in one place and recompile string_pkg and all programs that use this package, and then my application will be able to immediately take advantage of the expanded datatype.
Work with lists of strings. As noted above, to work with a list of strings you need to declare the list based on an already-defined collection type. You can use a type offered by Oracle Database, such as DBMS_SQL.VARCHAR2S (maximum of 255 characters per string) or DBMS_SQL.VARCHAR2A (maximum of 32,767 characters per string, introduced in Oracle9i Database Release 2). You can also define your own type, again perhaps in your centralized string package, and then reference that type.
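To make those two predefined types concrete, declaring and populating lists based on them takes nothing more than this (a minimal sketch):

```sql
DECLARE
   l_short_strings DBMS_SQL.varchar2s;  -- elements up to 255 bytes each
   l_long_strings  DBMS_SQL.varchar2a;  -- elements up to 32,767 bytes each
BEGIN
   -- Associative arrays indexed by integer; assign directly to any index.
   l_short_strings (1) := 'first element';
   l_long_strings (1)  := RPAD ('*', 32767, '*');

   DBMS_OUTPUT.put_line (
      LENGTH (l_long_strings (1)) || ' characters in the long element' );
END;
/
```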
Suppose I want to add a function to my string package that accepts a list of elements separated by a delimiter (a comma-delimited list, or a string such as that returned by DBMS_UTILITY.FORMAT_CALL_STACK, delimited by new-line characters) and returns a collection whose elements are those strings between delimiters. I can take one of two approaches:
1. Use a collection type defined by Oracle Database and available automatically to all programmers, such as those found in the DBMS_SQL package. Here is an example of the function header for this approach:
CREATE OR REPLACE PACKAGE
string_pkg
IS
SUBTYPE maxvarchar2_t
IS VARCHAR2 ( 32767 );

FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN PLS_INTEGER
, end_in IN PLS_INTEGER
)
RETURN VARCHAR2;

FUNCTION list_to_collection (
string_in IN VARCHAR2
, delim_in IN VARCHAR2
DEFAULT ','
)
RETURN DBMS_SQL.varchar2a;
END string_pkg;
/
2. Alternatively, I define my own collection type in the package and then use that in the function interface:
CREATE OR REPLACE PACKAGE
string_pkg
IS
SUBTYPE maxvarchar2_t IS
VARCHAR2 ( 32767 );

TYPE maxvarchar2_aat IS
TABLE OF maxvarchar2_t
INDEX BY PLS_INTEGER;
FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN PLS_INTEGER
, end_in IN PLS_INTEGER
)
RETURN VARCHAR2;

FUNCTION list_to_collection (
string_in IN VARCHAR2
, delim_in IN VARCHAR2
DEFAULT ','
)
RETURN maxvarchar2_aat;
END string_pkg;
/
The first option (using DBMS_SQL.VARCHAR2A) saves some typing and can be used with programs that are already using collections based on the DBMS_SQL type. The downside is that it introduces an external dependency in this package. Because the DBMS_SQL package is present in all versions of Oracle Database, it's hard to get too worried about this external dependency.
Generally, though, you are better off avoiding such dependencies or at least minimizing the number of dependencies in a program. The second option (a collection type defined directly within the package specification) allows you to define a completely self-contained utility.
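This excerpt never shows the body of list_to_collection itself. As a sketch of how such a parser might be written (my own illustration, not necessarily the author's exact implementation, though it reuses the local names that appear in the cloblist_to_collection declaration later in this answer):

```sql
FUNCTION list_to_collection (
   string_in IN VARCHAR2
 , delim_in  IN VARCHAR2 DEFAULT ','
)
   RETURN maxvarchar2_aat
IS
   l_loc      PLS_INTEGER;
   l_row      PLS_INTEGER := 1;
   l_startloc PLS_INTEGER := 1;
   l_return   maxvarchar2_aat;
BEGIN
   -- Simplified sketch: assumes string_in is not NULL.
   LOOP
      -- Find the next delimiter, starting after the last one found.
      l_loc := INSTR (string_in, delim_in, l_startloc);

      IF l_loc = 0
      THEN
         -- No more delimiters: the remainder is the last item.
         l_return (l_row) := SUBSTR (string_in, l_startloc);
         EXIT;
      ELSE
         l_return (l_row) :=
            SUBSTR (string_in, l_startloc, l_loc - l_startloc);
         l_startloc := l_loc + LENGTH (delim_in);
         l_row := l_row + 1;
      END IF;
   END LOOP;

   RETURN l_return;
END list_to_collection;
```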
Functions for VARCHAR2s and CLOBs. In Oracle9i Database Release 2 and above, you can in many circumstances treat VARCHAR2 and CLOB interchangeably. Visit the "PL/SQL Semantics for LOBs" section in the Oracle Database Application Developer's Guide—Large Objects, to get more details about how Oracle Database will implicitly convert between VARCHAR2 and CLOB, and how you can apply many of the standard VARCHAR2 built-in functions directly to CLOBs.
Because CLOBs can be much larger than VARCHAR2 strings, you can think of a VARCHAR2 variable as being (logically) a subtype of CLOB. So a function that works with CLOBs should also work with VARCHAR2s.
You might then be tempted to say, "Let's just use CLOBs as the standard datatype for string functions," and build a VARCHAR2 "skin" that simply invokes the CLOB-based algorithm for its implementation. Given the performance characteristics of CLOBs, however, you should use a CLOB only when you are sure that VARCHAR2(32767) is not big enough.
So from a performance standpoint, I will want to implement the CLOB variation of list_to_collection, one that accepts a CLOB and returns a collection of CLOBs, with a copy-and-paste operation, simply replacing the VARCHAR2 declarations with CLOBs wherever appropriate.
And until the performance characteristics of CLOBs improve, you will generally want to create separate programs for this datatype.
Avoiding the hard-coding of the 32,767 literal. Now, I have a problem with the above program: I have hard-coded the 32767 maximum length into my call to SUBSTR. I could get around this problem by adding a constant to the package specification like this:
CREATE OR REPLACE PACKAGE string_pkg
IS
c_max_varchar2_len CONSTANT PLS_INTEGER := 32767;
SUBTYPE maxvarchar2_t IS VARCHAR2 (32767);
I can then use this constant in my call to SUBSTR:
16 FOR clob_index IN 1 .. l_clobs.COUNT
17 LOOP
18 l_return ( clob_index ) :=
19 SUBSTR ( l_clobs ( clob_index )
20 , 1
21 , c_max_varchar2_len );
22 END LOOP;
This is better, but I still have multiple occurrences of the 32767 literal in my package specification.
If I am properly obsessed with avoiding repetition of such values in my code, I could take things a step further and take advantage of conditional compilation as follows:
ALTER SESSION SET plsql_ccflags = 'max_varchar2_length:32767'
/

CREATE OR REPLACE PACKAGE string_pkg
IS
SUBTYPE maxvarchar2_t IS VARCHAR2 ( $$max_varchar2_length );
and then my function implementation would look like this:
16 FOR clob_index IN 1 .. l_clobs.COUNT
17 LOOP
18 l_return ( clob_index ) :=
19 SUBSTR ( l_clobs ( clob_index )
20 , 1
21 , $$max_varchar2_length );
22 END LOOP;
For more information on conditional compilation, check out:
/technology/tech/pl_sql/pdf/Plsql_Conditional_Compilation.pdf
/technology/tech/pl_sql/pdf/Plsql_Conditional_Compilation_PPT.pdf
Conclusions. We should all pay attention to the opportunities to create reusable programs, especially handy string functions. In the process of creating these generic utilities, we should do everything we can to make the code easy to maintain (avoid hard-codings and redundant algorithms), while still ensuring that the code is efficient enough to be useful in a production environment.
Code Listing 2: string_fun Package with full BETWNSTR implementation
The following script was created by Bryn Llewellyn, PL/SQL Product Manager, to compare VARCHAR2 and CLOB performance.
Code Listing 3: Script to compare VARCHAR2 and CLOB performance
Using the CLOB implementation as the foundation algorithm. Recognizing that CLOBs are slower than VARCHAR2s, I thought I would still show you how you could implement the CLOB version of "list to collection" as the foundation algorithm and then implement the VARCHAR2 version on top of that. It is a useful technique for avoiding duplication of application logic, one that you should be entirely comfortable applying to your own situations. Of course, avoiding code redundancy usually cannot trump performance bottlenecks.
Here we go...
I will add a CLOB variation of list_to_collection to my package, one that accepts a CLOB and returns a collection of CLOBs, each element of which contains a delimited item from the incoming CLOB.
My string package specification now needs another collection type and function header:
CREATE OR REPLACE PACKAGE string_pkg
IS
... previous VARCHAR2 elements ...

TYPE clob_aat IS TABLE OF CLOB
INDEX BY PLS_INTEGER;

FUNCTION cloblist_to_collection (
string_in IN CLOB
, delim_in IN VARCHAR2 DEFAULT ','
)
RETURN clob_aat;
END string_pkg;
/
Now, I could implement this function with a copy-and-paste operation, but I would really much rather have just one instance of the parsing algorithm used in the list_to_collection function. So here is what I will do:
1. Create cloblist_to_collection by copying list_to_collection.
2. Change all relevant declarations from VARCHAR2 to the CLOB types (individual variables and collection types).
3. Replace the algorithm inside list_to_collection with a call to cloblist_to_collection.
Let's take a look. Here is the header and declaration section of cloblist_to_collection:
FUNCTION cloblist_to_collection (
string_in IN CLOB
, delim_in IN VARCHAR2 DEFAULT ','
)
RETURN clob_aat
IS
l_loc PLS_INTEGER;
l_row PLS_INTEGER := 1;
l_startloc PLS_INTEGER := 1;
l_return clob_aat;
Notice that I have changed the string_in datatype to CLOB, and return the clob_aat collection type instead of maxvarchar2_aat. The body of this function remains unchanged — it looks exactly the same as it did when it implemented list_to_collection.
Now I must "reinvent" the list_to_collection function. Here is my implementation, with explanation below:
1 FUNCTION list_to_collection (
2 string_in IN VARCHAR2
3 , delim_in IN VARCHAR2 DEFAULT ','
4 )
5 RETURN maxvarchar2_aat
6 IS
7 l_clobs clob_aat;
8 l_return maxvarchar2_aat;
9 BEGIN
10 -- Parse the string as a CLOB.
11 l_clobs := cloblist_to_collection (
12 TO_CLOB ( string_in ), delim_in );
13
14 -- Copy the individual items to the string collection.
15 -- Use SUBSTR to avoid VALUE_ERROR exceptions.
16 FOR clob_index IN 1 .. l_clobs.COUNT
17 LOOP
18 l_return ( clob_index ) :=
19 SUBSTR ( l_clobs ( clob_index )
20 , 1
21 , 32767);
22 END LOOP;
23
24 RETURN l_return;
25* END list_to_collection;


Lines 7-8. Declare a CLOB collection for use in the call to cloblist_to_collection, and another collection to be returned by the function.
Lines 11-12. Convert the VARCHAR2 string to a CLOB and then pass it to cloblist_to_collection to do the parsing.
Lines 16-22. Copy the contents of the CLOB collection to the "maximum VARCHAR2" collection. Use SUBSTR to take only the first 32,767 characters of each CLOB, to avoid raising VALUE_ERROR.
Note: instead of truncating (and possibly losing string data) with SUBSTR, you might want to adjust the algorithm to break up individual items that are longer than 32,767 characters and pass them back as separate items in the "maximum VARCHAR2" collection.
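A hedged sketch of that adjustment follows (l_out_index is a hypothetical extra local; the rest assumes the l_clobs and l_return collections from the listing above):

```sql
-- Replaces the copy loop (lines 16-22): instead of truncating, split each
-- item into 32,767-character chunks and append each chunk as its own element.
l_out_index := 0;

FOR clob_index IN 1 .. l_clobs.COUNT
LOOP
   FOR chunk_index IN
      1 .. CEIL (DBMS_LOB.getlength (l_clobs (clob_index)) / 32767)
   LOOP
      l_out_index := l_out_index + 1;
      l_return (l_out_index) :=
         DBMS_LOB.SUBSTR ( l_clobs (clob_index)
                         , 32767                            -- amount
                         , ( chunk_index - 1 ) * 32767 + 1  -- offset
                         );
   END LOOP;
END LOOP;
```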

Best Practices for Retrieving Objects

Question:
I have declared a varray of an object type that has three attributes, of datatypes NUMBER, VARCHAR2, and another object type, respectively. How can I retrieve the third attribute (the object type) from the varray?
Answer:
Let's construct a varray (a type of collection in which you specify the maximum number of elements that may be defined in the collection) that follows your requirements, and I will show how you can reference each and every part of it. Because I like to eat, I will work with a food paradigm for the example.
First I create an object type for general food things; its three attributes let me keep track of the name of the food item, its food group, and the dominant color of the food:
CREATE TYPE food_t AS OBJECT (
name VARCHAR2 ( 100 )
, food_group VARCHAR2 ( 100 )
, color VARCHAR2 ( 100 )
);
/
Next, I create a meal object type composed of the number of people served, the type of meal, and the food served:
CREATE TYPE meal_t AS OBJECT (
number_served INTEGER
, meal_type VARCHAR2 ( 100 )
, food_served food_t
);
/
Now I create a varray of up to three elements to hold all the meals in a day:
CREATE TYPE meals_vat
IS VARRAY ( 3 ) OF meal_t;
/
Next I write the code in Listing 1 to populate the varray with three meals for the day: breakfast, lunch, and dinner.
Code Listing 1: Populating the varray
DECLARE
-- A locally defined varray initialized with no elements.
l_one_day_of_meals meals_vat := meals_vat ( );
BEGIN
-- Make room for the three meals.
l_one_day_of_meals.EXTEND ( 3 );

-- Add breakfast, using the constructor for both the meal
-- and within it the food object type instance.
l_one_day_of_meals ( 1 ) :=
meal_t ( 4, 'BREAKFAST'
, food_t ( 'Scrambled Eggs', 'Protein', 'Yellow' ));

-- Add lunch
l_one_day_of_meals ( 2 ) :=
meal_t ( 6, 'LUNCH'
, food_t ( 'Deluxe Salad', 'Vegetables', 'Mostly Green' ));

-- Add dinner
l_one_day_of_meals ( 3 ) :=
meal_t ( 10, 'DINNER'
, food_t ( 'Tofu and Rice', 'Protein', 'White' ));
END;
/
Note that I use constructor functions (functions that have the same name as the type on which they are defined, created by Oracle Database) to create object type instances in line with the assignment. I could also declare local variables to hold the intermediate objects, as in the following:
DECLARE
-- A locally defined varray
-- initialized with no elements.
l_one_day_of_meals meals_vat
:= meals_vat ( );
-- A local object type
-- instance for breakfast
l_breakfast food_t
:= food_t ( 'Scrambled Eggs',
'Protein', 'Yellow' );
BEGIN
l_one_day_of_meals.EXTEND;
l_one_day_of_meals ( 1 )
:= meal_t ( 4, 'BREAKFAST',
l_breakfast );
END;
/
That is how to add elements to the varray. Now let's access the values within an element in the varray using the code shown in Listing 2.
Code Listing 2: Accessing an element in a varray
SQL> DECLARE
2 -- A locally defined varray initialized with one element.
3 l_one_day_of_meals meals_vat
4 := meals_vat ( meal_t ( 4
5 , 'BREAKFAST'
6 , food_t ( 'Scrambled Eggs', 'Protein', 'Yellow' )
7 )
8 );
9 BEGIN
10 -- If more than 2 people are served,
11 -- then show the name of the food.
12 IF l_one_day_of_meals ( 1 ).number_served > 2
13 THEN
14 DBMS_OUTPUT.put_line (
15 l_one_day_of_meals ( 1 ).food_served.name );
16 END IF;
17 END;
Note the interesting lines in Listing 2:
Lines 4-8. In this block, I initialize the varray with a single element, calling both the meal_t and food_t constructor functions to load up the first row in the varray with my breakfast data.
Line 12. I obtain the number of people served breakfast by specifying the varray and then the index in that collection:
l_one_day_of_meals ( 1 )
Line 15. I show the name of the food served for breakfast using dot notation (specifying object.attribute, just as I would specify a table's column with table.column) to get to the attribute of the element in the varray:
l_one_day_of_meals ( 1 ).food_served
The food_served attribute is, however, very different from number_served. Rather than being a scalar value, it is itself an object. So I can obtain the value of an attribute of that object by using dot notation once again, followed by the name of that attribute:
l_one_day_of_meals ( 1 ).food_served.name
That should give you a solid understanding of how to reference elements and subelements in these complex structures.
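Putting it all together, the same dot notation combines naturally with a numeric FOR loop to traverse every element and subelement (a minimal sketch, assuming the populated l_one_day_of_meals varray from Listing 1 is in scope):

```sql
-- Print the meal type and food name for each meal in the varray.
FOR meal_index IN 1 .. l_one_day_of_meals.COUNT
LOOP
   DBMS_OUTPUT.put_line (
         l_one_day_of_meals (meal_index).meal_type
      || ': '
      || l_one_day_of_meals (meal_index).food_served.name );
END LOOP;
```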

Best Practices for Managing Old and New Information

Question:
Hello Steven, my question is about Oracle triggers. I want to audit updates, so I need to store the before and after values of any column of a specified table. To achieve this, I used the :NEW and :OLD structures with hard-coded column names, but this causes difficulties when columns are added or dropped. I want something more dynamic, without coding the column names directly. If it is possible, could you help me? Here is the current implementation.

Table definition:

CREATE TABLE load_a ( a1 VARCHAR2 (10), a2 VARCHAR2 (10) );

Trigger:

CREATE OR REPLACE TRIGGER ins_load_a
AFTER UPDATE ON scott.load_a
FOR EACH ROW
DECLARE
   a_rec scott.load_a%ROWTYPE;
BEGIN
   a_rec.a1 := :OLD.a1;   -- (1)
   a_rec.a2 := :OLD.a2;   -- (2)
   save_old_values (a_rec);
END;

Thanks.
Answer:
The bad news is that up through Oracle Database 10g, you cannot pass :OLD and :NEW as arguments to procedures. The good news is that at least you don't have to write all the code needed to accomplish your goal.
Let's take a step back: you want to pass the old and new values available inside a trigger to a stored procedure to process the data (write the information to a log table, execute validation logic, and so on). You cannot pass :OLD and :NEW as records; instead, you must copy the individual fields of those pseudorecords to a real record that can be passed to the procedure.
It certainly is tedious and time-consuming to write that code yourself, especially for tables with many columns. Fortunately, it is quite easy to generate the code you desire.
I offer you the gen_audit_trigger_text procedure shown in Listing 1 to help you achieve your goal. I ran this program for the employees table and, after some formatting, had the resulting code shown in Listing 2.
Code Listing 1: gen_audit_trigger_text
CREATE OR REPLACE PROCEDURE gen_audit_trigger_text (
table_in IN VARCHAR2
, owner_in IN VARCHAR2 := USER
, program_name_in IN VARCHAR2 := 'process_data'
)
IS
c_rowtype CONSTANT VARCHAR2 (100) := table_in || '%ROWTYPE';
l_columns DBMS_SQL.varchar2s;

PROCEDURE gen_copy_proc (old_or_new_in IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.put_line ( 'FUNCTION copy_'
|| old_or_new_in
|| ' RETURN '
|| c_rowtype
|| ' IS l_return '
|| c_rowtype
|| '; BEGIN '
);

FOR indx IN 1 .. l_columns.COUNT
LOOP
DBMS_OUTPUT.put_line ( ' l_return.'
|| l_columns (indx)
|| ' := '
|| ':'
|| old_or_new_in
|| '.'
|| l_columns (indx)
|| ';'
);
END LOOP;

DBMS_OUTPUT.put_line ('RETURN l_return;');
DBMS_OUTPUT.put_line ('END copy_' || old_or_new_in || ';');
END gen_copy_proc;
BEGIN
SELECT LOWER (column_name) column_name
BULK COLLECT INTO l_columns
FROM all_tab_columns
WHERE owner = UPPER (owner_in) AND table_name = UPPER (table_in);

DBMS_OUTPUT.put_line ('DECLARE');
DBMS_OUTPUT.put_line (' my_Old ' || table_in || '%ROWTYPE;');
DBMS_OUTPUT.put_line (' my_New ' || table_in || '%ROWTYPE;');
gen_copy_proc ('old');
gen_copy_proc ('new');
DBMS_OUTPUT.put_line ('BEGIN');
DBMS_OUTPUT.put_line (' my_Old := copy_Old ();');
DBMS_OUTPUT.put_line (' my_New := copy_New ();');
DBMS_OUTPUT.put_line (' ' || program_name_in || '(my_Old, my_new);');
DBMS_OUTPUT.put_line ('END;');
END gen_audit_trigger_text;
/
Code Listing 2: Result of gen_audit_trigger_text procedure on employees table
DECLARE
my_old employees%ROWTYPE;
my_new employees%ROWTYPE;

FUNCTION copy_old
RETURN employees%ROWTYPE
IS
l_return employees%ROWTYPE;
BEGIN
l_return.employee_id := :OLD.employee_id;
l_return.first_name := :OLD.first_name;
l_return.last_name := :OLD.last_name;
l_return.email := :OLD.email;
l_return.phone_number := :OLD.phone_number;
l_return.hire_date := :OLD.hire_date;
l_return.job_id := :OLD.job_id;
l_return.salary := :OLD.salary;
l_return.commission_pct := :OLD.commission_pct;
l_return.manager_id := :OLD.manager_id;
l_return.department_id := :OLD.department_id;
RETURN l_return;
END copy_old;

FUNCTION copy_new
RETURN employees%ROWTYPE
IS
l_return employees%ROWTYPE;
BEGIN
l_return.employee_id := :NEW.employee_id;
l_return.first_name := :NEW.first_name;
l_return.last_name := :NEW.last_name;
l_return.email := :NEW.email;
l_return.phone_number := :NEW.phone_number;
l_return.hire_date := :NEW.hire_date;
l_return.job_id := :NEW.job_id;
l_return.salary := :NEW.salary;
l_return.commission_pct := :NEW.commission_pct;
l_return.manager_id := :NEW.manager_id;
l_return.department_id := :NEW.department_id;
RETURN l_return;
END copy_new;
BEGIN
my_old := copy_old ();
my_new := copy_new ();
process_data (my_old, my_new);
END;
You should be able to enhance gen_audit_trigger_text in a straightforward way to also generate the header of your trigger; I leave that as an exercise for the reader.
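If you take up that exercise, the additional generation amounts to a couple more DBMS_OUTPUT.put_line calls bracketing the existing output. A hedged sketch (the aud_ trigger-name prefix is my own convention, not from the original column):

```sql
-- Emit a CREATE TRIGGER header before the generated DECLARE section,
-- and a terminating slash after the generated END; statement.
DBMS_OUTPUT.put_line (   'CREATE OR REPLACE TRIGGER aud_'
                      || table_in
                      || ' AFTER UPDATE ON '
                      || owner_in || '.' || table_in
                      || ' FOR EACH ROW' );

-- ... the existing DECLARE/BEGIN/END generation goes here ...

DBMS_OUTPUT.put_line ('/');
```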

Best practices for invoker rights and functions

Question:
I have made lots of use of the AUTHID CURRENT_USER (invoker rights) clause lately. I frequently write utilities for other developers on my team, and I define them in a central schema named SHARED_CODE. When I define a program with invoker rights, all developers can call that program and it will automatically perform the desired operations on that developer's own schema objects. But recently I ran into a problem with invoker rights. On our next project, we are going to be working a lot with operating system files via UTL_FILE. In particular, we will be writing text out to files, and that text is going to be formatted in various ways. So I decided I would write a "write to file" engine that would do all the UTL_FILE operations but let the users specify the formatting by calling their own functions. That way they would have maximum flexibility to do that formatting. But it doesn't seem to work. I define my "write to file" program with invoker rights, but it refuses to call the invoker's formatting routine. What am I doing wrong?
Answer:
You aren't doing anything wrong. Instead, you ran straight into a restriction on how invoker rights works. Fortunately, I can help you work around that restriction and get to the end point you desire.
Generally, what you need to be able to do is write a generic program that issues a callback to a subprogram whose name you do not know when you write your program. Thus, your program needs to figure out at runtime which program to call.
One way to implement runtime callbacks is with dynamic polymorphism and Oracle object types. That is a rather advanced topic that will need to be addressed another day. In this answer, I show how you can use runtime callbacks and invoker rights to achieve the desired result.
Let's start with some background on invoker rights, which applies to all runtime SQL statements (both static and dynamic) executed within a PL/SQL unit. With invoker rights, both name resolution and computation of relevant privileges occur at runtime (and for the latter, roles are recognized and used to compute those privileges). Invoker rights affects the following statements:
• SELECT, INSERT, UPDATE, and DELETE data manipulation statements
• The LOCK TABLE transaction control statement
• OPEN and OPEN-FOR cursor control statements
• All dynamic SQL statements (EXECUTE IMMEDIATE, OPEN FOR, and DBMS_SQL)
Suppose I include the AUTHID CURRENT_USER clause in the header of my program. When I compile that program, the compiling schema will still need directly granted privileges on all objects the program references.
Then I grant authority to other schemas to execute my program. When a user connected to one of those schemas calls my program, references to database objects in any of the above statements will be resolved according to the privileges (direct or granted through roles) of the currently connected schema.
One obvious conclusion to draw from this is that the AUTHID CURRENT_USER clause will not be applied to static invocations of other subprograms made within the invoker rights program.
A static invocation of a program is one that is resolved at compile time. For example, in the following block of code, proc1 calls proc2 statically.
PROCEDURE proc1
IS
BEGIN
proc2 ();
END proc1;
When I compile proc1 in schema ABC, Oracle Database resolves the reference to proc2 according to the privileges of that schema; suppose it finds a schema-level procedure named proc2 defined in ABC. I then grant EXECUTE authority on ABC.proc1 to another schema, such as DEF, which has its own procedure called proc2. When a user connected to DEF executes ABC.proc1, that user will always run ABC.proc2, not DEF.proc2.
So does that mean that it's impossible to create a working "write to file" program with invoker rights? Not at all. You can, in fact, instruct Oracle to dynamically change which program it calls at runtime, but to do this, you need to take advantage of dynamic PL/SQL.
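In outline, the workaround replaces the static call with a dynamically executed block, so the reference is resolved at runtime according to the invoker's name resolution and privileges. A minimal sketch, built on the proc1/proc2 example above:

```sql
CREATE OR REPLACE PROCEDURE proc1
   AUTHID CURRENT_USER
IS
BEGIN
   -- The block is compiled and resolved at runtime, so proc2 now
   -- resolves against the invoking schema, not the owner of proc1.
   EXECUTE IMMEDIATE 'BEGIN proc2 (); END;';
END proc1;
/
```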
Let's look at two simple demonstrations of the impact of invoker rights, and then I will demonstrate the dynamic PL/SQL "workaround."
First, let's consider AUTHID CURRENT_USER with data objects. In Listing 1, I create an invoker rights procedure, show_my_data, in the HR schema to show the number of rows in the my_data table (10). I grant the EXECUTE privilege to SCOTT to run that program. SCOTT also has a my_data table, with just 1 row. When SCOTT runs HR.show_my_data, the program displays 1 and not 10.
Code Listing 1: Create and run show_my_data
SQL> CONNECT hr/hr
Connected.

SQL> CREATE TABLE my_data (VALUE NUMBER)
2 /

Table created.

SQL> BEGIN
2 FOR indx IN 1 .. 10
3 LOOP
4 INSERT INTO my_data
5 VALUES (indx);
6 END LOOP;
7
8 COMMIT;
9 END;
10 /

PL/SQL procedure successfully completed.

SQL> CREATE OR REPLACE PROCEDURE show_my_data
2 AUTHID CURRENT_USER
3 IS
4 num PLS_INTEGER;
5 BEGIN
6 SELECT COUNT (*)
7 INTO num
8 FROM my_data;
9
10 DBMS_OUTPUT.put_line ('Count of my_data = ' || num);
11 END show_my_data;
12 /

Procedure created.

SQL> GRANT EXECUTE ON show_my_data TO scott
2 /

Grant succeeded.

SQL> CONNECT scott/tiger
Connected.

SQL> CREATE TABLE my_data (VALUE NUMBER)
2 /

Table created.

SQL> BEGIN
2 INSERT INTO my_data
3 VALUES (1);
4
5 COMMIT;
6 END;
7 /

PL/SQL procedure successfully completed.

SQL> CALL hr.show_my_data();
Count of my_data = 1
As you can see, even though I ran HR's show_my_data procedure, that program showed me the count of SCOTT's my_data table.
But now let's try to create a "write to file" program. First, I set up a database directory owned by HR and create the "default" formatting function (uppercase the text):
CONNECT hr/hr

CREATE DIRECTORY temp AS 'c:\temp'
/
GRANT WRITE ON DIRECTORY temp
TO scott
/

CREATE OR REPLACE FUNCTION
format_line (line_in IN VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
RETURN UPPER (line_in);
END format_line;
/
Then I build a very simple program, shown in Listing 2, that simulates "write to file" functionality, and I make sure that SCOTT can call it.
Code Listing 2: Create write_to_file
CREATE OR REPLACE PROCEDURE write_to_file (
dir_in IN VARCHAR2
, file_name_in IN VARCHAR2
, lines_in IN DBMS_SQL.varchar2s
)
AUTHID CURRENT_USER
IS
l_file UTL_FILE.file_type;
BEGIN
l_file :=
UTL_FILE.fopen (LOCATION => dir_in
, filename => file_name_in
, open_mode => 'W'
, max_linesize => 32767
);

FOR indx IN 1 .. lines_in.COUNT
LOOP
UTL_FILE.put_line (l_file, format_line (lines_in (indx)));
END LOOP;

UTL_FILE.fclose (l_file);
END write_to_file;
/
GRANT EXECUTE ON write_to_file TO scott
/
I then connect as SCOTT, create a formatting function that lowercases the text, and run write_to_file as shown in Listing 3.
Code Listing 3: Create format_line (and call write_to_file)
CREATE OR REPLACE FUNCTION format_line (line_in IN VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
RETURN LOWER (line_in);
END format_line;
/

DECLARE
l_lines DBMS_SQL.varchar2s;
BEGIN
l_lines (1) := 'steven feuerstein';
l_lines (2) := 'is obsessed with PL/SQL.';
hr.write_to_file ('TEMP', 'myfile.txt', l_lines);
END;
/
Unfortunately, I find the following text in the myfile.txt output file:
STEVEN FEUERSTEIN
IS OBSESSED WITH PL/SQL.
So it seems certain that you cannot take advantage of invoker rights with program calls, correct? Well, there is a way after all: dynamic SQL.
I rewrite the original write_to_file program in HR as shown in Listing 4.
Code Listing 4: Create updated write_to_file
CREATE OR REPLACE PROCEDURE write_to_file (
dir_in IN VARCHAR2
, file_name_in IN VARCHAR2
, lines_in IN DBMS_SQL.varchar2s
)
AUTHID CURRENT_USER
IS
l_file UTL_FILE.file_type;
l_newline VARCHAR2 (32767);
BEGIN
l_file :=
UTL_FILE.fopen (LOCATION => dir_in
, filename => file_name_in
, open_mode => 'W'
, max_linesize => 32767
);

FOR indx IN 1 .. lines_in.COUNT
LOOP
EXECUTE IMMEDIATE
'BEGIN :new_line := format_line (:old_line); END;'
USING OUT l_newline, IN lines_in (indx);

UTL_FILE.put_line (l_file, l_newline);
END LOOP;

UTL_FILE.fclose (l_file);
END write_to_file;
/
Note that I have moved my invocation of the format_line function so that it is inside an EXECUTE IMMEDIATE statement. That is, I call the function from within a dynamic PL/SQL block. And now I connect to SCOTT and call write_to_file:
DECLARE
l_lines DBMS_SQL.varchar2s;
BEGIN
l_lines (1) := 'steven feuerstein';
l_lines (2) := 'is obsessed with PL/SQL.';
hr.write_to_file ('TEMP',
'myfile.txt',
l_lines);
END;
/
I then see the following contents in the myfile.txt output file:
steven feuerstein
is obsessed with pl/sql.
Hey, it worked! How is this possible?
With the invocation of format_line moved inside the dynamically executed PL/SQL block, the call now occurs within a statement to which invoker rights applies: EXECUTE IMMEDIATE. The block therefore runs under SCOTT's authority, so SCOTT's version of format_line is invoked.
So there you have it: invoker rights can apply to the execution call stack when you use dynamic PL/SQL.
Please note that dynamic execution of a PL/SQL subprogram is slower than a static invocation. For long-running programs, that overhead will be insignificant, but you might notice it for programs that finish very quickly.
I compared the performance of statically and dynamically running a program that called the USER function and deposited that value in a local variable. Running the program 100,000 times through a static invocation took 2.83 seconds. Running it dynamically took 6.66 seconds.
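A sketch of the kind of timing comparison described above, using DBMS_UTILITY.GET_TIME (which returns elapsed time in hundredths of a second); the iteration count matches the text, but the exact harness I used is not shown in the article, so treat this as illustrative:

```sql
DECLARE
   l_user    VARCHAR2 (30);
   l_start   PLS_INTEGER;
BEGIN
   -- Static invocation of USER
   l_start := DBMS_UTILITY.get_time;
   FOR indx IN 1 .. 100000
   LOOP
      l_user := USER;
   END LOOP;
   DBMS_OUTPUT.put_line (
      'Static:  ' || (DBMS_UTILITY.get_time - l_start) / 100 || ' seconds');

   -- The same work performed through dynamic PL/SQL
   l_start := DBMS_UTILITY.get_time;
   FOR indx IN 1 .. 100000
   LOOP
      EXECUTE IMMEDIATE 'BEGIN :u := USER; END;' USING OUT l_user;
   END LOOP;
   DBMS_OUTPUT.put_line (
      'Dynamic: ' || (DBMS_UTILITY.get_time - l_start) / 100 || ' seconds');
END;
/
```

The dynamic version pays for a soft parse and bind on each execution, which is why the gap shows up only when the called program itself does very little work.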
I picked up this technique from Alex Nuijten of AMIS (technology.amis.nl/blog), who recently attended the Oracle PL/SQL Programming 2007 conference in San Mateo, California (www.odtugopp07.com) and accepted the first Oracle Development Tools User Group PL/SQL Innovation and Community Award on behalf of the AMIS Oracle group.
One final note: an alternative way to achieve the same effect (runtime selection of the program that is to be executed) is with dynamic polymorphism. This technique is implemented by object types in PL/SQL—specifically, by setting up an object type hierarchy with the UNDER keyword. For more details on object types and dynamic polymorphism in PL/SQL, check out Chapter 12, "Using PL/SQL With Object Types" in the Oracle Database PL/SQL User's Guide and Reference.
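To make that alternative concrete, here is a small sketch of dynamic polymorphism with object types (the type and method names are my own, chosen to echo the format_line example): a subtype created UNDER a supertype overrides a member function, and Oracle selects the override at runtime based on the actual type of the value, not the declared type of the variable.

```sql
-- Supertype with a default formatting behavior (uppercase)
CREATE TYPE line_formatter_t AS OBJECT (
   dummy NUMBER,
   MEMBER FUNCTION format_line (line_in IN VARCHAR2) RETURN VARCHAR2
) NOT FINAL;
/
CREATE TYPE BODY line_formatter_t AS
   MEMBER FUNCTION format_line (line_in IN VARCHAR2) RETURN VARCHAR2
   IS
   BEGIN
      RETURN UPPER (line_in);
   END;
END;
/
-- Subtype that overrides the behavior (lowercase)
CREATE TYPE lower_formatter_t UNDER line_formatter_t (
   OVERRIDING MEMBER FUNCTION format_line (line_in IN VARCHAR2)
      RETURN VARCHAR2
);
/
CREATE TYPE BODY lower_formatter_t AS
   OVERRIDING MEMBER FUNCTION format_line (line_in IN VARCHAR2)
      RETURN VARCHAR2
   IS
   BEGIN
      RETURN LOWER (line_in);
   END;
END;
/
DECLARE
   -- Declared as the supertype, but holding a subtype instance
   l_formatter   line_formatter_t := lower_formatter_t (1);
BEGIN
   -- Dispatches to the subtype's override: lowercase output
   DBMS_OUTPUT.put_line (l_formatter.format_line ('Hello'));
END;
/
```

A write_to_file built this way would accept a line_formatter_t parameter and call its format_line method, letting each caller pass in whatever formatting behavior it wants, with no dynamic SQL at all.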