Sizing of Locally Managed Tablespaces

Locally managed tablespaces are used more and more. They offer many benefits, so why not use this feature? Some thought is needed, however, when you decide to use uniform extent allocation. With the uniform method, you specify an extent size when you create the tablespace, and all extents for all objects created within that tablespace will be that size. The uniform method also provides an enforcement mechanism, because you cannot override the uniform extent size of a locally managed tablespace when you create a schema object such as a table or an index.

Calculate the Size of Tablespaces

The goal is to allocate only as much disk space as is really needed and used. With uniform extent allocation you can calculate, or at least estimate, the number of extents you want to allocate. Gaps and unused disk space within the tablespace should be avoided. Let's assume that we create a tablespace with a uniform extent size of 1 MByte and 10 extents. Remember that a locally managed tablespace uses another 64 KBytes for the header bitmap:

10 * 1 * 1024K + 64K = 10304K

Note that all calculations are made in KBytes and that your chosen extent size must be a multiple of your defined block size. The following statement creates this locally managed tablespace with a uniform extent size of 1 MByte:

CREATE TABLESPACE uni_test
   DATAFILE ... SIZE 10304K
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

Check the Size and Number of Extents

Now every object created within the newly created tablespace gets its uniform extent size of 1 MByte:

CREATE TABLE tab_1 ( ... ) TABLESPACE uni_test;
CREATE TABLE tab_2 ( ... ) TABLESPACE uni_test STORAGE (INITIAL 100K);
CREATE TABLE tab_3 ( ... ) TABLESPACE uni_test STORAGE (INITIAL 2M);

If you include a STORAGE clause when you create tables or indexes, Oracle will allocate as many extents as you request. Table TAB_1 will be allocated with one extent, and so will table TAB_2, because it needs at least 100 KBytes. Table TAB_3 will be created with two extents; this could also be done by defining an INITIAL value of 2 MBytes.
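Under the assumptions above, the complete statements might look like the following sketch. The datafile path and the column definitions are invented for illustration; the sizes follow the calculation 10 * 1024K + 64K = 10304K.

```sql
-- Sketch: uniform locally managed tablespace sized for 10 extents of 1M
-- plus the 64K header bitmap (file name and columns are assumptions)
CREATE TABLESPACE uni_test
   DATAFILE '/u01/oradata/DB1/uni_test01.dbf' SIZE 10304K
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- One uniform extent of 1M, regardless of how small the table is
CREATE TABLE tab_1 (id NUMBER) TABLESPACE uni_test;

-- INITIAL 100K is rounded up to one full 1M uniform extent
CREATE TABLE tab_2 (id NUMBER) TABLESPACE uni_test
   STORAGE (INITIAL 100K);

-- INITIAL 2M requests two uniform extents of 1M each
CREATE TABLE tab_3 (id NUMBER) TABLESPACE uni_test
   STORAGE (INITIAL 2M);
```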
The allocated blocks and extents can be verified using the view DBA_SEGMENTS:

SELECT segment_name, segment_type, blocks, extents
  FROM dba_segments
 WHERE tablespace_name = 'UNI_TEST';

SEGMENT_NAME   SEGMENT_TYPE   BLOCKS   EXTENTS
------------   ------------   ------   -------
TAB_1          TABLE             256         1
TAB_2          TABLE             256         1
TAB_3          TABLE             512         2

The free space in the tablespace UNI_TEST can be verified using the view DBA_FREE_SPACE:

SELECT tablespace_name, bytes, blocks
  FROM dba_free_space
 WHERE tablespace_name = 'UNI_TEST';

TABLESPACE_NAME   BYTES     BLOCKS
---------------   -------   ------
UNI_TEST          6291456     1536

That means 1536 blocks are still available in the tablespace UNI_TEST. How many extents are these blocks? This can be calculated by multiplying the number of available blocks by the block size and dividing by the extent size:

1536 * 4K / 1024K = 6 extents

That fits with our calculations and verifications: 4 extents are already used and another 6 are free.

Check the File Size

If you check the physical size of the file used for the tablespace UNI_TEST, you will be surprised: instead of the calculated 10304 KBytes (10'551'296 bytes) you will find a disk file size of 10'555'392 bytes. Oracle allocates one additional block which cannot be used for object allocation. Some Oracle tools, such as the Tablespace Manager, show the total number of blocks according to the disk file size. In our example this is 2577 blocks, but usable are only 2576 blocks minus 64 KBytes (for the header bitmap).

Summary

Keep the following rules in mind when sizing tablespaces:

- Choose the uniform extent size as a multiple of the database block size.
- Add 64 KBytes for the header bitmap when calculating the tablespace size.
- Expect the physical datafile to be one block larger than the calculated size.
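The conversion from free blocks to free uniform extents can also be computed directly in SQL. This is a sketch assuming a 4K block size and 1M uniform extents, as in the example above:

```sql
SELECT tablespace_name,
       SUM(blocks)                        AS free_blocks,
       SUM(blocks) * 4096 / (1024 * 1024) AS free_uniform_extents
  FROM dba_free_space
 WHERE tablespace_name = 'UNI_TEST'
 GROUP BY tablespace_name;
```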
Oracle8i: How to migrate LONG RAW to BLOB

In Oracle8i, BLOBs (Binary Large Objects) can be used instead of LONG RAWs to store unformatted binary data such as documents, images, audio and video. Many of the former LONG RAW restrictions no longer apply to the BLOB data type, and up to 4 GB can be stored. This tip shows how to migrate LONG RAW columns to BLOBs. It is worthwhile to create a separate tablespace for the larger LOB contents to gain performance. The tables containing LOB columns can be stored together with other tables in one tablespace (called tab in this sample), while the LOB columns reference their data in a separate tablespace (called btab here).

CREATE TABLESPACE btab ...

A new table must be created that contains the new BLOB column. Even though it is possible to add a BLOB column to an existing table, we cannot migrate old LONG RAW data into it: the required SQL function TO_LOB can be used in SELECT subqueries of INSERT statements only. Let's assume the old table docs looks like this ...

id NUMBER(10) NOT NULL
...

... we create the new one:

CREATE TABLE newdocs ( ...

LOBs in general do not use rollback segments. To maintain read consistency, Oracle creates new LOB page versions every time a LOB changes. PCTVERSION is the percentage of all used LOB data space that can be occupied by old versions of LOB data pages. As soon as old versions of LOB data pages start to occupy more than the PCTVERSION amount of used LOB space, Oracle tries to reclaim the old versions and reuse them. In other words, PCTVERSION is the percentage of used LOB data blocks that is available for versioning old LOB data. Set PCTVERSION to the percentage of LOBs that are occasionally updated. If LOBs are inserted once and afterwards usually only read, 0% can be used.

If CACHE is specified, Oracle places LOB pages in the buffer cache for faster access. NOCACHE can be used if the stored LOBs are written occasionally and read infrequently. CACHE READS is good for busy read operations with infrequent writes. Set CHUNK to the amount of LOB data that will be accessed at one time; this reduces network round-trip overhead. The INITIAL and NEXT storage parameters must be greater than CHUNK * DB_BLOCK_SIZE. Use larger CHUNKs if possible. The default setting ENABLE STORAGE IN ROW stores LOBs smaller than 4 KB within the table; larger LOBs are automatically moved out of the row. This is the recommended setting. DISABLE STORAGE IN ROW can be used to store all data outside the rows. Many small LOBs within a table can decrease the performance of table operations such as full table scans or multi-row accesses. Note that CHUNK and ENABLE/DISABLE STORAGE IN ROW cannot be altered after table creation.

Finally we can use the following SQL statement to migrate the data from the old to the new table:

INSERT INTO newdocs (id, bdata) ...

Copying the data is easy: the SQL function TO_LOB() converts LONG RAW to BLOB. It is also possible to convert LONG to CLOB if required.
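Putting the pieces together, the migration might look like the following sketch. The old table is assumed to be docs (id NUMBER(10) NOT NULL, bdata LONG RAW), and the tablespaces tab and btab are assumed to exist; the storage values are examples, not recommendations.

```sql
CREATE TABLE newdocs (
   id    NUMBER(10) NOT NULL,
   bdata BLOB
)
TABLESPACE tab
LOB (bdata) STORE AS (
   TABLESPACE btab
   CHUNK 8192               -- bytes accessed per LOB operation (assumed)
   PCTVERSION 0             -- LOBs written once, then read only
   NOCACHE
   ENABLE STORAGE IN ROW    -- LOBs < 4KB stay inside the row
);

-- TO_LOB may only appear in the subquery of an INSERT
INSERT INTO newdocs (id, bdata)
   SELECT id, TO_LOB(bdata) FROM docs;
```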
The main point of the whole data migration is to choose good storage parameter settings, especially if a large number of LOBs, or LOBs of large size, need to be stored.

Net8 access through a firewall with port forwarding using SSH

One option for secure communication between the Net8 client and server is to tunnel the communication inside the Secure Shell protocol. Conceptually, it works like this. First, you install an SSH client on the local machine where you run your Net8 client. You use the SSH client to establish an SSH connection to the remote host where the Net8 server is running. You also use the SSH client to establish a "listen" on a local port for Net8 requests. Here's the cool part: when you fire up your Net8 client, it connects to the Net8 port on localhost - your machine - instead of connecting to port 1521 on a remote server machine. The SSH client then forwards everything it receives on the local Net8 port through the SSH session, or tunnel, to the remote SSH daemon, which then forwards the data to the Net8 port on the remote host. How does the SSH daemon on the receiving end know what to do with all this Net8 information coming at it? Well, the information is part of the port-forwarding arrangement you gave the daemon when you first fired up the SSH session. For example, you'd invoke SSH from your Unix client machine like this:

$ ssh -f -L localport:remotehost:remoteport sshhost tail -f /dev/null

The command must be invoked as root because root privilege is required to set up port forwarding. The -f option tells SSH to run in the background after port forwarding has been established. -L localport:remotehost:remoteport specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. In our example, we use port 5555 on the client and port 1521 on the database server 192.168.121.32. The server port must be whichever port listens for Net8 requests (1521 on most systems).
Depending on the SSH client, you'll either be prompted for your password to log in to the SSH server (194.75.132.34) when issuing the tunneling command, or you'll have to initiate a login manually to establish the session. In all cases, you'll have to use SSH to log in to the remote host before you can use it to "launder" your connection. The entire Net8 port-forwarding scenario is shown in the next figure.

Example

We start by using lsof (list open files), a program that tells you which open files and network connections belong to which processes, to check for software listening on local TCP port 5555. There is none. We confirm this by trying to telnet to localhost on port 5555, without success.

$ lsof -i tcp:5555

At this point, we're certain that there's no activity, such as a listen or an open connection, on port 5555 on our local machine. That port is okay to use. Next, we set up the port forwarding by issuing an SSH command. Remember that you have to be root to set up port forwarding:

$ su -

The tail -f /dev/null that we tacked on the end of the SSH command is just a low-overhead command to keep the session open. We didn't want to keep an actual shell session open and running in the background when we didn't need it, so we used the tail command instead. You can verify with ps -ef that the command is now running in the background; you now have a permanent Net8 connection through two firewalls -- cool, isn't it? Next you have to set up your TNSNAMES.ORA configuration file, then check the connection with tnsping and finally connect with sqlplus.

ORA1.WORLD = ...

$ tnsping ORA1

Automatically Calculating Percentages in Queries

Starting with Release 7.1 of Oracle, users have had access to a feature called an inline view: a view within a query. Using this feature, you can easily accomplish this task. Example: show the percentage of salaries for each department. Every row in the report must have access to the total sum of sal.
You can simply divide sum(sal) by that total, and you'll have a number that represents the percentage of the total.

column percentage format 99.9
SELECT deptno, SUM(sal),
       SUM(sal) / tot * 100 AS percentage
  FROM emp, (SELECT SUM(sal) tot FROM emp)
 GROUP BY deptno, tot;

DEPTNO   SUM(SAL)   PERCENTAGE

With Oracle8i Release 2 (8.1.6 and higher), you can calculate percentages by using the new analytic functions as well. The query using an analytic function might look like this:

column percentage format 99.9
SELECT deptno, SUM(sal),
       RATIO_TO_REPORT(SUM(sal)) OVER () * 100 AS percentage
  FROM emp
 GROUP BY deptno;

DEPTNO   SUM(SAL)   PERCENTAGE

The query produces the same answer—but it does so more efficiently, because it does not have to make two passes over the data to arrive at the answer. Because the analytic functions are built in, queries that use them will find the answer more rapidly than the "pure" SQL-based approach.

Show Table and System Privileges

It is normally difficult to list all privileges and roles assigned to a specific user in one select, since a privilege can be assigned to a role, which can be assigned to another role, which in turn can be assigned to another role, which can be granted to a user. The following view lists all of your roles, all of the roles granted to those roles, and so on:

CREATE VIEW user_role_hierarchy ...

GRANT SELECT ON user_role_hierarchy TO PUBLIC;

That view is based on the existing data dictionary view, and if you create the new view as SYS, it will show the currently logged-in users their roles (and all of the roles granted to those roles, and so on). You'll find this view to be very fast. To get all of your table privileges, you should do this:

SELECT DISTINCT privilege, owner, table_name ...

And to get all of your system privileges, you should do this:

SELECT DISTINCT privilege ...

How to switch Oracle into Single User Mode?

During a migration it may be important that nobody can connect to the database. There are two possibilities to achieve this.
Exclusive Table Locks (X)

An exclusive table lock is the most restrictive mode of table lock, allowing the transaction that holds the lock exclusive write access to the table. An exclusive table lock is acquired for a table as follows:

$ sqlplus scott/tiger
SQL> GRANT ALL ON emp TO PUBLIC;
SQL> LOCK TABLE emp IN EXCLUSIVE MODE;

$ sqlplus test/test
SQL> SELECT * FROM scott.emp;
(This works)

Permitted Operations: Only one transaction can obtain an exclusive table lock for a table. An exclusive table lock permits other transactions only to query the table.

Prohibited Operations: An exclusive table lock held by a transaction prohibits other transactions from performing any type of DML statement or placing any type of lock on the table.

Lock Duration: Oracle releases all locks acquired by the statements within a transaction when you either commit or roll back the transaction. Oracle also releases locks acquired after a savepoint when rolling back to the savepoint. However, only transactions not waiting for the previously locked resources can acquire locks on the now available resources; waiting transactions continue to wait until the original transaction commits or rolls back completely.

Oracle in Single User Mode

You can use the RESTRICTED SESSION system privilege to switch the database into single user mode for migrations.
As a first step (before the migration), shut down the database and start it again to be sure that all users are disconnected. Then revoke the RESTRICTED SESSION system privilege from most users; this system privilege is normally not needed.

$ svrmgrl
svrmgr> CONNECT internal
svrmgr> SHUTDOWN IMMEDIATE
svrmgr> STARTUP
svrmgr> ALTER SYSTEM ENABLE RESTRICTED SESSION;

As user SYS you can now perform the migration. If an ordinary user tries to connect, he will get the following error message:

sqlplus scott/tiger
ERROR: ORA-01035: ORACLE only available to users with RESTRICTED SESSION privilege

After the migration, do not forget to disable the RESTRICTED SESSION system privilege:

svrmgr> ALTER SYSTEM DISABLE RESTRICTED SESSION;

How to store data DES encrypted in Oracle?

This tip is from Oracle Magazine; it shows the usage of DBMS_OBFUSCATION_TOOLKIT, the DES encryption package. This package ships with Oracle8i Release 2 and later and provides first-time field-level encryption in the database. The trick to using this package is to make sure everything is a multiple of eight: both the key and the input data must have a length divisible by eight (the key must be exactly 8 bytes long).

Example

CREATE OR REPLACE PROCEDURE obfuscation_demo AS ...

SQL> exec obfuscation_demo
l_string before encrypt: hello world
PL/SQL procedure successfully completed.

You must protect and preserve your "magickey"—the 8 bytes of data used to encrypt and decrypt the data. If it becomes compromised, your data is vulnerable.

Dynamic SQL queries that return multirow result sets

The OPEN FOR clause allows you to build dynamic SQL with variable table or column names that returns multirow result sets. Let's take a closer look at OPEN FOR and multirow queries with a concrete example. Create the following tables:

SQL> create table emp_10 as select * from emp where deptno = 10;

Now look at the following code:

CREATE OR REPLACE PROCEDURE show_emp_any ( ...

The first step in a dynamic multirow query is to define the cursor variable that will point to the result set in the System Global Area. We declare the cursor variable as an instance of a REF CURSOR type, so we must consider whether to make it strong or weak. A REF CURSOR is strong if it is strongly typed, which means the TYPE statement includes a RETURN clause defining what structure is allowable (number of items and datatypes) for data fetched through a cursor variable based on this TYPE. With dynamic SQL, however, you are not allowed to use strong REF CURSORs: at compile time, the query associated with the REF CURSOR is just a string, so the PL/SQL engine cannot validate that the query's SELECT list is consistent with the record structure in the REF CURSOR's RETURN clause. Once that is done, we declare a record that we can use to retrieve individual rows from any of the three tables (all three tables have the same structure, so we can %ROWTYPE the record against any of them). We must use dynamic SQL, since we construct the name of the table each time we run the program. But dynamic SQL doesn't require any more special code than what we have just described. We can use the familiar explicit-cursor syntax unchanged for the cursor-variable-based code, as the following calls demonstrate:

SQL> exec show_emp_any('10');
SQL> exec show_emp_any('20');
SQL> exec show_emp_any('30');

Formatted SELECT * FROM 'table' Statement Results

This tip comes from Anirban Das, a Senior IA from Vernon Hills, IL. We have improved the SQL statement in some points, so WHERE clauses will be executed correctly. In SQL*Plus, it is often difficult to read the results of a SELECT * FROM 'table' statement, since the output can wrap across multiple lines. For example, the normal output of SELECT * FROM dba_users would be:
USERNAME   USER_ID   PASSWORD   ...

The results are not clearly readable. Using this SQL script, the output would be:

ACCOUNT_STATUS = OPEN [4]

This formats the results to one row per screen, with the length of the data in square brackets [ ]. It also allows an optional WHERE clause and ORDER BY. Click here for the improved script.

Switch to another Oracle User and back

Sometimes the DBA needs to log into a user's account to make a change, such as granting a privilege on the user's table to another user, but may not know the user's password; or the DBA may need to make changes to a set of users from a script, without including their passwords in the script itself. Oracle provides an undocumented "identified by values" clause in the "alter user" command that can be used for this purpose. Since the dba_users view contains the encoded password for each user, this value can be used to generate an "alter user" command that resets the user's password back to its original value. Then, from user system or another DBA user, you can alter the user's password to a known value (such as "whatever"), log into the user's account using "connect userid/whatever", make the changes that are needed, connect back to the system account, and run the generated "alter user" command to put the original password back. The following SQL generates a password change script (setpw.sql) that sets all users to a known password ("whatever"), and another script (resetpw.sql) that sets all users back to their original passwords. This would be used as part of another script to generate the password change scripts, run setpw.sql, log in and make the changes that are needed, and immediately run resetpw.sql to put the passwords back. Note that the users won't be able to log in during that time, since their passwords are not valid while you are running the script.
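A sketch of such a generator, assuming SQL*Plus and a DBA account (the SQL*Plus formatting settings are illustrative):

```sql
set pagesize 0 feedback off verify off heading off

spool setpw.sql
SELECT 'alter user ' || username || ' identified by whatever;'
  FROM dba_users;
spool off

spool resetpw.sql
SELECT 'alter user ' || username ||
       ' identified by values ''' || password || ''';'
  FROM dba_users;
spool off
```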
spool setpw.sql
...

Generated script setpw.sql:

alter user SYS identified by whatever;
...

Generated script resetpw.sql:

alter user SYS identified by values '5638228DAF52805F';
...

Access Internet Web pages from SQL or PL/SQL

Using the package UTL_HTTP, you can access any HTML page from SQL:

set pages 50000
SELECT utl_http.request('http://www.akadia.com/') FROM dual;

What is a JDBC KPRB driver and what is it used for?

What is JDBC and what is it used for? JDBC is a set of classes and interfaces written in Java that allows other Java programs to send SQL statements to a relational database management system. Oracle provides three categories of JDBC drivers:
Oracle's JDBC Thin driver uses Java sockets to connect directly to Oracle. It provides its own TCP/IP version of Oracle's Net8 (SQL*Net) protocol. Because it is 100% Java, this driver is platform independent and can also run from a Web browser (applets).

Oracle's JDBC OCI driver uses Oracle OCI (Oracle Call Interface) to interact with an Oracle database. You must use a JDBC OCI driver appropriate to your Oracle client installation. The OCI driver works through either SQL*Net or Net8. Either of these client versions can access Oracle7 or Oracle8 servers. The JDBC OCI drivers allow you to call the OCI directly from Java, thereby providing a high degree of compatibility with a specific version of Oracle. Because they use native methods, they are platform specific.

Oracle's JDBC KPRB driver is mainly used for writing Java stored procedures, triggers and database JSPs. It uses the default/current database session and thus requires no additional database username, password or URL.

All three drivers support the same syntax and APIs. Oracle needs three drivers to support different deployment options. At the source-code level, they differ only in the way you connect to the database. Remember, you must use a JDBC version that matches the version of your Java Development Kit.

How does one connect with the JDBC Thin Driver?

The JDBC thin driver provides the only way to access Oracle from the Web (applets). It is smaller and faster than the OCI drivers, and doesn't require a pre-installed version of the JDBC drivers.

import java.sql.*;
...

How does one connect with the JDBC OCI Driver?

One must have Net8 (SQL*Net) installed and working before attempting to use one of the OCI drivers.

import java.sql.*;
...

How does one connect with the JDBC KPRB Driver?

One can obtain a handle to the default or current connection (KPRB driver) by calling the OracleDriver.defaultConnection() method. Please note that you do not need to specify a database URL, username or password, as you are already connected to a database session. Remember not to close the default connection: closing the default connection might throw an exception in future releases of Oracle.

import java.sql.*;
...

How does one enable dead connection detection?

Dead database connections can be detected and killed by SQL*Net if you specify the SQLNET.EXPIRE_TIME=n parameter in your SQLNET.ORA file (usually in $TNS_ADMIN).
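For example, the server-side SQLNET.ORA entry might look like this sketch (the value 10 is an example, in minutes):

```
# SQLNET.ORA on the database server
SQLNET.EXPIRE_TIME = 10
```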
This parameter instructs SQL*Net to send a probe through the network to the client every n minutes; if the client doesn't respond, the connection is killed. NOTE: This parameter is only useful on the database server side; specifying it on a client workstation has no effect.

How to connect to a system regardless of machine failure?

You can place multiple address entries for a single connection alias in the TNSNAMES.ORA file. This means that you can connect to a database even if some kind of physical failover has occurred. Look at the following example:
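The entry below is a sketch (host names and SID are placeholders); Net8 tries the second ADDRESS when the first host cannot be reached:

```
ORA1.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = ORA1))
  )
```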
The mystery of Stored Outlines

Where to use Stored Outlines?

The SQL statements in a hybrid system need an execution plan that is appropriately tailored to the type of system configuration you are using. When running the statements during the day in the OLTP configuration, they need to use a different execution plan than when running at night in the DSS configuration. This can be achieved by creating two different stored outlines for each SQL statement. The outlines are grouped by categories: you can name one category OLTP and the other DSS. Each category contains the different execution paths.

How to create a Stored Outline?

To ensure that the same tuned execution plan is used every time a particular query is recompiled, you must create a stored outline for it. The stored outline is created with the CREATE OUTLINE command. When creating an individual stored outline for your SQL statement using the CREATE OUTLINE command, you give the outline a name which is later used to identify which outline to use during the statement's execution.

CREATE OR REPLACE OUTLINE emp_hiredate ...

Outline created.

How to use Stored Outlines?

The text of the SQL statement being executed must match the statement in the stored outline. This is done using the method for comparing cursors in the shared pool. Stored outlines rely on hints used by the optimizer for its execution plans, and they use the cost-based optimizer. The query text must match the text of the stored outline SQL statement exactly. The USE_STORED_OUTLINES parameter may be set at the instance or session level, but is of no use if the stored outline has not yet been created. The parameter identifies a stored outline to be used when executing a SQL statement: if the parameter is set to TRUE, DEFAULT category outlines are used; if it is set to a category name, that category's outlines are used. The parameter is set using the ALTER SESSION or ALTER SYSTEM command.
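The OLTP/DSS scenario above might be sketched as follows (statement text and category names are illustrative):

```sql
-- One outline per configuration category for the same statement
CREATE OR REPLACE OUTLINE emp_hiredate
   FOR CATEGORY oltp
   ON SELECT empno, ename FROM emp WHERE hiredate > SYSDATE - 30;

-- Use the OLTP category during the day ...
ALTER SESSION SET USE_STORED_OUTLINES = oltp;

-- ... and the DSS category at night
ALTER SESSION SET USE_STORED_OUTLINES = dss;

-- Check which outlines exist and whether they have been used
SELECT name, category, used FROM user_outlines;
```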
The ALTER OUTLINE SQL command can be used to change the category of an outline, rename an outline, or rebuild an outline.

ALTER SESSION SET USE_STORED_OUTLINES = TRUE;

Where can I find the created Stored Outlines?

All stored outlines are stored in the OUTLN schema and can be transferred easily using the Export/Import utility. The OUTLN schema is automatically created with the database. The OUTLN table OL$ contains the outline name, statement text, category, and creation date. The OUTLN table OL$HINTS contains the hints for the outlines in the OL$ table. This information can also be viewed in the DBA_OUTLINES and DBA_OUTLINE_HINTS data dictionary views.

SELECT name, used FROM user_outlines;

Top-N SQL queries with Inline Views

Top-N queries: Suppose you want to retrieve and sort information on the 4 most recently hired employees in a very efficient way. This can be achieved using an inline view combined with ORDER BY and ROWNUM.

Inline Views: An inline view, which is a feature of a Top-N SQL query, is a subquery. This type of subquery differs from a regular subquery by containing an ORDER BY clause, which is not allowed in a regular subquery. The ROWNUM condition is then used in the outer query to complete the Top-N SQL query.

Example: We need to retrieve and sort information on the 4 most recently hired employees from the following list.

SELECT empno, ename, TO_CHAR(hiredate,'DD.MM.YYYY') "hiredate"
  FROM emp;

EMPNO   ENAME   hiredate

The first approach is to use the following query, which does not select the Top-4 rows!

SELECT empno, ename, hiredate
  FROM emp
 WHERE ROWNUM < 5
 ORDER BY hiredate DESC;

EMPNO   ENAME   HIREDATE

The solution is to use an inline view with an ORDER BY, and a ROWNUM condition in the outer query to complete the Top-N SQL query.

SELECT *
  FROM (SELECT empno, ename, hiredate
          FROM emp
         ORDER BY hiredate DESC)
 WHERE ROWNUM < 5;

EMPNO   ENAME   HIREDATE

The purpose of this query is to retrieve and sort information on the 4 most recently hired employees. This Top-N SQL query is more efficient than a regular query because Oracle keeps a maximum of only 5 rows as the data is retrieved from the table, avoiding sorting all of the rows in the table at once. The WHERE clause contains 'ROWNUM < 5', which prevents sorting on more than 5 rows at a time -- cool, isn't it?

iFS (Internet Filesystem) Test Installation

Internet portals are currently implementing document management systems as part of the services offered on their portals. We were asked to review Oracle Internet File System (iFS) in order to enable iFS for the portal. This article is based on iFS version 1.0.8.3.

iFS Overview

iFS is, as its name suggests, a file system for the Internet. It allows users to store content at a central location through different interfaces such as a web browser, the Microsoft Windows Explorer or an FTP client. All content is stored in an Oracle8i Release 2 or 3 database, completely transparently for the end user. The objects stored in iFS can be organised within a virtual folder structure and protected by access control lists (ACLs) that allow fine-grained access control. Versioning of the content is also supported. Using Oracle interMedia (optional), it is possible to do full-text searches through the content stored in iFS. For example, one search could find all email messages, Microsoft Word documents and Microsoft Excel spreadsheets that contain the keyword "Akadia".

Protocol Servers

Protocol servers are used to interact with iFS. You can connect to them using standard clients such as Eudora (email) or the Microsoft Windows Explorer; iFS clients that ship with the product, such as the web interface; or a custom-written client. The following main protocol servers are shipped with the product:
iFS Services

iFS can be customised in many different ways. The following describes the "points of access".

Customisation

The iFS document hierarchy will not fit all application needs. Using custom classes, the hierarchy can be extended, and custom attributes can be added to existing document classes using XML.
Requirements

iFS is quite resource intensive. The requirements for both software and hardware are described below. Oracle recommends setting up a two-tier configuration with a database server and an application server. At the time of writing, iFS is available on Sun Solaris (version 2.6 with patch 107733 or newer) and Microsoft Windows NT 4.0. The required database release is Oracle8i Release 2 (8.1.6); iFS needs the JDBC OCI driver for Java 1.1 to connect to it. For a two-tier configuration, the Oracle Administrator Client is required, too. For the evaluation, the complete software was installed and configured at the offices of Akadia AG. The version used is 1.0.8.3 on Sun Solaris. Due to limited system availability, the evaluation was carried out in a single-tier environment: both the database and the application server were installed on the same machine. Neither the CPU nor the memory minimum requirements were met.

Installation

First of all, a new database of version 8.1.6 using the NLS character set US7ASCII had to be created. The installation of JServer (required for interMedia) was very tedious, as the memory requirements are extremely high. After installing interMedia, iFS 1.0.8.0 and the patchset 1.0.8.3 were installed. This is a very easy and straightforward task.

Shared Pool and Java Pool in INIT.ORA

parallel_automatic_tuning = false
...

Installation Script

#!/bin/ksh

sqlplus << EOF
### Setup a database for running Java and the ORB
### Initializes Java library needed by PL/SQL
### Script used to load AQ/JMS jar files into the database
### Load Java RepAPI server classes and publish 'repapi' obj
...
disconnect
EOF

### Oracle81x interMedia Installation of ORDSYS
sqlplus << EOF
...
EOF

sqlplus << EOF
...
EOF

Configuration

iFS is configured using a graphical configuration assistant or by editing the text configuration files. Only minimal configuration is required to get iFS running.

Tests

The HTTP protocol server and the shipped web client have been tested extensively. The application is very robust and works well even with large files (note that only the Sun Java Web Server was tested - it does not allow for scalability). A 20MB upload lasted approximately 2 minutes (via LAN, database not tuned). The FTP protocol server also worked without problems, but was not tested extensively. Other interfaces were out of the scope of these tests.

The Power of CASE Expressions in SQL Queries

One of the most powerful SQL features often goes unnoticed: CASE expressions in SQL queries. An expression is a combination of one or more values, operators, and SQL functions that evaluates to a value. An expression generally assumes the datatype of its components.

CASE Expressions

CASE expressions let you use IF ... THEN ... ELSE logic in SQL statements without having to invoke procedures. The syntax is:

CASE WHEN condition THEN expr1
     [WHEN condition THEN expr1] ...
     [ELSE expr2]
END

Oracle searches for the first WHEN ... THEN pair for which condition is true.
At least one occurrence of expr1 or expr2 must be non-null. Note: The maximum number of arguments in a CASE expression is 255, and each WHEN ... THEN pair counts as two arguments. To avoid exceeding the limit of 128 choices, you can nest CASE expressions; that is, expr1 can itself be a CASE expression.

Simple Example

The following statement finds the average salary of all employees in the EMP table. If an employee's salary is less than $2000, the CASE expression uses $2000 instead.

SELECT AVG(CASE WHEN e.sal > 2000 THEN e.sal ELSE 2000 END)
  FROM emp e;

More complex Example with a Subquery in the CASE Expression

CREATE OR REPLACE VIEW v_user AS ...

In this example, we use a subquery in the CASE expression block. If the subquery finds an entry, the CASE expression evaluates to 1, otherwise to 0, and the result is stored in userstate. We think that such constructs can be very powerful - without having to program PL/SQL.

Create an Index Online

If you must create an index on a large and busy table that cannot suffer any downtime, you can build the index without interrupting the availability of the table. The command ...

CREATE INDEX index-name ON table-name (column-name) ONLINE;

... will create the index without interrupting the availability of the table. Note that NO downtime is experienced when rebuilding an index online: all DML operations may continue without interruption. This operation works on partitioned, nonpartitioned, and index-organized tables with B*-tree indexes. Locks are briefly placed on the table during the prepare phase to place entries in the data dictionary and to create the index structure. Rebuilding an index online requires double the space of the current index, and the table is offline briefly during the prepare and merge phases. Bitmap, cluster, and secondary indexes on index-organized tables may not be rebuilt online.

Speed up Queries using Function-based Indexes

When an application often selects rows based on an expression, a function-based index (an index on a function) can help to avoid full table scans. A function-based index is based on expressions; expressions may be arithmetic expressions, built-ins, or other user-defined expressions. Let's look at an example:

SQL> SELECT sal FROM emp
  2  WHERE sal + comm * (sal - 100) > 1000;

Without a function-based index, you will notice that Oracle performs a full table scan.
Create the function based index SQL> CREATE INDEX calc_on_emp ON emp (sal + comm * (sal - 100)); Enable the function based index SQL> ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE; Check that the function based index is used SQL> ANALYZE INDEX calc_on_emp COMPUTE STATISTICS; Execution Plan 1 0 TABLE ACCESS (BY INDEX ROWID) OF
'EMP' 2 1 INDEX (RANGE SCAN) OF 'CALC_ON_EMP' Here is a more advanced example We use an application that deletes data from a fact table with records in excess of 20 million. DELETE measurements This process currently takes nearly two to three hours each time it runs. Is there a way we can speed this up? Note that &1 is of fixed length and &2 is of varying length and mixed case. The character in front of &1 is #. Let's say the &1 is after the first # and the &2 is after the second, up to the third. Then I would recommend a function-based index (an index on a function, a capability added in Oracle8i). You can make an index on some function of the last_modified_by column that will allow you to find all of the rows you want to delete via the index, as opposed to the full scan that is occurring now. The index you want to create would take the form: create index t_idx on If I'm off on the placement of the #'s, you just need to adjust the fourth parameter to instr in the above; I'm looking for the first and third ones. The index you create will pick off just the fields you are interested in; in effect, it will index &1#&2 for you. You would then create a view to delete from, so that you can avoid typos in the function. create or replace view t_view This new view will use an index after having the necessary session or system parameters set (the following may be set in your init.ora to make them the default). alter session set QUERY_REWRITE_ENABLED=TRUE; alter session set QUERY_REWRITE_INTEGRITY=TRUSTED; set autotrace on explain Execution Plan SQL> set autotrace off This process can make use of an index, and, if the number of rows to be deleted is a small fraction of the 20 million rows present, will definitely speed up the operation. The UTL_FILE package lets your PL/SQL programs read and write operating system (OS) text files. It provides a restricted version of standard OS stream file input/output (I/O). Server security for PL/SQL file I/O consists of a restriction on the directories that can be accessed. Accessible directories must be specified in the instance parameter initialization file (INIT.ORA).
Specify the accessible directories for the UTL_FILE functions in the initialization file using the UTL_FILE_DIR parameter. For example: UTL_FILE_DIR = <directory name> If the initialization file for the instance contains the line UTL_FILE_DIR = /u01/oracle/db, then the directory /u01/oracle/db is accessible to the FOPEN function. Note that a directory named /u01/oracle/DB would not be accessible on case-sensitive operating systems. The parameter specification UTL_FILE_DIR = * has a special meaning. This entry turns off directory access checking, and it makes any directory accessible to the UTL_FILE functions. The ' * ' option should be used with great caution. Oracle does not recommend that you use this option in production systems. Also, do not include ' . ' (the current directory for UNIX) in the accessible directories list. To ensure security on file systems that enable symbolic links, users must not be allowed WRITE permission to directories accessible by PL/SQL file I/O functions. The symbolic links and PL/SQL file I/O could be used to circumvent normal operating system permission checking and allow users read/write access to directories to which they would not otherwise have access. Here is a simple example CREATE OR REPLACE SQL> set serveroutput on; Using Index Only Tables (IOTs) Index-organized tables are tables with data rows grouped according to the primary key. The attributes of index-organized tables are stored entirely within the physical data structures for the index. Index-organized tables provide fast key-based access to table data for queries involving exact match and range searches. Changes to the table data (such as adding new rows, updating rows, or deleting rows) result only in updating the index structure (because there is no separate table storage area). Also, storage requirements are reduced because key columns are not duplicated in the table and index. The remaining non-key columns are stored in the index structure. 
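The storage layout described above can be shown with a minimal sketch (table and column names here are illustrative, not from the original example):

```sql
CREATE TABLE iot_demo (
  id    NUMBER,
  name  VARCHAR2(30),
  CONSTRAINT iot_demo_pk PRIMARY KEY (id)
)
ORGANIZATION INDEX;   -- rows are stored in the B*-tree of the primary key

-- There is no separate table segment: the data lives entirely in the
-- index structure, grouped and ordered by the primary key.
```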
Index-organized tables are particularly useful when you are using applications that must retrieve data based on a primary key such as intersection tables. Note the following important constraints
Here is an example CREATE TABLE my_intersection ( How to query an "n x m" relation? Consider the following situation: We have employees and projects. An employee can be registered in (work for) zero, one or more projects. For a certain project, zero, one or more employees are allocated. We have a typical many-to-many relationship which is normalized with the intersection entity PROJALLOC. Create the tables with the following content CREATE TABLE employee ( CREATE TABLE project ( CREATE TABLE projalloc ( INSERT INTO employee (emp_id,name) VALUES (1,'ALLEN'); INSERT INTO project (proj_id,name) VALUES (1,'PROJECT 01'); INSERT INTO projalloc (proj_id,emp_id) VALUES (1,1); Now we have the following situation EMP_ID NAME EMP_ID
PROJ_ID PROJ_ID NAME Suppose you want the following result set
The obvious approach with outer joins leads to the error message below: select e.name employee,p.name name ORA-01417: a table may be outer joined to at most one other table $ oerr ora 1417 "a table may be outer joined to at most one other table" Solution Use the UNION construct to query the two special cases (all employees with no project assigned and all projects with no employees assigned). select e.name employee,p.name project
EMPLOYEE
PROJECT Logging using autonomous transactions At times, you may want to commit or roll back some changes to a table independently of a primary transaction's final outcome. For example, while running a transaction, you may want to log error messages to a debug table even if the overall transaction rolls back. Autonomous transactions allow you to do such tasks. An autonomous transaction executes within an autonomous scope. An autonomous scope is a routine you mark with the pragma (compiler directive) AUTONOMOUS_TRANSACTION. The pragma instructs the PL/SQL compiler to mark a routine as autonomous (independent). Example You need to log an error to your database log table. On the other hand, you need to roll back your core transaction because of the error. And you don't want to roll back over other log entries. Create the Log Table: log CREATE TABLE log ( Now create the Logging Procedure write_log, which inserts a row in the log table when an error occurs. You can use the procedure write_log as an error handler in the EXCEPTION part. CREATE OR REPLACE PROCEDURE write_log ( Now create a test procedure with the EXCEPTION handler write_log CREATE OR REPLACE PROCEDURE add_emp IS Test the exception handler SQL> exec add_emp; LOG_CODE LOG_MESG The error is logged in the autonomous transaction, but the main transaction is rolled back. Creating updatable views using INSTEAD OF triggers INSTEAD OF triggers provide a transparent way of modifying views that cannot be modified directly through UPDATE, INSERT, and DELETE statements. These triggers are called INSTEAD OF triggers because, unlike other types of triggers, Oracle fires the trigger instead of executing the triggering statement. The trigger performs UPDATE, INSERT, or DELETE operations directly on the underlying tables. Example We create the VIEW emp_dept, which joins the tables emp and dept. An INSERT is only possible with an INSTEAD OF trigger.
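The steps that follow can be condensed into one hedged sketch, assuming SCOTT's EMP and DEPT demo tables; the trigger body is a minimal illustration, not the original code:

```sql
CREATE OR REPLACE VIEW emp_dept AS
  SELECT e.empno, e.ename, e.job, d.deptno, d.dname
    FROM emp e, dept d
   WHERE e.deptno = d.deptno;

CREATE OR REPLACE TRIGGER trg_emp_dept
INSTEAD OF INSERT ON emp_dept
FOR EACH ROW
BEGIN
  -- Only the underlying EMP table is modified here; the department
  -- :NEW.deptno is assumed to exist in DEPT already.
  INSERT INTO emp (empno, ename, job, deptno)
  VALUES (:NEW.empno, :NEW.ename, :NEW.job, :NEW.deptno);
END;
/
```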
Create the view based on emp and dept CREATE VIEW emp_dept AS Try to insert a row INSERT INTO emp_dept (empno,ename,job,deptno,dname) ORA-01779: cannot modify a column which maps to a non Check if view is updatable -- it is not. SELECT column_name,
COLUMN_NAME
UPD INS DEL Create the INSTEAD OF trigger CREATE OR REPLACE TRIGGER trg_emp_dept Try to insert a row INSERT INTO emp_dept (empno,ename,job,deptno,dname) 1 row created. A subquery in the FROM clause of a SELECT statement is also called an inline view. When a query contains an inline view, the inline is merged into the query. Powerful constructs can be built using inline views as the next example shows: check the tablespace available, free and used space. Example set linesize 100 select a.file_name file_name,
File
Tablespace Allocated
Free Used Counting Backwards to Reset a Sequence Value Consider a sequence named "MY_SEQ" which has been created to generate automatic numbers. Somebody, by mistake, has tried to access the sequence from SQL*Plus and consequently the value has been increased to e.g. 32. Our objective is now to reset the value to 10 so that the next generated number will have the value 11.
This will return 1. It is not always necessary to run the above query to find out the INCREMENT BY value. You can also find it out by looking at the column data for which the sequence is used.
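The counting-backwards trick can be sketched as follows, using a temporary negative INCREMENT BY (the values assume the current sequence value is 32, as above):

```sql
ALTER SEQUENCE my_seq INCREMENT BY -22;   -- 32 - 22 = 10
SELECT my_seq.NEXTVAL FROM dual;          -- consumes one value: returns 10
ALTER SEQUENCE my_seq INCREMENT BY 1;     -- restore the normal increment

-- The next call to my_seq.NEXTVAL now returns 11.
```

Make sure no other session accesses the sequence between these statements, otherwise another value is consumed while the negative increment is in effect.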
Logon DB Trigger to Setup the User Environment Date and time format settings are based on the user's locale and native language. For example, SCOTT needs the default NLS_DATE_FORMAT "MM/DD/YYYY", and ZAHN needs the default "DD-MON-YYYY", etc. The idea is to create an AFTER LOGON trigger at database level, which reads the session user name at logon and sets up NLS_DATE_FORMAT by executing ALTER SESSION dynamically. CREATE OR REPLACE TRIGGER setup_env Identify the Datafile Size Limit There is always some ambiguity regarding the datafile size limit on a given platform and version of the database. Here is a simple way of identifying the maximum size to which a datafile can grow before switching to the next datafile. Suppose you have a locally managed tablespace 'NOL' with AUTOEXTEND on without the MAXEXTEND value specified (Note: don't do this ... it is the reason to read this tip). sqlplus sys/.... CREATE TABLESPACE nol Now let's get the file id of the datafile created for the NOL tablespace select file_id,file_name,autoextensible FILE_ID
FILE_NAME
AUT Querying the filext$ table, we can get the value of the maximum size in database blocks to which the datafile can grow before switching to the next file. Technically this must be the maximum filesize that Oracle can understand for a database datafile. select * from filext$ where file# = 9; FILE#
MAXEXTEND INC Getting the Blocksize of the Database: select value from v$parameter where name = 'db_block_size'; Setting the default would give you a maximum value of 4194302 blocks with increments of 1 block when the datafile extends. On a database of 4K block size this would work out to be: select (4194302*4096)/1024 from dual; This returns 16777208 KBytes. So, the database datafile can have a maximum size of about 16 GByte on a 4k
database block size and Windows 2000 platform. The above shows that the dependency is on
the database block size for a given platform. Also, if we do not set MAXEXTEND for the datafile, the first datafile will grow to this maximum value and only then will Oracle shift to the next datafile. Profilers are helpful tools to investigate programs and to identify slow program parts and bottlenecks. Furthermore, you can determine how many times each procedure, function or any other code part is executed. To be able to use the DBMS_PROFILER package, you first have to install the following packages once per database. Do this as user SYS: Suppose Oracle is installed in ORACLE_HOME = D:\Oracle\Product\8.1.7 sqlplus sys/<passwd> Additionally, install the proftab script as the owner the profiled packages belong to: sqlplus scott/tiger A typical profile cycle looks like this:
1. Start Profiler Start your profiler before every profile cycle: sqlplus scott/tiger 2. Execute your PL/SQL Code For example, create and run the following function, which generates a random number using the package DBMS_RANDOM CREATE OR REPLACE FUNCTION myrand (n IN NUMBER) RETURN NUMBER IS SELECT myrand (5000) FROM dual; 3. Stop the profiler and get the run id: exec DBMS_PROFILER.STOP_PROFILER; The run id (in our example = 1) is used to identify the profile cycle. All results for all cycles remain stored. You can use the run ids to compare different cycles with different program codes against each other. 4. Evaluate PLSQL_PROFILER_DATA and PLSQL_PROFILER_UNITS Here is a code sample showing how to select profiler information: col line format 9999 heading "Line" SELECT s.line, You have to set your run id, a package name and the package owner. Specifying the source lines is not mandatory (not set in the example) but very convenient. Some development tools support the evaluation of these two profiler tables. However, if you select them in SQL*Plus you will get a lot of lines. In a first step you can display all the lines and later on, if you focus down to certain code parts, you may like to restrict your selection by line numbers. A typical output looks like this. Tot Tot The line number is followed by the number of executions and the total time in milliseconds. Enabling and Disabling Automatic Extension for a Datafile You can create datafiles or alter existing datafiles so that they automatically increase in size when more space is needed in the database. The files increase in specified increments up to a specified maximum. Advantage of using AUTOEXTEND ON
Disadvantage of using AUTOEXTEND ON If you have no automatic mechanism to check the growth of your Oracle datafiles, AUTOEXTEND ON will fill up the filesystem to 100%, and it is then not easy to reclaim the space. We think that the disadvantage is greater than the advantage; therefore, check your Oracle datafiles and reset AUTOEXTEND to OFF. Check AUTOEXTEND on the Oracle Datafiles To find out if a datafile is auto-extensible, query the DBA_DATA_FILES view and examine the AUTOEXTENSIBLE column. SELECT file_name,autoextensible FROM dba_data_files; Enabling automatic file extension You can specify automatic file extension by specifying an AUTOEXTEND ON clause when you create datafiles using the following SQL statements:
CREATE TABLESPACE rbs You can enable or disable automatic file extension for existing datafiles, or manually resize a datafile using the SQL statement ALTER DATABASE. The following example enables automatic extension for a datafile added to the USERS tablespace: ALTER TABLESPACE users The value of NEXT is the minimum size of the increments added to the file when it extends. The value of MAXSIZE is the maximum size to which the file can automatically extend. Disabling automatic file extension The next example disables the automatic extension for the datafile. ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' The Power of EXECUTE IMMEDIATE and CASE WHEN Never do procedurally what you can do in a single SQL statement. If you have a typical CASE construct you can use EXECUTE IMMEDIATE with a CASE statement. Suppose you have the following UPDATE: DECLARE CURSOR empcur IS Use dynamic SQL for the update, since PL/SQL does not understand the CASE expression before Oracle9i, but that's not really relevant. Unlearn "procedural" coding. Try to do it in SQL, resorting to procedural logic only when truly necessary. DECLARE Tracking the progress of a long running statement Sometimes you run an INSERT or DELETE statement that takes a long time to complete. You may have wondered how many rows have already been inserted or deleted, so that you can decide whether or not to abort the statement. Is there a way to display how many rows have been deleted while the statement is running? You can query the V$SESSION_LONGOPS view to track the progress of the statement. Example: Start the following long running INSERT: INSERT INTO bigemp SELECT * FROM bigemp; Check the progress: SELECT sid,sofar,totalwork,time_remaining SID
SOFAR TOTALWORK TIME_REMAINING Slow count(*) and the Highwater Mark For each object, Oracle also maintains a record of the highest relative block of the table used to hold data. This highwater mark is maintained in multiples of five blocks and is not reset unless the TRUNCATE command is executed. When Oracle performs operations requiring a full table scan, such as SELECT count(*), all blocks up to and including the highwater mark are read. If a table is created with 50,000 rows occupying 10,000 blocks, and those rows are subsequently deleted, the highwater mark will remain at 10,000, and a SELECT count(*) command will read all 10,000 blocks even though they are all empty. An even worse scenario is possible. Suppose that a table contains 50,000 rows, and the first 49,000 rows are then deleted. The blocks corresponding to the deleted data are placed at the end of the free block list. When the next INSERT statement is executed, Oracle finds the first block on the free block list, which is beyond the highwater mark. The effect is that all the free space (49,000 rows worth) is ignored, and the physical table becomes bigger. Full table scans and other similar operations still have to read all the empty blocks, and performance is significantly impacted. If you use SQL*Loader with the direct path option, these loads always begin at the highwater mark, so the table size may grow while leaving significant amounts of free space unused. To easily determine the current value of the highwater mark, use the following formula after analyzing the table: highwater mark = total blocks - empty blocks - 1 Total blocks for a table can be obtained by using the following query. 
SELECT blocks Likewise, the number of empty blocks (blocks above the highwater mark) can be obtained with this query: SELECT empty_blocks John Dixon published a script on http://www.revealnet.com which can be used to list all of the tables of a specified owner where the High Water Mark is, say, 20% larger than the actual data in the tables. This indicates which tables require a rebuild. Download Script: Oracle 7 Version / Oracle 8 Version SQLNET Trace Route from Client to Server The Trace Route Utility (TRCROUTE) enables administrators to discover what path or route a connection is taking from a client to a server. If TRCROUTE encounters a problem, it returns an error stack to the client instead of a single error. These additional error messages make troubleshooting easier. TRCROUTE is different from TNSPING in that it travels as a special type of connect packet, and is routed as such. As it travels toward its destination, the TRCROUTE connect packet collects the TNS addresses of every node it travels through. If an error occurs, TRCROUTE collects error information that shows where the error occurred. The Trace Route Utility displays the information collected on the client screen. You can redirect the TRCROUTE output to a file, and print it if you wish. Requirements Trace Route works only over Net8 and SQL*Net version 2.3 and later. Every node along the route from client to server must use SQL*Net version 2.3 or later. If a pre-2.3 node is on the path, the following error is displayed: TNS-03603: Encountered a node with pre-2.3 version of SQL*Net TRCROUTE shows what node along the path is responsible for any errors. Effect on Performance The Trace Route Utility uses minimal resources. It gathers information in the connect data of a special connect packet; standard connect packets are not affected. The server is not affected by TRCROUTE. The listener receives and processes the TRCROUTE connect packet.
It returns the information to the client by putting it into a refuse packet. The server does not need to start up any new processes or deal with dummy connections. Using TRCROUTE To invoke TRCROUTE, enter the following from the command line: $ trcroute net_service_name Example $ trcroute DIA3.WORLD Copyright (c) 1999 Oracle Corporation. All rights reserved. Route of TrcRoute: Node:
Client Time and
address of entry into node: Node:
Server Time and
address of entry into node: Identifying Databases ( SERVICE_NAMES = DB_NAME + DB_DOMAIN ) The Oracle9i software identifies a database by its global database name. A global database name consists of the database name and network domain. The global database name uniquely distinguishes a database from any other database in the same network domain. Example of a Global Database Name: DIA3.AKADIA.COM In the preceding example, DIA3 is the name of the database. The database name portion is a string of no more than 8 characters that can contain alpha, numeric, and additional characters. The database name is also assigned to the DB_NAME parameter in the init.ora file. AKADIA.COM is the network domain in which the database is located. Together, the database name and the network domain make the global database name unique. The domain portion is a string of no more than 128 characters that can contain alpha, numeric, period (.), and additional characters. The domain name is also assigned to the DB_DOMAIN parameter in the init.ora file. The DB_NAME parameter and the DB_DOMAIN name parameter combine to create the global database name value assigned to the SERVICE_NAMES parameter in the init.ora file. Global Database Name = SERVICE_NAMES = DB_NAME + DB_DOMAIN The System Identifier (SID) identifies a specific Oracle9i instance reference to the database. The SID uniquely distinguishes a database from any other database on the same computer. Multiple Oracle home directories enable you to have multiple, active Oracle databases on a single computer. Each database requires a unique SID and database name. Managing Files from PL/SQL with DBMS_BACKUP_RESTORE Oracle 8i now offers many File I/O utilities which can be used to manage files in the operating system filesystem. One of this File I/O package is DBMS_BACKUP_RESTORE. It includes many procedures and functions to normalize filenames for Windows NT environments. 
One of these functions is DELETEFILE, which can be used to remove a file from the filesystem. In the next example we show a way to delete old archive log files automatically; it works on Windows and Unix. Of course, make sure that you have valid backups or you risk deleting archives that you may need. First, the script gives a grant on objects owned by the user SYS. Then, it creates a directory to use the BFILENAME function. Finally, it creates two procedures: one that extracts the name of the archive and another that finds old archive logs and deletes them. 1. Set the necessary GRANTS as user SYS CONNECT sys/your_sys_password GRANT SELECT ON v_$archived_log TO SYSTEM; 2. Create an alias for a directory on the server's file system Use the CREATE DIRECTORY statement to create a directory object. A directory object specifies an alias for a directory on the server's file system where external binary file LOBs (BFILEs) are located. You can use directory names when referring to BFILEs in your PL/SQL code and OCI calls, rather than hard-coding the operating system pathname, thereby allowing greater file management flexibility. CONNECT SYSTEM/user_system_password; The PATH given in the above statement can also be read from the data dictionary in the variable LOG_ARCHIVE_DEST select value from v$parameter 3. Read filename from a given PATH CREATE OR REPLACE FUNCTION get_fname(p_path IN VARCHAR2) 4. Create procedure to delete archive logs older than one month from the filesystem CREATE OR REPLACE PROCEDURE delete_archive_logs IS CURSOR get_archive IS BEGIN 5. Test the procedure from SQL*Plus SQL> SET SERVEROUTPUT ON; Deleting: D:\ORADATA\ARK1\ARC\ARK1_83.ARC The procedure can easily be enhanced to read the ARCHIVEDIR from the data dictionary and to take the number of days back to delete (instead of the fixed 30 days) as an argument of the procedure DELETE_ARCHIVE_LOGS. Migration to 8i / 9i: Set "_SYSTEM_TRIG_ENABLED" to FALSE This important Tip was published on Oracle Metalink.
This parameter was introduced in Oracle 8.1. It is a HIDDEN parameter (It begins with an UNDERSCORE). This parameter can only be set in the init.ora file in Oracle8i. It can be changed dynamically using ALTER SYSTEM in Oracle9i. System triggers are a new feature in Oracle 8.1. When _SYSTEM_TRIG_ENABLED is set to TRUE (the default) then system triggers are enabled. In some cases it may be necessary to disable system triggers from firing by setting this parameter to FALSE. This should only be done for short periods for specific operations. It is not advisable to allow normal users onto the database whilst such triggers are disabled as system triggers may be used for audit or security checking by certain applications. Description
Oracle 8i
Oracle 9i
Oracle 8.0 / Oracle7
Explanation
Example Problem Scenarios
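A minimal sketch of how the parameter is typically set during such a migration step (the double quotes are needed because of the leading underscore; verify against the Metalink note for your exact release):

```sql
-- Oracle8i: only via init.ora, then restart the instance:
--   _system_trig_enabled = false

-- Oracle9i: can be changed dynamically:
ALTER SYSTEM SET "_system_trig_enabled" = FALSE SCOPE = MEMORY;

-- ... perform the migration or upgrade step ...

ALTER SYSTEM SET "_system_trig_enabled" = TRUE SCOPE = MEMORY;
```

Keep the window as short as possible and keep normal users off the database while system triggers are disabled, as noted above.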
Manually Removing all Oracle Components from NT / 2000 This important Tip was published on Oracle Metalink. If you have many Oracle Releases and Tools on your NT or W2000 Workstation, you may encounter problems to deinstall all of them with the Oracle Installer. It may be necessary to do a manually Deinstallation. This article describes the procedure how to manually perform the actions to create a "clean machine" on Microsoft Windows NT/2000 and how to manually remove all Oracle components (e.g.: oracle RDBMS database server, IAS, OEM, client installations, etc) and services from your computer. Removing Components on Windows NT/2000 To remove all Oracle components from a computer on Windows NT/Windows 2000:
Restrict the Number of Records While developing database applications we often have to restrict the number of records an end user can insert into a table. A simple solution is the following code fragment in the BEFORE-INSERT-FOR-EACH-ROW Trigger:
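A hedged sketch of such a trigger (master/detail table and column names are illustrative, not from the downloadable example). It relies on the fact that the mutating-table restriction does not apply to a single-row INSERT ... VALUES, and, as demonstrated below, it is not safe against concurrent transactions:

```sql
CREATE OR REPLACE TRIGGER trg_max_detail
BEFORE INSERT ON detail
FOR EACH ROW
DECLARE
  v_count NUMBER;
BEGIN
  -- Count the detail records already present for this master
  SELECT COUNT(*) INTO v_count
    FROM detail
   WHERE master_id = :NEW.master_id;

  IF v_count >= 5 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Maximum number of detail records (5) reached');
  END IF;
END;
/
```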
To reconstruct or understand the following descriptions, please download the example code ex_restr_nbrof_records.sql or consult the source code. We assume that the maximum number of detail records must be limited to five. We try to insert a sixth record:
So far so good. We'll now demonstrate what happens when we insert records from two concurrent transactions. Delete one record in order to be able to insert another one afterwards:
Invoke the SQL-Plus tool and insert one record
Invoke a second session by starting a SQL-Plus again and run the statement
We now have 6 records, but the maximum number allowed was 5! How can this situation be avoided? Oracle does not support the "dirty read" isolation level, which is defined with: A solution to this problem is to lock the corresponding master
record:
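A hedged sketch of this locking approach, again with illustrative master/detail names: the trigger first locks the parent row, so concurrent inserters for the same master serialize until commit:

```sql
CREATE OR REPLACE TRIGGER trg_max_detail
BEFORE INSERT ON detail
FOR EACH ROW
DECLARE
  v_dummy NUMBER;
  v_count NUMBER;
BEGIN
  -- Lock the master record; a second session blocks here until
  -- the first session commits or rolls back.
  SELECT master_id INTO v_dummy
    FROM master
   WHERE master_id = :NEW.master_id
     FOR UPDATE;

  -- Only one session at a time gets past the lock, so this count
  -- now reflects all committed inserts for this master.
  SELECT COUNT(*) INTO v_count
    FROM detail
   WHERE master_id = :NEW.master_id;

  IF v_count >= 5 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Maximum number of detail records (5) reached');
  END IF;
END;
/
```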
Delete the 2 records to test the solution:
There should be 4 records now:
Now we try again with the two concurrent sessions:
Change to the second session and issue:
Go back to the first session and run:
Remarks If an exact maximum number of records is specified for a table, using a lock is the only solution to avoid the situation demonstrated above. Often we want to limit the number of detail records in such a way that an end user can't claim a huge amount of disk space by inserting an unbounded number of records. In this case, the simple solution shown at the beginning will do. If the table doesn't use a foreign key constraint, you can implement an auxiliary table, insert one row and lock this row instead of the master record. Interacting with Oracle via the UNIX Korn shell UNIX remains a fabulous choice for hosting Oracle databases. Both technologies have proven to be more flexible and more stable for more time than most of us care to remember. The remarkable 'Korn' shell, when coupled with the strength of the standard UNIX tool set, seems to be able to extract the best from any programmer. While many Oracle customers shy away from actively adopting and supporting UNIX development, most are perfectly happy to accept 'shell script', and 'shell script' is all we need for many back-end applications. This article demonstrates how SQL*Plus can be used in the same manner as any UNIX tool. And you will find that interacting with an Oracle database via a 'shell script' program is simple, flexible and dynamic--the UNIX way. Background / Overview SQL*Plus will happily integrate into the standard UNIX IO streams. The simplest way to
demonstrate this point is with the following UNIX command. /bin/ksh Produces the following output: D [ Note: the '-s' flag suppresses the standard Oracle banner. ] Once you grasp this, the possibilities for using SQL*Plus from UNIX can become quite exciting. You can construct and run Dynamic SQL or PL/SQL statements without difficulty. There is also a mechanism available to a shell script programmer that avoids commit or rollback processing between SQL*Plus transactions. By using pipes to deliver SQL or PL/SQL statements to SQL*Plus, you can avoid the use of temporary files and construct SQL or PL/SQL statements 'on-the-fly'. You can also use pipes to collect the output generated from SQL*Plus (thereby avoiding temporary files, once again), and interpret the output of pipes using the UNIX tool set. This article uses a simple UNIX file to Oracle table interface to demonstrate UNIX/Oracle communications techniques. Is Oracle working properly? One of the first tasks required of an interface is to check that the target Oracle database is actually ready for business. I've seen and used many methods to do this, but my current favorite is to run a small predictable query and check the result. If the query performs as expected, then it is likely that Oracle is OK. The following function Is_Oracle_OK does the job. For Sun Solaris, the grep in /usr/xpg4/bin must be used; therefore the PATH variable is set up accordingly. #!/bin/ksh function Is_Oracle_OK if Is_Oracle_OK This script queries the "dual" table and then scans the output for the string that is expected if everything is working well ("XOK"). Note that care has been taken to ensure that if Oracle were to reject the submitted SQL, any fragment of an Oracle-generated error report will not meet the acceptance criteria set within the grep command.
Dynamically Generated SQL Now that we have established that the database is operational using the Is_Oracle_OK function, we want to insert some of the contents of the /etc/passwd file into a table. In this example, we want to load the first and fifth fields (delimited by '|') of each line that begins with the character 'r'. To add a little extra spice, the following example code creates, populates, queries and drops the example table. We issue a commit every time 10 records are inserted and after the last insert. Note the use of UNIX environment variables within the SQL statements. #!/bin/ksh print "WHENEVER
SQLERROR EXIT" print "create table ${TABLE} typeset -i
COUNT=0 # Count records inserted grep -E '^r' ${FILE} | while read LINE print " # Commit
every 10 records - COUNT ends with a '0' print "select * from ${TABLE};" In shell scripts, we usually turn autocommit off and take charge of issuing the commits ourselves. Note that the shell variables Item1 and Item5 are single-quoted within the insert SQL because they are strings. Don't quote numbers! Watch out for single-quote characters contained within the data source; they'll cause the SQL statements to fail unless you escape them or remove them. Keeping Track of Progress A neat trick to use in some situations (like debugging) is to save all the SQL*Plus input and output to a file. Try replacing the ... "} | sqlplus -s scott/tiger" .... command (on the last line of the previous example code), with (all on a single line) "} 2>&1 | tee -a /tmp/ora.txt | While this should work nicely, it's preferable to replace all the 'sqlplus -s scott/tiger' strings within the body of the code with an alias, because this allows you to offer a 'debug oracle' facility, which you can activate to record the Oracle IO. To do this, the beginning of the Oracle Shell-script code includes the following (again all on a single line) alias To_Oracle="tee -a /tmp/ora.txt | Once the alias has been defined, use the alias To_Oracle instead of the string 'sqlplus -s scott/tiger' within the body of the code; for example: $ print "select * from dual;" | To_Oracle Quick Error Check If SQL*Plus encounters an error, it generally reports the problem with an error code prefixed with either ORA or ERROR.
Armed with this knowledge, you can test that some Oracle interactions have worked correctly by simply scanning the output for an occurrence of an error prefix. For example, in the following code snippet the table name in ${TABLE} is deliberately left unset to produce an error:

#!/bin/ksh
# ${TABLE} is unset, so the select fails
if print "select * from ${TABLE};" | sqlplus -s scott/tiger 2>&1 |
   grep -Eq '^(ORA|ERROR)'
then
   print "Oracle reported an error"
fi

Once again, we like to define an alias for this call through SQL*Plus, but a function that takes the SQL as an argument could also be used to make things even prettier:

alias To_Oracle_OK="sqlplus -s scott/tiger 2>&1 | awk '/^(ORA|ERROR)/ {rc=1} {print} END {exit rc}'"

The awk at the end of the alias echoes the SQL*Plus output and returns a non-zero exit status when a line starts with an error prefix. The code now looks like this:

#!/bin/ksh
if print "select * from ${TABLE};" | To_Oracle_OK > /dev/null
then
   print "OK"
else
   print "Oracle reported an error"
fi

The BIG 'commit' problem

When a SQL*Plus session terminates, all uncommitted transactions are either lost or committed (depending on your setup). With the method just outlined, this makes things a bit difficult if you want to check how things are going without ending the current SQL*Plus session. To counter this problem, start a SQL*Plus session as a co-process and communicate with it using the print -p and read -p shell commands. For example, initiate a SQL*Plus session as a co-process with the following command:

sqlplus -s scott/tiger |&      # Co-process

You retrieve the SQL*Plus output with read -p. For a "select * from dual;" request, the collected lines are:

D
-
X

Note: UNIX also reports when the co-process has completed. Unfortunately, the read -p command will hang if there is no output to collect or the output stream has not been terminated. This can even happen if you're just too quick in trying to grab the output while Oracle is still cogitating. This method, however, does offer the programming prize of being able to fully communicate with Oracle via a single, unbroken SQL*Plus session.

To prevent the read command from hanging your program, place a marker in the output stream (for example, using the SQL*Plus 'prompt' command) after requesting some work, then ensure that the code does not read beyond that marker. Placing the marker also gives Oracle the moment it needs, preventing the 'too quick' hang mentioned earlier:

sqlplus -s scott/tiger |&                # Start the sqlplus co-process
print -p "select * from dual;"           # Request some work
print -p "prompt MARKER"                 # Place a marker in the output stream
while read -p LINE                       # Read up to, but not beyond, the marker
do
   [[ "${LINE}" = "MARKER" ]] && break
   print "${LINE}"
done

Newer versions of the Korn shell offer a time-out argument for the read command.

PL/SQL

When using the methods outlined in this article, PL/SQL can be used in just the same way as SQL. For example:

sqlplus -s scott/tiger |&                # Start sqlplus as co-process
print -p "begin"                         # Send a PL/SQL block
print -p "   null;"
print -p "end;"
print -p "/"
print -p "prompt MARKER"
while read -p LINE                       # Get the co-process output
do
   [[ "${LINE}" = "MARKER" ]] && break
   print "${LINE}"
done

Produces the output:

PL/SQL procedure successfully completed.

Conclusion

The advantage of Oracle shell scripts is that you can apply the full range of UNIX tools to the Oracle data. Given that the Oracle interface is reasonably simple, the productivity boost gained by a shell scripter using UNIX to manipulate the data can be significant. Have fun with Unix and Oracle!

How to escape special characters in Oracle ?

If you want to retrieve table names from ALL_TABLES where the table name is like 'ADD_%', you may notice that the query returns both ADDRESS and ADD_CODES, because the underscore is itself a wildcard that matches any single character:

create table address (p1 number);

select distinct table_name
  from all_tables
 where table_name like 'ADD_%';

TABLE_NAME
------------------------------
ADDRESS
ADD_CODES

Simply putting an escape character in front of the '_' in the pattern will still not give you what you want. Therefore the question is: how do you use LIKE to find data that contains an underscore or percent sign? The answer is to escape the underscore and/or percent sign in the LIKE template. You have to designate which character you would like to use via the ESCAPE keyword. A backslash is commonly used, but any character would actually work:

select distinct table_name
  from all_tables
 where table_name like 'ADD\_%' escape '\';

TABLE_NAME
------------------------------
ADD_CODES

How to return an exit code from SQL*Plus to the Unix Shell ?

If you want to return a code from SQL*Plus to the UNIX shell script, the WHENEVER SQLERROR directive can help. We can return a status from 0...255 to the UNIX shell. This is available in the $? environment variable (for the Korn or Bash shell) immediately after execution of a command. Here is a KSH script that shows how this might work:

#!/bin/ksh
sqlplus -s scott/tiger <<-EOF 1>/dev/null 2>&1
WHENEVER SQLERROR EXIT SQL.SQLCODE
variable rc number
begin
   :rc := 1;
   raise_application_error( (-20000-224) - :rc, 'Error!' );
end;
/
EOF
print $?

Run this code as follows and you will see:

$ ./test.ksh
1

The 1 is what you are looking for. The key is to use

raise_application_error( (-20000-224) - :rc, 'Error!' );

to raise the error.
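The byte truncation this trick relies on is easy to verify without Oracle (a minimal sketch; 20225 stands in for the ORA-20225 error code that a :rc of 1 would produce):

```shell
# The shell keeps only the low byte of an exit status (modulo 256).
# 20225 = 79 * 256 + 1, so only the 1 survives.
( exit 20225 )
echo $?
```

This prints 1, and it also shows why any exit code you design should stay in the range 0...255 in the first place.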
We can raise errors in a given range, but the shell keeps only an unsigned byte in the status return value (values 0...255): it takes our exit value and just looks at that last byte. The trick is that 20224 is exactly 79 * 256, so the last byte of 20224 + :rc is simply :rc. By using -20000-224 and subtracting your return code from it, we end up exiting with the value of your return code (given that :rc is in the range 0...255!).

Help ! SQL*Plus hangs after installing 'login.sql'

Introduction
Help .... SQL*Plus hangs ....
Example login.sql
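The original listing did not survive, so here is a minimal, illustrative login.sql (every setting shown is an assumption, not the original file). SQL*Plus executes this file automatically at startup when it is found in the current directory or in SQLPATH:

```sql
-- Illustrative login.sql (contents assumed, not the original listing)
set pagesize 24
set linesize 80
set sqlprompt "SQL> "
define _editor = vi
```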
How to restrict your users to only one Oracle session ?

How can I make sure that an end user has only one session in the database?
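The usual way is a profile with SESSIONS_PER_USER set to 1 (a sketch; the user and profile names here are assumptions). Note that resource limits must be enabled before the limit is enforced:

```sql
-- Enable resource limit enforcement (can also be set in the init/spfile)
alter system set resource_limit = true;

-- Create a profile that allows at most one concurrent session
create profile one_session limit sessions_per_user 1;

-- Assign the profile to the user (SCOTT is an assumed example user)
alter user scott profile one_session;
```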
That's it! Now any user with the ONE_SESSION profile can log on only once.

NVL2: Check for the Existence of NOT NULL Values
Here are the differences between the NVL and NVL2 functions: NVL(expr1, expr2) returns expr2 if expr1 is NULL, and expr1 otherwise. NVL2(expr1, expr2, expr3) returns expr2 if expr1 is NOT NULL, and expr3 if it is NULL.
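A few quick queries against DUAL make the difference concrete:

```sql
-- NVL: substitute a value when the first argument is NULL
select nvl(null, 'is null') from dual;             -- returns 'is null'
select nvl('x',  'is null') from dual;             -- returns 'x'

-- NVL2: choose between two values depending on NULL-ness
select nvl2('x',  'not null', 'null') from dual;   -- returns 'not null'
select nvl2(null, 'not null', 'null') from dual;   -- returns 'null'
```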
Oracle 8i and 9i Online Documentation

You can find the Oracle 8i and 9i Online Documentation at: http://tahiti.oracle.com/

Action Logging
Using an Action Log Table
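The original listing is missing; the following is a common sketch of an action log table and a logging procedure (all names are assumptions). The autonomous transaction ensures that log entries survive even if the calling transaction rolls back:

```sql
create table action_log (
   logged   date,
   username varchar2(30),
   action   varchar2(200)
);

create or replace procedure log_action (p_action in varchar2)
is
   pragma autonomous_transaction;   -- commit independently of the caller
begin
   insert into action_log (logged, username, action)
   values (sysdate, user, p_action);
   commit;
end;
/
```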
Truncating a Table from a Remote Database

Problem

You cannot issue a TRUNCATE TABLE statement against a remote table through a database link, because DDL cannot be executed over a database link. The workaround is to create a procedure on the remote database that truncates the table via dynamic SQL, and to call that procedure through the link.
To execute the procedure, use the following from the local database:
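The listing did not survive; a typical call looks like this (the procedure, table, and database link names are assumptions):

```sql
-- From the local database, through the database link REMOTE_DB:
exec truncate_tab@remote_db('SCOTT_TEMP_TAB')

-- For reference, the procedure on the remote database would look like:
create or replace procedure truncate_tab (p_table in varchar2)
is
begin
   -- TRUNCATE is DDL, so it must be issued via dynamic SQL
   execute immediate 'truncate table ' || p_table;
end;
/
```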
In release 8.1, a new SQL function, TO_LOB, copies data from a LONG column in a table to a LOB column. The datatypes of the LONG and LOB must correspond for a successful copy. For example, LONG RAW data must be copied to BLOB data, and LONG data must be copied to CLOB data. In the next example we show how to migrate a table with one LONG column to a CLOB datatype.

Create the LOB Tablespace

CREATE TABLESPACE lob1

Temporarily disable all Foreign Keys

set feed off;

Convert LONG to LOB in a temporary Table

Create a temporary table with the converted CLOB field.

CREATE TABLE lob_tmp

Drop and Rename Tables

DROP TABLE document;

Create the necessary Constraints and enable the Foreign Keys again

set feed off;

If you plan to store LOB data together with other data in the same table, then you should store the LOB data in a separate tablespace.

CREATE TABLE news (
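The statements above were truncated; an end-to-end sketch of the migration could look like the following (the DOCUMENT table, its ID and TEXT columns, the datafile path, and the storage figures are all assumptions):

```sql
-- Assumed starting point: a table DOCUMENT with a LONG column TEXT
create tablespace lob1
   datafile '/u01/oradata/lob1_01.dbf' size 100m;

-- Copy the data, converting LONG to CLOB with TO_LOB
create table lob_tmp
   tablespace lob1
   as select id, to_lob(text) text from document;

-- Swap the tables
drop table document;
rename lob_tmp to document;

-- LOB data can also be directed to its own tablespace per column:
create table news (
   id    number,
   text  clob
)
lob (text) store as (tablespace lob1);
```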