Using UTL_TCP to send E-Mail from PL/SQL with Oracle 8.1.6

UTL_TCP is a TCP/IP package that provides PL/SQL procedures to support simple TCP/IP-based communication between servers and the outside world. It is used by the SMTP package to implement Oracle server-based clients for the Internet e-mail protocol. For more information see: Oracle8i Supplied PL/SQL Packages Reference Release 2 (8.1.6). Note that this package requires the JServer option to be installed! The following procedure SEND_MAIL can be used to send an e-mail directly from the database.

CREATE OR REPLACE PROCEDURE send_mail (

Examples

set serveroutput on

Show Oracle Version and installed Options

The data dictionary views V$OPTION and V$VERSION can be used to get the current Oracle version and the installed options.

select banner
select parameter
select parameter
begin

Here is the output produced by the script:

Oracle Version: BANNER
Installed Options: PARAMETER
Not Installed Options: PARAMETER
Specific Port Information: IBMPC/WIN_NT-8.1.0

Temporary Tables in Oracle8i

A temporary table has a definition or structure that persists like that of a regular table, but the data it contains exists only for the duration of a transaction or session. Oracle8i allows you to create temporary tables to hold session-private data. You specify whether the data is specific to a session or to a transaction. There are two options: delete rows after commit, or delete rows on session exit.

create global temporary table mytemp (a date)

Show status of the temporary table:

select table_name, temporary, duration

Rows exist after insert:

insert into mytemp values (sysdate);

Inserted rows are missing after commit:

commit;

Avoid REVERSE KEY index together with a FOREIGN KEY

This alert comes from Oracle Support: a reverse key index on a foreign key column allows deletion of the parent key! The FOREIGN KEY integrity constraint is provided to define and ensure the integrity of a parent-child relationship between two tables.
It requires that each value in a column, or set of columns, match a value in a related (parent) table's UNIQUE or PRIMARY KEY. FOREIGN KEY integrity constraints also define referential integrity actions such as ON DELETE CASCADE, which specifies that upon deletion of a row in the parent table, all corresponding rows in the referencing (child) tables are deleted as well. If the ON DELETE CASCADE option is not specified for the FOREIGN KEY constraint, the deletion of a row from the parent table is prevented with an error message signalling the presence of corresponding rows in a child table.

If, however, a REVERSE KEY index is created on the column(s) defined in the FOREIGN KEY integrity constraint, the deletion of the row in the parent table is allowed to proceed without error, thereby leaving orphaned rows in the corresponding child table(s). If the FOREIGN KEY was created with the ON DELETE CASCADE option, this directive is ignored and the corresponding rows in the child table are not deleted.

Likelihood of Occurrence

If you have defined a REVERSE KEY index on column(s) designated as a FOREIGN KEY constraint and have deleted rows from the parent table, then it is likely that you have orphaned rows in the child table. No error messages are generated unless you disable and attempt to re-enable the FOREIGN KEY constraint. If orphaned rows exist in the child table, you will receive an ORA-02291 error message listing the name of the FOREIGN KEY constraint:

"ORA-02291: integrity constraint (SCOTT.FK_DEPTNO) violated - parent key not found"

Inserting in two or more tables with SQL*Loader

If you want to load data from an ASCII or EBCDIC file into
an Oracle database -- SQL*Loader is the tool. In this article we want to show how to
distribute the rows in the file into two tables depending on a value at a specified character
position in the file. You need the following SQL*Loader control file to accomplish this task.

OPTIONS (SKIP=1)

The first line in the EBCDIC file is skipped with OPTIONS (SKIP=1). The most interesting part is the WHEN clause. The attribute "ra" is the single character that has the value specified as a hex byte in the character encoding scheme, such as X'F1' (equivalent to 241 decimal); "X" must be uppercase. Due to National Language Support (NLS) requirements, hex 00 cannot be used (see the Oracle8 Utilities Guide).

Avoid Rollback Segment Problems with huge Updates / Deletes

If you have to UPDATE or DELETE a huge number of rows, you may encounter problems with rollback segments. One solution is to COMMIT after sets of n rows, as the next example shows.

declare

Special characters need 2 bytes in UTF8

If you import a dump file created with e.g. the WE8ISO8859P1 character set into UTF8, you can get an ORA-01401 error (inserted value too large for column). You can prevent this message by increasing the size of your VARCHAR2 fields, because a special character such as an umlaut needs 2 bytes in UTF8 but only one byte in WE8ISO8859P1. This means that you must resize all your VARCHAR2 fields if you want to import VARCHAR2 fields with umlauts.

How to set SYS password in Oracle for NT ?
After you have created the password file, you can relocate it as you choose.

How to avoid having remaining characters filled with blanks ?

You are spooling a file with SQL*Plus and want to avoid having the remaining characters filled with blanks or tabs. This may happen if you have SET LINESIZE 500 and your table rows are only 100 characters in size. The resulting script will not have the correct format, although the display on the screen will look correct. Another situation where this may happen is when you are dumping a table to a comma-delimited ASCII file to be used with SQL*Loader; the line length is likely to be padded with blanks, which will dramatically increase the size of the file. Use the SET TRIMSPOOL ON command at the beginning of your SQL script. This will trim the unneeded spaces in your file.

How to split Export or standard output into smaller pieces ?

If you export a whole Oracle database, or create a TAR archive of one of your filesystems, you may reach a file size bigger than 2 GB. On some Unix filesystems this is the maximum size of a single file. Or you may want to distribute your software release over the Internet, for which you usually create a TAR file that can be downloaded; for your customers it may be more convenient to download several small chunks than one huge file. To accomplish these tasks you need split and often a named pipe.

Create several pieces of a TAR archive
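The commands for this step did not survive in this copy of the article; below is a minimal sketch using split. The directory name "mydir", the piece prefix and the 500 MB piece size are assumptions, not from the original.

```shell
# Write the tar archive of "mydir" to stdout and split it into 500 MB
# pieces named mydir.tar.aa, mydir.tar.ab, ... (names/size are examples).
tar cf - mydir | split -b 500m - mydir.tar.

# Later, reassemble the pieces and extract:
cat mydir.tar.* | tar xf -
```

Because tar writes into a pipe here, no intermediate full-size archive is ever created on disk.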
Create several pieces of an Oracle Export
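Again the original commands are missing in this copy; the usual approach is a named pipe, sketched below. The pipe path, piece prefix, 1 GB piece size and the system/manager login are assumptions, and exp must of course run against a real database.

```shell
# Create a named pipe and let split read from it in the background,
# producing 1 GB pieces expdmp.aa, expdmp.ab, ...
mkfifo /tmp/exp_pipe
split -b 1024m /tmp/exp_pipe expdmp. &
exp system/manager full=y file=/tmp/exp_pipe

# To re-import, feed the pieces back through the pipe:
# cat expdmp.* > /tmp/exp_pipe &
# imp system/manager full=y file=/tmp/exp_pipe
```

The export never exists as one file bigger than the piece size, so the 2 GB file limit is never hit.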
How to remove the default value from a column ?

If you want to remove the default value from a column which already has a default value, use the following two equivalent statements in Oracle8i (note: in Oracle 7/8 they are not equivalent).

ALTER TABLE address MODIFY (name DEFAULT NULL);

Example

Create table ADDRESS, then set a default value for column NAME.

CREATE TABLE address (
ALTER TABLE address MODIFY (name VARCHAR2(20) DEFAULT 'Unknown');

Check the default value in the data dictionary

COLUMN column_name FORMAT A10
COLUMN_NAM DATA_DEFAU DATA_LENGTH

Remove the default value and check again

ALTER TABLE address MODIFY (name DEFAULT NULL);
COLUMN_NAM DATA_DEFAU DATA_LENGTH
ALTER TABLE address MODIFY (name VARCHAR2(20) DEFAULT NULL);
COLUMN_NAM DATA_DEFAU DATA_LENGTH

The view USER_TAB_COLUMNS shows the string NULL irrespective of whether a datatype is specified in the ALTER TABLE ... MODIFY statement or not. In Oracle7 and 8 a blank null is stored when you use ALTER TABLE address MODIFY (name DEFAULT NULL);

How to avoid performance disaster with PL/SQL tables ?

Last week we were called to examine an application written in PL/SQL with poor performance and high CPU consumption. The application fetches about 100'000 rows from the database into a PL/SQL table for further processing. The PL/SQL table fits completely into memory; no swapping takes place. A closer look showed that for each processing cycle a procedure was called with the PL/SQL table as an argument -- and here we found the performance disaster: in PL/SQL, arguments are passed by value, which means the whole table was copied for each cycle!

First we show a small example (Example 1) with this performance bottleneck -- DO NOT USE IT IN YOUR APPLICATION -- it's just a demonstration. Next we show how to avoid this situation in Oracle7/8 (Example 2), and finally we show the NOCOPY hint in Oracle8i (Example 3).
Example 1 and 2 within Package PLSQL_TAB

The procedure run_local uses a local declaration of the table emp_tab. For each call of process_local, the full table emp_tab is copied to the procedure and back after processing (IN OUT definition), since the content of the table could be changed. Never do it that way! The procedure run_global uses a global declaration of the table emp_tab.

CREATE OR REPLACE

Example 3

When the parameters hold large data structures, all this copying slows down execution and uses up memory. To prevent that, you can specify the NOCOPY hint in Oracle8i, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference. Remember, NOCOPY is a hint, not a directive.

DECLARE

Speed up your transactions with PL/SQL tables

Data warehouse applications often process several million rows; frequently each record is selected, processed and updated again in the database. This can be a very time-consuming task. If you can split the whole transaction into smaller pieces and your machine has enough physical memory, use PL/SQL tables, which are kept in memory. Select all needed rows and insert them into the in-memory PL/SQL table; then analyze, group or process them directly in memory; finally write the whole PL/SQL table back to the database. In one of our DSS applications we used this approach, and the performance gain is incredible: 20'000 records have been processed 100 - 150 times faster with PL/SQL tables than with the direct method (each record separately from / to the database). A further advantage is that the transaction is open for a shorter time. One disadvantage of PL/SQL tables is that you have to use a BINARY_INTEGER as the key for the table; real primary keys cannot be used in PL/SQL tables. Therefore you have to create an artificial key based on the real primary key.

To demonstrate this performance gain, we use a simple SELECT, PROCESS, UPDATE transaction, first processing row by row, then using a PL/SQL table.

Without PL/SQL table --
With PL/SQL table --

How to migrate LONG fields into VARCHAR2

In Oracle7 the LONG datatype often causes difficulties: for example, a table in Oracle7 can have only one LONG column, LONGs cannot be replicated, and more. We have seen that in many cases simple text is inserted into these LONG fields.
Due to this, we suggest converting the LONGs into VARCHAR2 with the following simple PL/SQL script. In Oracle8 you may specify a longer VARCHAR2 (4000 bytes).

CREATE TABLE long_tab (
ALTER TABLE long_tab ADD (char_field VARCHAR2(2000));
CREATE OR REPLACE PROCEDURE long2varchar IS
EXECUTE long2varchar;

How to trace the session of another user ?

If you want to look at or trace a certain Oracle process belonging to an Oracle user, the best way is to trace the session of this user. As a DBA you can trace other sessions; normal users can only trace their own sessions.

1). Enable TIMED statistics
2). Get SID and Serial#
3). Stop the process
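A sketch of how the first two steps might look from the shell: the SID/SERIAL# values 42/1007 are placeholders to be looked up in V$SESSION, and DBMS_SYSTEM is an undocumented SYS package, so treat this as an illustration, not a recipe.

```shell
# Generate the SQL for steps 1) and 2) into a script file; run the
# resulting file in SQL*Plus as a DBA. SID/SERIAL# are placeholders
# taken from V$SESSION for the session you want to trace.
SID=42; SERIAL=1007
cat <<EOF > trace_session.sql
ALTER SYSTEM SET timed_statistics = TRUE;
EXECUTE dbms_system.set_sql_trace_in_session($SID, $SERIAL, TRUE);
EOF
# then run e.g.: sqlplus "sys/<password>" @trace_session.sql
```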
Then use TKPROF in USER_DUMP_DEST with the generated trace file.

How to start an Oracle database with a corrupted or lost dbfile ?

Help -- we cannot start the Oracle database ... we get an error message telling us that one datafile is lost or corrupted ... what can we do?

ORA-01157: cannot identify/lock data file 10 - see DBWR
trace file

If the database is in ARCHIVELOG mode with a working online backup concept, you are a lucky person: recover the database and everything is OK. But what can we do if the database is in NOARCHIVELOG mode and no backup is present?

a). If the database is still running, do the following

1). Switch the damaged datafile to the RECOVER status

svrmgr> ALTER DATABASE DATAFILE
'/u01/db/test.dbf' OFFLINE DROP;

2). Stop and start the database to verify that it can be started without 'test.dbf'.

svrmgr> SHUTDOWN IMMEDIATE;

3). DROP the tablespace to which the datafile belongs

svrmgr> DROP TABLESPACE test INCLUDING CONTENTS;

b). If the database is not running, do the following

With a damaged or lost datafile that is still ONLINE, the database can only be stopped with SHUTDOWN ABORT. Therefore it's better to switch the datafile to the RECOVER status as shown above before stopping the database. However, there is a way to switch the datafile to the RECOVER status when the database is stopped.

1). Mount the database and switch the damaged datafile to the RECOVER status

svrmgr> STARTUP MOUNT;

2). DROP the tablespace to which the datafile belongs

svrmgr> DROP TABLESPACE test INCLUDING CONTENTS;

3). Stop and start the database to verify that it can be started without 'test.dbf'.

svrmgr> SHUTDOWN IMMEDIATE;
How to drop a tablespace containing tables with references ?

You cannot drop a tablespace containing tables with active referential integrity constraints.

SQL> DROP TABLESPACE test1 INCLUDING CONTENTS;

Before you can drop the tablespace you have to drop or disable the referential integrity constraints. You can verify the tables with references to tables in other tablespaces.

Create the following Tablespaces and Tables

CREATE TABLESPACE test1 datafile '/u01/db/test1.dbf' SIZE 2M REUSE;
CREATE TABLE test1 (
CREATE TABLE test2 (

Try to drop tablespace test1

SQL> ALTER TABLESPACE test1 OFFLINE;

List the Primary and Foreign Key Relationships. The output of the SQL statement is listed below.
List and Drop or Disable the Foreign Key Constraint on table 'TEST2'

SELECT constraint_name, table_name
CONSTRAINT_NAME TABLE_NAME

SQL> ALTER TABLE test2 DROP CONSTRAINT SYS_C005395;

Drop the Tablespace

SQL> DROP TABLESPACE test1 INCLUDING CONTENTS;

How to load EBCDIC data into an Oracle database ?

Last week we had to load EBCDIC data from an IBM host into an Oracle8 database. Well, not a job we do every day, but with some hints from Oracle Support an easy task.

Converting between ASCII and EBCDIC

The "dd" Unix command can be used to convert EBCDIC data files to ASCII
and vice-versa.

dd if=data.ebc of=data.asc conv=ascii cbs=87

The example takes data.ebc as an EBCDIC input file with a fixed record length of 87, converts it into ASCII, and writes the converted output to file data.asc.

Loading EBCDIC data directly into the Oracle Database using SQL*Loader

Specify the character set WE8EBCDIC500 for the EBCDIC data. The following example shows the SQL*Loader control file to load a fixed-length EBCDIC record into the Oracle database.

LOAD DATA

How to show column and table comments ?

If you have to verify an unknown database schema, it can be very helpful to list the table and column comments, especially if there is no documentation. Unfortunately, some designers are too lazy to specify these comments. Comments are created with the following SQL statement:

COMMENT ON TABLE emp IS 'Employee Information';

To generate a report with all column and table comments, you can use the following simple SQL*Plus script. The script generates the following output:

Table Comments

Inline Views

One of the hidden features of Oracle are the so-called Inline Views (Immediate View, View on the Fly, Anonymous View). They are very useful to overcome some restrictions such as:
Instead of creating a view with CREATE VIEW, which requires the necessary privileges, you can specify the view directly within the FROM clause. The following example shows how to print the number of rows from two tables in one line using an Inline View.

SELECT e.emp_count, d.dept_count
EMP_COUNT DEPT_COUNT

Another example using an Inline View gives each department's total employees and salaries as a decimal value of all the departments:

SELECT a.deptno "Department",
Department %Employees %Salary

The next example demonstrates how to overcome the restriction that a SELECT statement with a CONNECT BY, used to represent hierarchical dependencies, can specify only one table in the FROM clause. There is no possibility to enter a JOIN directly in the FROM clause. This problem can be solved with an Inline View. In the top SELECT list, all attributes must be defined, including those within the Inline View.

COLUMN employee FORMAT A12 HEADING "Chart"
SELECT employee, empno, job, d.dname, mgr, sal, mgrno

Note that the SELECT list corresponds with the Inline View (attributes without table alias). The department name (d.dname) does not exist within the Inline View; it is selected directly from the table DEPT using the JOIN e.deptno = d.deptno in the WHERE clause, outside the Inline View.

Chart EmpNo Job Dept Boss Salary BossNo

Using PL/SQL to speed up UPDATEs

PL/SQL may provide an especially noticeable performance gain when you are running large batch update jobs in which parent / child updating plays a role. Consider the scenario in which the ACCOUNT (parent) table is updated every night from the daily "collection" table BOOKING (child). There are about 10'000 rows in ACCOUNT and approximately 200 rows in BOOKING.

Create the Tables

CREATE TABLE account (
CREATE TABLE booking (

UPDATE using plain SQL

Using plain SQL may take several minutes, due to the correlated subquery.

For Oracle < Version 8.1.5
UPDATE account A
For Oracle >= Version 8.1.5
UPDATE account A

UPDATE using PL/SQL

Now suppose you use PL/SQL to achieve the same result. This may run in seconds.

DECLARE

Using PL/SQL instead of traditional SQL may result in a substantial performance gain; however, PL/SQL does not always produce such results. If the BOOKING table contains 1'000 or more rows rather than 200, the result is different. If a child table updates more than 10% to 15% of the parent table, PL/SQL will actually make the update run more slowly: PL/SQL uses the table's indexes and performs more physical reads against the database than plain SQL, which performs a full table scan in this case. Always experiment with alternatives, benchmark the options, and question the results.

How to cache a table in the SGA ?

In order to cache a table in the SGA it has to be smaller than CACHE_SIZE_THRESHOLD as set in the init.ora file. However, the cost-based optimizer doesn't take table caching into account when doing optimization, so you may want to force the table using hints.

Optimize Table Structures with CTAS and ORDER BY

Specify CREATE TABLE AS SELECT ... ORDER BY if you intend to create an index on the same key as the ORDER BY key column. Oracle will cluster data on the ORDER BY key so that it corresponds to the index key.
With this "trick", the cost-based optimizer will more often use the index instead of performing a full table scan.

CREATE TABLE customer_reorg AS

Creating Oracle 7 Export Files from Oracle 8

You can create an Oracle release 7 export file from an Oracle8i database by running Oracle release 7 Export against an Oracle8i server. To do so, however, the user SYS must first run the CATEXP7.SQL script, which creates the export views that make the database look, to Export, like an Oracle release 7 database. The following steps describe the procedure in more detail:
How to remove in-doubt transactions ?

In-doubt transactions may occur at two-phase commit time for network, node failure or similar reasons. Normally, if a two-phase commit fails, you may have some entries in the views DBA_2PC_PENDING and DBA_2PC_NEIGHBORS. To force the in-doubt transaction to roll back, use the command:

ROLLBACK FORCE <transaction_id>;

Unfortunately, sometimes the entries are still there ... and you may discover in your alert<sid>.log file something like: ora-600 [18104] ... This points to bug 445604, fixed in Oracle version 7.3.4. It is possible to use the package DBMS_TRANSACTION to solve the problem if ROLLBACK FORCE does not clean all entries. Do as follows:

EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('xid');

How to install the Help-Tables for SQL*Plus and PL/SQL ?

It can be handy to have the help tables for SQL*Plus online using the SQL> help command. Usually these tables in $ORACLE_HOME/sqlplus/admin/help are not installed by default, but it's easily done with the following shell script.

#!/bin/sh
$ORACLE_HOME/bin/svrmgrl << EOF
$ORACLE_HOME/bin/svrmgrl << EOF
cd $ORACLE_HOME/sqlplus/admin/help
$ORACLE_HOME/bin/sqlldr userid=system/manager control=plushelp.ctl

How much Redo-Log does this Transaction generate ?

The Oracle Dictionary is a wonderful pool of many hidden resources. In the Oracle system statistics table V$SYSSTAT you can find the number of blocks written to the redo log, in the attribute STATISTIC# = 71. If you measure this value before and after a transaction, you can find out approximately how much redo log the transaction generates. Of course this is only approximate, because other transactions may exist at the same time.

SQL> select * from v$sysstat where statistic#=71;

Example

SQL> select value from v$sysstat where statistic#=71;
SQL> create table myhelp as select * from system.help;
SQL> select value from v$sysstat where statistic#=71;

Flush Shared Pool when it reaches 60-70% of its capacity

On a recent project we had a problem where performance would start
out acceptable at the beginning of the day and by mid-day would be totally unacceptable. Investigation showed that the third-party application that ran on top of the Oracle database was generating ad hoc SQL without using bind variables. This generation of ad hoc SQL without bind variables was resulting in a proliferation of non-reusable code fragments in the shared pool. A flush of the shared pool was the only solution to this performance problem, after which all queries again returned in less than a second. It was determined that an automatic procedure was needed to monitor the shared pool and flush it when it reached 60-70% of capacity. The following procedure was created:

CREATE OR REPLACE VIEW sys.sql_summary AS SELECT
CREATE OR REPLACE PROCEDURE sys.flush_it AS
CURSOR get_share IS
CURSOR get_var IS
CURSOR get_time is
todays_date DATE;
BEGIN
OPEN get_share;
FETCH get_share INTO share_mem;
FETCH get_var INTO variable_mem;
mem_ratio:=share_mem/variable_mem;
IF (mem_ratio>0.3) THEN

This procedure was then loaded into the job queue and scheduled to run every hour using the following commands:

DECLARE

Show who am I with my environment

set termout off
User: SCOTT on database ARK1.WORLD

Check current ROLE for Database Access

When a user logs on, Oracle enables all privileges granted explicitly to the user and all privileges in the user's default roles. During the session, the user or an application can use the SET ROLE statement any number of times to change the roles currently enabled for the session. The number of roles that can be concurrently enabled is limited by the initialization parameter MAX_ENABLED_ROLES. You can see which roles are currently enabled by examining the SESSION_ROLES data dictionary view, for example:

SELECT role FROM session_roles;

You may check the DB access in your application context using the following code construct.

DECLARE

Useful UNIX Utilities to manipulate fixed length records

The Unix operating system has a number of utilities that can be very
useful for pre-processing data files to be loaded. This article describes such Unix commands with examples of their utilization. The Unix version of reference here is Sun Solaris, which is based on Unix System V Release 4. For syntax details and the full range of options for each command, consult the man pages on your system and your operating system documentation.

EXAMPLE 1

Let us assume a load with the following SQL*Loader control file:

LOAD DATA

Here are the contents of data file example1.dat:

7782 CLARK MANAGER 7839 2572.50 0.20
10 EXAMPLE 2 Let us assume another load with the following control file: LOAD DATA Below are the contents of data file example2.dat: 12,RESEARCH,"SARATOGA" The performance of direct path loads can be significantly improved by
presorting the input data on indexed columns. The Unix command to be used for presorting is "sort". In Example 1, suppose you have added the SORTED INDEXES (empno) clause to the control file to indicate that fields in the data file are presorted on the EMPNO column. To do that presorting, you would enter at the Unix prompt: sort +0 -1 example1.dat > example1.srt This will sort file example1.dat by its first field (by default fields are delimited by spaces and tabs) and send the output to file example1.srt: 7654 MARTIN SALESMAN 7698 1894.00 0.15 30 In Example 2, if you wanted to sort file example2.dat by column DNAME, you would enter: sort -t, -d +1 -2 example2.dat > example2.srt where "-t," indicates that commas are delimiters, "-d" causes sort to consider only letters and digits in comparisons, and example2.srt is the output file: 10,"ACCOUNTING",CLEVELAND Eliminating fields from the data file Often, it is necessary to remove one or more fields from all the records in the data file. The Unix command that does that is "cut". In Example 1, if you want to eliminate the COMM field altogether from the data file, enter at the Unix prompt: cut -c1-38,44- example1.dat > example1.cut where the "-c" option specifies the character ranges that you want to extract from each record. The output file example1.cut contains: 7782 CLARK MANAGER 7839 2572.50
10 In Example 2, to eliminate the LOC field from the data file, you would enter: cut -f1-2 -d, example2.dat > example2.cut where "-f1-2" indicates you want to extract the first two fields of each record and "-d," tells cut to treat comma as a delimiter. The output file example2.cut would contain: 12,RESEARCH Replacing characters in the data file Two Unix commands can be used here: "tr" or "sed". For instance, if you want to replace all double quotes in the data file in Example 2 by single quotes, you may enter: cat example2.dat | tr \" \' > example2.sqt The piped "cat" is necessary because tr's input source is the standard input. Single and double quotes are preceded by backslashes because they are special characters. The output file will be: 12,RESEARCH,'SARATOGA' Similarly, to substitute colons for commas as delimiters in Example 2, you may enter: sed 's/,/:/g' example2.dat > example2.cln The output would be: 12:RESEARCH:"SARATOGA" Eliminating characters from the data file Just as for replacing characters, "tr" and "sed" can be used for eliminating them from the data file. If you want to remove all double quotes from the data file in Example 2, you may type: cat example2.dat | tr -d \" > example2.noq The contents of file example2.dat are piped to the tr process, in which the "-d" option stands for "delete". The output file example2.noq would look like: 12,RESEARCH,SARATOGA An identical result would be obtained by using sed: sed 's/\"//g' example2.dat > example2.noq The string in single quotes indicates that double quotes should be replaced by an empty string globally in the input file. Another interesting usage of tr would be to squeeze multiple blanks between fields down to a single space character. 
That can be achieved by doing: cat example1.dat | tr -s ' ' ' ' > example1.sqz The output file would look like: 7782 CLARK MANAGER 7839 2572.50 0.20 10 Inserting characters into the data file A typical situation in which you may need to insert characters into the datafile would be to convert a fixed position data file into a delimited one. The data file in Example 1 is a fixed position one. To convert it into a file delimited by commas, you would enter cat example1.dat | tr -s ' ' ',' > example1.dlm and obtain 7782,CLARK,MANAGER,7839,2572.50,0.20,10 Merging different files into a single data file Merging can be done by using "paste". This command allows you to specify a list of files to be merged and the character(s) to be used as delimiter(s). For instance, to merge the data files in Examples 1 and 2, you may enter: paste -d' ' example1.dat example2.dat > example.mrg where "-d' '" specifies a blank character as the delimiter between records being merged and example.mrg is the merged output file: 7782 CLARK MANAGER 7839 2572.50 0.20
10 12,RESEARCH,"SARATOGA" Eliminating duplicate records/fields from the data file The Unix command for this is "uniq". It eliminates or reports consecutive lines that are identical. Because only adjacent lines are compared, you may have to use the sort utility before using uniq. In Example1, suppose you wanted to keep only the first entry with DEPTNO = 10. The Unix command would be: uniq +43 example1.dat > example1.unq The "+43" indicates that the first 43 characters in each record should be ignored for the comparison. The output file example1.unq would contain: 7782 CLARK MANAGER 7839 2572.50 0.20
10 Counting the number of lines in the data file Use "wc". For example: wc -l example1.dat which indicates that the file contains three lines. Converting between ASCII and EBCDIC The "dd" Unix command can be used to convert EBCDIC data files to ASCII and vice-versa. For example: dd if=example1.ebc cbs=86 conv=ascii > example1.asc takes example1.ebc as EBCDIC input file, converts it into ASCII, and writes the converted output to file example1.asc. The "dd" and "tr" commands can also be used for converting between uppercase and lowercase characters. Because Oracle is currently case-sensitive, this can be useful in many situations. For example, to convert all characters in file example1.dat from uppercase to lowercase, simply enter dd if=example1.dat conv=lcase > example1.low or cat example1.dat | tr "[A-Z]" "[a-z]" > example1.low The contents of output file example1.low will be: 7782 clark manager 7839 2572.50 0.20
10

To convert this file back to uppercase characters, type

dd if=example1.low conv=ucase > example1.dat

or

cat example1.low | tr "[a-z]" "[A-Z]" > example1.dat

Why to use locally managed tablespaces ?

Overview

A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary. Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents. The sizes of extents that are managed locally can be determined automatically by the system. Alternatively, all extents can have the same size in a locally-managed tablespace.

A tablespace that manages its extents locally can have either uniform extent sizes or variable extent sizes that are determined automatically by the system. When you create the tablespace, the UNIFORM or AUTOALLOCATE (system-managed) option specifies the type of allocation. For system-managed extents, you can specify the size of the initial extent and Oracle determines the optimal size of additional extents, with a minimum extent size of 64 KB. This is the default for permanent tablespaces. For uniform extents, you can specify an extent size or use the default size, which is 1 MB. Temporary tablespaces that manage their extents locally can only use this type of allocation. The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE are not valid for extents that are managed locally.

Advantages

Local management of extents avoids recursive space management operations, which can occur in dictionary-managed tablespaces if consuming or releasing space in an extent results in another operation that consumes or releases space in a rollback segment or data dictionary table. Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents.

Example:
A large number of inserts into a dictionary-managed tablespace with a small extent size may cause many I/Os to the SYSTEM tablespace and consequently to the redo log files. Also, large sorts in "read-only" databases may cause many I/Os to the log files, because extent allocation in a dictionary-managed temporary tablespace updates the SYSTEM tablespace.

How to calculate the size of locally managed tablespaces?

When creating tablespaces with a uniform extent size, it is important to understand that 64 Kbytes per datafile are allocated for the storage management information. When creating database files, add an additional 64 Kbytes to the size of your datafile. Consider the following example to illustrate the matter:

SQL> CREATE TABLESPACE demo1
  2  DATAFILE '<path>/demo1.dbf' SIZE 10M
  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;

Tablespace created.

SQL> select bytes from dba_free_space where tablespace_name = 'DEMO1';

What happens here is that we ask for 5 Mbyte extents in a 10 Mbyte file. After 64 Kbytes are allocated for the bitmap, we are left with one 5 Mbyte extent and one piece of less than 5 Mbytes. We cannot use the less-than-5-Mbyte piece, so it does not show up -- it is wasted. This can also happen with larger uniform extents, when the remaining space in the datafile is just 64 Kbytes short of accommodating your uniform extent size.

SQL> drop TABLESPACE demo1;

Tablespace dropped.

If you change the test case to allow for the extra 64 Kbytes:

SQL> CREATE TABLESPACE demo1
  2  DATAFILE '<path>/demo1.dbf' SIZE 10304K
  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;

Tablespace created.

SQL> select bytes from dba_free_space where tablespace_name = 'DEMO1';

You can see that when we add 64 Kbytes to the datafile size, the full 2 extents you asked for are there. Datafiles for locally managed tablespaces should therefore be sized 64 Kbytes larger than the space you actually want to allocate.

Download Scripts to create a database (8.1.6) with locally managed tablespaces for NT4.

How to create a locally managed temporary Tablespace?

The syntax to create a locally managed temporary tablespace differs from the syntax normally used to create a permanent tablespace:

-- Locally managed (SIZE + 64K for Header Bitmap)
SQL> CREATE TEMPORARY TABLESPACE temp
  2  TEMPFILE '<path>/temp01.dbf' SIZE 1088K
  3  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

You may get an ORA-03212 error message after creating a locally managed tablespace. This happens when you create an Oracle user without specifying a temporary tablespace; note that temporary segments cannot be created in permanent tablespaces. Therefore, always create users with a locally managed temporary tablespace.

ORA-00604: error occurred at recursive SQL level 1

oerr ora 03212
03212, 00000, "Temporary Segment cannot be created in
locally-managed tablespace"

How to implement inheritance in a relational DB?

Oracle8 is an object-relational database, but one of the most central concepts of object technology, inheritance, is not supported by many object-relational databases. Therefore the big question is how inheritance can be implemented in a plain relational database. The following example shows some possible ways. We have the generic class VEHICLE, which has the three attributes A, B, C common to all specific classes such as BIKE and CAR. In object terminology, BIKE and CAR are inherited from VEHICLE. Both BIKE and CAR may overwrite the common attributes A, B, C. Besides this, the class CAR has the attributes D, E and the class BIKE has the attributes F, G, H. The following figure shows the situation in object terminology.

Possible Solutions in a Relational Database

The object inheritance shown above can be represented in an entity relationship diagram, as used in the world of relational databases, as follows:
Solution A: One table including all attributes

The simplest solution is to create one table with all attributes. The advantage is simplicity, but this solution should only be used if there are only a few attributes specific to CAR and BIKE. If a class such as BIKE has many attributes, most of them are NULL whenever the row describes a car. Another disadvantage is that all attributes D, E, F, G, H must be defined as nullable. The primary key ID is generated with a sequence in this example, but this is not a must.

Solution B: Two tables with common attributes in both tables

The two-table solution, one table for CAR and one table for BIKE, eliminates the problem of having (many) unused attributes. The disadvantage is that the common attributes A, B, C must exist in both tables. Again, the primary key is generated with the same sequence.

Solution C: Three tables with DB-Trigger to check OR constraint

The three-table solution eliminates all disadvantages of the above solutions, but adds some overhead. The OR constraint (a VEHICLE must be either a CAR or a BIKE) cannot be checked with referential integrity, therefore we need a (simple) database trigger for it. The following code outlines solution C; the column definitions are reconstructed from the attribute lists above, and the VARCHAR2 lengths are illustrative:

CREATE TABLE vehicle (
  id NUMBER PRIMARY KEY,
  a  VARCHAR2(30),
  b  VARCHAR2(30),
  c  VARCHAR2(30)
);

CREATE TABLE car (
  id NUMBER PRIMARY KEY,
  d  VARCHAR2(30),
  e  VARCHAR2(30)
);

ALTER TABLE car ADD (
  CONSTRAINT car_fk FOREIGN KEY (id) REFERENCES vehicle (id)
);

CREATE TABLE bike (
  id NUMBER PRIMARY KEY,
  f  VARCHAR2(30),
  g  VARCHAR2(30),
  h  VARCHAR2(30)
);

ALTER TABLE bike ADD (
  CONSTRAINT bike_fk FOREIGN KEY (id) REFERENCES vehicle (id)
);

CREATE OR REPLACE TRIGGER check_or
  -- enforces the OR constraint: a VEHICLE row must be referenced
  -- by a CAR row or a BIKE row, but not by both
  ...

INSERT INTO vehicle VALUES (1,'Has Weels','Licence','Driver');
ERROR at line 1:

Do NOT backup temporary tablespace

You have created a locally managed temporary tablespace and are
attempting to back it up:

SQL> ALTER TABLESPACE <TABLESPACENAME> BEGIN BACKUP;

You receive the following error:

ORA-03217: invalid option for alter of TEMPORARY TABLESPACE

Do not back up temporary tablespaces. This error is issued because:

- ALTER DATABASE and ALTER TABLESPACE do not work on tempfiles, as tempfiles have no reference in the data dictionary.
- There is no need to back up tempfiles: they are used and destroyed every time you start up and shut down the database.

The locally managed tempfile is a new feature in Oracle8i that avoids the recursive space management operations which can occur in dictionary-managed tablespaces.

ORA-27146: post/wait initialization failed on Solaris / Oracle 8.1.6

This message occurs when Oracle runs into a resource problem while starting the background processes. Make sure that enough shared memory segments and semaphores are specified in /etc/system, using the following guidelines.

Definitions

SEMMNS  The maximum number of semaphores that can be allocated system-wide.
SEMMNI  The maximum number of semaphore sets (identifiers) on the system.
SEMMSL  The maximum number of semaphores per semaphore set.
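These parameters are set in /etc/system; a typical fragment might look as follows (the values are illustrative only and must be derived from the guidelines below):

```
set semsys:seminfo_semmns=400
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set shmsys:shminfo_shmmax=4294967295
```

Remember that changes to /etc/system only take effect after a reboot.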
Guidelines

First list the 'processes' parameter from the "init<SID>.ora" file of every database on the system.

SEMMNS  Sum the 'processes' values over all "init<SID>.ora" files; this is the number of semaphores required by Oracle to start ALL databases. Add to this any other system requirements and ensure SEMMNS is AT LEAST this value. For Oracle 8.0.x and 8.1.x, allocate twice as many semaphores as there are processes in the "init<SID>.ora" file: for example, if processes = 200, Oracle will need 400 semaphores to start up the SGA.

SEMMNI  Semaphores are allocated by Unix in 'sets' of up to SEMMSL semaphores per set. You can have a MAXIMUM of SEMMNI sets on the system at any one time. SEMMNI is an arbitrary figure, best set to a round figure no smaller than the smallest 'processes' figure of any database on the system; in our example we would set SEMMNI = 100 (see above).

SEMMSL  We usually set SEMMSL to the same value as SEMMNI.

Enter the kernel parameters in /etc/system and reboot the Solaris machine after editing /etc/system. Use ipcs -s to find out which semaphores are held by oracle. If no Oracle processes are running, then no semaphores should be held by the owner oracle. To remove leftover semaphores, use ipcrm -s ID, where ID is the semaphore identifier.

Create a CROSS MATRIX report using DECODE

Example of a CROSS MATRIX report implemented using standard SQL: each department of the well-known EMP demo table becomes one column of the report.

SELECT job,
       SUM(DECODE(deptno, 10, sal)) Dept10,
       SUM(DECODE(deptno, 20, sal)) Dept20,
       SUM(DECODE(deptno, 30, sal)) Dept30,
       SUM(DECODE(deptno, 40, sal)) Dept40
  FROM emp
 GROUP BY job;

Oracle supports different data types which allow the database designer to define the length or precision of these types. Especially VARCHAR2 and NUMBER are scalable.

VARCHAR2

One of the main advantages of the data type VARCHAR2 is its variable
length. As most database designers know, only the used number of characters, up to the defined length, is stored in the database. No spaces are padded up to the specified size for VARCHAR2.

CREATE TABLE shortVar2 (shorttext VARCHAR2(12));

A second table with a much larger maximum length (for example VARCHAR2(2000)) is created the same way. Now some records are inserted into both tables; the length of the inserted text is the same in each case, in this example 11 characters:

FOR i IN 1..25000 LOOP
  INSERT INTO shortVar2 VALUES ('ElevenChars');  -- 11 characters
END LOOP;

Both tables can be analyzed, and the result is not surprising: in both tables 120 blocks are used, and the average row length is 15 bytes, regardless of the specified maximum size. Some overhead bytes are used to store the length of the entry and for row information.

NUMBER

The analysis for VARCHAR2 was not surprising. But how does Oracle store numbers? Does NUMBER(2) need less, equal, or more storage than NUMBER(15) or even NUMBER? Let's do the same steps as above to analyze the result. Three tables are defined: one with NUMBER(2) only, one with the maximum precision but filled with the same values as the first table, and one with the maximum precision filled with very big numbers.

CREATE TABLE num2num2 (num2 NUMBER(2));

Again, 25000 records are inserted into every table:

FOR i IN 1..25000 LOOP
  INSERT INTO num2num2 VALUES (99);  -- a two-digit value
END LOOP;

The result of the analysis: both tables num2num2 and numXnum2 (filled with the same values) use 80 blocks and have an average record length of 6 bytes. Only the storage of very big numbers needs more blocks and increases the average record length, up to 24 bytes. Some overhead bytes are used to store the length (one byte) and for row information.

Internal Format of the Oracle Data Type NUMBER

Positive numbers are stored as an exponent byte followed by the mantissa, one byte per base-100 digit, each incremented by one:
E.g. the number 345 will be split into <3><45> and stored within three bytes as follows: <194><4><46>.

So the value of this positive number is calculated as 0.0345 multiplied by 10000: the exponent byte 194 stands for 100^2 = 10000, and the mantissa bytes <4><46> encode the base-100 digits 3 and 45, each incremented by one. Negative numbers are stored and calculated in a similar way.

Summary

Oracle has an efficient way to store strings and numbers of variable length. There is no reason to shorten the declared length: no disk space is wasted, because Oracle takes only as much space as is really needed.

Thanks to Oracle Software (Schweiz) for this information.
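The byte layout described above can be checked from SQL*Plus with the built-in DUMP function, which shows the internal representation of any value:

```sql
-- Typ=2 is the internal NUMBER type, Len is the stored length
-- in bytes, followed by the byte values themselves
SELECT DUMP(345) FROM dual;

-- DUMP(345)
-- ---------------------
-- Typ=2 Len=3: 194,4,46
```

The three bytes match the <194><4><46> encoding explained above; trying the same with DUMP(-345) or DUMP(0.0345) shows how exponent and mantissa bytes change.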