Notes - Oracle Tables

  

These are my personal notes that I use as a quick help in my work.
You are welcome to read them.

Tables

 

CREATE TABLE table
    (column type,
     column type)
  [ STORAGE (INITIAL nK NEXT nK PCTINCREASE 0 MINEXTENTS n MAXEXTENTS n) ]
  [ PCTFREE n ] [ PCTUSED n ]
  [ INITRANS n ] [ MAXTRANS n ] -- Not of much use
  [ LOGGING | NOLOGGING ] -- Redo log usage; with NOLOGGING, changes are not written to the redo log and cannot be recovered from it
  [ CACHE | NOCACHE ] -- Force blocks to stay in the buffer cache of the SGA
  TABLESPACE tablespace;

Create a table as a copy of another (no column definitions needed):
CREATE TABLE new_table /* all other details*/
  TABLESPACE tablespace
  AS SELECT * FROM source_table/view ;

Alternate approach for large tables:
CREATE TABLE new_table AS SELECT * FROM source_table WHERE 0=1;
ALTER TABLE new_table NOLOGGING;
INSERT /*+ APPEND */ INTO new_table SELECT /*+ PARALLEL(A,6) */ * FROM source_table A;
COMMIT;

ALTER TABLE table PCTFREE n PCTUSED n STORAGE (storage clause);

ALTER TABLE table ALLOCATE EXTENT [ ( [ SIZE nK ] DATAFILE 'file') ] ;

ALTER TABLE table DEALLOCATE UNUSED [ KEEP n M|K ] ; -- Use KEEP 0 to release space even below MINEXTENTS

TRUNCATE TABLE table [ { DROP | REUSE } STORAGE ] ;
-- Does not fire any DELETE triggers!
-- Use REUSE to keep all the allocated storage
-- Use DROP (the default) to free storage (extra extents down to MINEXTENTS, index space, ...)

DROP TABLE table;

DROP TABLE table_name CASCADE CONSTRAINTS;

ANALYZE TABLE table { COMPUTE | ESTIMATE } STATISTICS ;
SELECT owner, table_name, num_rows, blocks, empty_blocks, chain_cnt FROM dba_tables
  WHERE table_name = upper('table') AND owner = upper('owner');
-- blocks = used blocks, empty_blocks = allocated but never-used blocks; remember that one block is taken by the segment header.

BEGIN
    -- method: 'ESTIMATE' (here with estimate_percent => 5) or 'COMPUTE'
    dbms_utility.analyze_schema( upper( '&the_owner' ), 'ESTIMATE', NULL, 5);
END;
/
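
ANALYZE still works, but on recent versions the supported way to gather optimizer statistics is DBMS_STATS; a minimal sketch (owner and table names are placeholders):
exec dbms_stats.gather_table_stats(ownname => 'THE_OWNER', tabname => 'THE_TABLE');
exec dbms_stats.gather_schema_stats(ownname => 'THE_OWNER');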

Row_id: get file number and block number: dbms_rowid.rowid_relative_fno(rowid) , dbms_rowid.rowid_block_number(rowid)
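
For example, a sketch (the_table is a placeholder) showing how rows are spread over data files and blocks:
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no,
       count(*) AS rows_in_block
  FROM the_table
 GROUP BY dbms_rowid.rowid_relative_fno(rowid), dbms_rowid.rowid_block_number(rowid);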

 

create table a_new_table
(fiscal_year number(4) not null check (fiscal_year between 1990 and 2009)
,gl_period number(2) not null check (gl_period between 1 and 13));

 

ALTER TABLE the_table ADD (the_field VARCHAR2(10) DEFAULT 'xyz' NOT NULL);
Add a field
ALTER TABLE the_table MODIFY ( the_field DEFAULT NULL);
Remove the default value for the field; note that the NULL refers to the default value, not to NULL/NOT NULL.
ALTER TABLE the_table MODIFY ( the_field new_data_type);
Modify the data type.
ALTER TABLE the_table ADD (CONSTRAINT tab_FK FOREIGN KEY (a1, b1) REFERENCES another_table (a2, b2) [DISABLE]);
Add a constraint, optionally disabled at creation
ALTER TABLE the_table MODIFY (the_column NOT NULL);
Add NOT NULL constraint on a column
ALTER TABLE the_table { ENABLE | DISABLE } CONSTRAINT the_constraint;
Enable or disable the constraint
 
ALTER TABLE the_table DROP CONSTRAINT the_constraint;
Drop the constraint
DROP INDEX the_index;
Drop an index
CREATE UNIQUE INDEX tab_PK ON a_table (a, b, c) LOGGING TABLESPACE the_tbs;
ALTER TABLE a_table ADD (CONSTRAINT tab_PK PRIMARY KEY (a, b, c) USING INDEX TABLESPACE the_tbs);
alter table a_table add (constraint tab_pk primary key (a, b, c) ); -- index already created
Add a primary key in two steps (unique index, then constraint)
ALTER TABLE the_table MOVE LOGGING;
Rebuild the table, optionally moving it to another tablespace. Use this in case of chained rows. Does not work with LONG columns.

Disable all "check" constraints (these include the "not null" constraints):
begin
for cur in
  (select owner, constraint_name, table_name from all_constraints
   where owner = '...' and table_name = '...' and constraint_type = 'C')
loop
  execute immediate 'ALTER TABLE '||cur.owner||'.'||cur.table_name||' MODIFY CONSTRAINT '||cur.constraint_name||' DISABLE';
end loop;
end;
/

 

Global Temporary Table (GTT)

Temporary tables. Data is private: rows inserted by a session are visible only to that session. Data is kept for the duration of the transaction (ON COMMIT DELETE ROWS) or of the session (ON COMMIT PRESERVE ROWS). Use them for complicated processes that cannot be completed in a single pass. The changes still generate undo in the undo tablespace, although options exist to avoid that.

CREATE GLOBAL TEMPORARY TABLE a_table (....)
ON COMMIT PRESERVE ROWS;
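
The transaction-scoped variant differs only in the ON COMMIT clause; rows disappear at commit (a_work_table is a placeholder):
CREATE GLOBAL TEMPORARY TABLE a_work_table (....)
ON COMMIT DELETE ROWS;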

 

Chained Rows

-- The CHAINED_ROWS table is created by $ORACLE_HOME/rdbms/admin/utlchain.sql
analyze table TABLE_NAME list chained rows into CHAINED_ROWS;

select
    Owner_Name, /*Owner of the data segment*/
    Table_Name, /*Name of the table with the chained rows*/
    Cluster_Name, /*Name of the cluster, if it is clustered*/
    Head_RowID /*Rowid of the first part of the row*/
from CHAINED_ROWS;

 

Grants

grant select on the_table_name to user_name;
grant select on the_table_name to public;

 

Recycling Bin

SELECT * FROM RECYCLEBIN;
List what is in my recycle bin
PURGE TABLE BIN$...;
Purge just one table using the recycling bin name
PURGE TABLE the_table_name;
Purge just one table using the original table name
PURGE RECYCLEBIN;
Purge my own objects from the recycle bin and release their space
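
For reference, the recycle bin is what makes it possible to restore a dropped table:
FLASHBACK TABLE the_table_name TO BEFORE DROP [ RENAME TO new_name ];
Restore a dropped table from the recycle bin, optionally under a new name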

Materialized Views

Previously known as snapshots.

 

CREATE MATERIALIZED VIEW the_mv
  TABLESPACE DWH_DATA2
  LOGGING
  CACHE
[USING INDEX ....]
[FOR UPDATE]
REFRESH refresh_clause
[{ DISABLE | ENABLE } QUERY REWRITE]
AS SELECT ..........
/

Refresh_clause:

{ FAST | COMPLETE | FORCE }
Fast: apply only the changes recorded in the materialized view log
Complete: complete refresh of the whole view
Force (default): fast refresh if possible, otherwise complete.
{ ON DEMAND | ON COMMIT }
On commit: refresh whenever a commit occurs on the underlying tables.
On demand (default): use the dbms_mview refresh procedures
{ START WITH | NEXT } date
Start with: date/time of the first refresh
Next: interval between subsequent refreshes
{ WITH PRIMARY KEY | ROWID }
Primary key (default): uses primary key of tables
Rowid: use for backwards compatibility
USING DEFAULT { MASTER | LOCAL } ROLLBACK SEGMENT
USING         { MASTER | LOCAL } ROLLBACK SEGMENT rollback_seg
Rollback segment to be used; if none is specified, a rollback segment is chosen automatically.

Other clauses:

FOR UPDATE
Allow the materialized view to be updated (updatable materialized view)
QUERY REWRITE
Specify whether the materialized view can be used for query rewrite
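
Putting these clauses together, a sketch of a fast-refresh setup (the sales table and its columns are assumptions; FAST refresh requires a materialized view log on the master table):

CREATE MATERIALIZED VIEW LOG ON sales WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW mv_sales
  TABLESPACE DWH_DATA2
  REFRESH FAST ON DEMAND
AS SELECT sale_id, customer_id, amount FROM sales
/

exec dbms_mview.refresh('mv_sales', 'F'); -- method: 'F' = fast, 'C' = complete, '?' = force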

 

CREATE INDEX an_index ON the_mv (a_column) LOGGING NOPARALLEL;

ALTER TABLE the_mv ADD (CONSTRAINT the_mv_pk PRIMARY KEY (a_column));

GRANT SELECT ON the_mv TO ...;

Refresh view:
exec dbms_mview.refresh('view_name');

 


Sequences

CREATE SEQUENCE seq_name START WITH 1 INCREMENT BY 1 MINVALUE 1;

SELECT seq_name.NEXTVAL FROM DUAL;
update table set unique_id=seq_name.NEXTVAL WHERE unique_id IS NULL;

SELECT seq_name.CURRVAL FROM DUAL; Get current value. (Remember: no underscore "_" in NEXTVAL and CURRVAL)

DROP SEQUENCE seq_name;

You can't change the current value without dropping and re-creating the sequence.
However, you can try this trick:
select target_start_number - seq.currval from dual;
alter sequence seq increment by <the result of previous query>;
select seq.nextval from dual;
alter sequence seq increment by 1;

-- SQL*Plus: create a temporary sequence that starts just above the current max of a column
column max_key_plus_one new_value the_max_key_value_plus_one noprint
select max(a_column)+1 as max_key_plus_one from a_table ;
prompt The max value is &the_max_key_value_plus_one
create sequence a_temp_sequence start with &the_max_key_value_plus_one increment by 1;
select a_temp_sequence.nextval from dual;
select a_temp_sequence.nextval from dual;
drop sequence a_temp_sequence;

 

 


Audit

Goal: audit INSERTs, UPDATEs and DELETEs

Note that connections with administrator privileges are NOT audited in AUD$. See the audit files in $ORACLE_HOME/rdbms/audit; they show every connection as sysdba. The $ORACLE_HOME/network/log/listener.log file shows all other connections.


Steps:

Enable Auditing in Oracle

Initialization parameter AUDIT_TRAIL
AUDIT_TRAIL = {NONE | FALSE | DB | TRUE | OS}
    None or False disable auditing
    DB or TRUE enable auditing in SYS.AUD$ table
    OS enables auditing in a system file (udump directory)
Example:
alter system set audit_trail = none scope = spfile; --> then shutdown and startup

To activate auditing:
AUDIT delete table, update table, insert table;

To deactivate auditing:
NOAUDIT delete table, update table, insert table;

Results in SYS.AUD$

Useful views:
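The standard audit data-dictionary views include (descriptions from memory):
DBA_AUDIT_TRAIL      all audit trail entries
DBA_AUDIT_OBJECT     audit records concerning objects
DBA_AUDIT_SESSION    connection and disconnection records
DBA_AUDIT_STATEMENT  audit records for GRANT, REVOKE, AUDIT, NOAUDIT and ALTER SYSTEM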

 

Change the Tablespace of Table AUD$

1. Turn off auditing and restart

connect sys/<password>
alter system set audit_trail = none scope = spfile;
shutdown immediate
startup

2. Create new object and point to it

create table system.aud$
tablespace users -- <tablespace name>
as select * from aud$;

create index system.i_aud1 on system.aud$(sessionid, ses$tid);
rename aud$ to aud$_original;

create view aud$ as select * from system.aud$;

connect system/<password>
grant all on aud$ to sys with grant option;
grant delete on aud$ to delete_catalog_role;

3. Turn on auditing and re-start the database

alter system set audit_trail = DB scope = spfile;
connect sys/<password>
shutdown immediate
startup

4. Recreate the data dictionary views for auditing:

@$ORACLE_HOME/rdbms/admin/cataudit.sql

5. Enable the auditing

-- In SQL*Plus, with a user having the ALTER SYSTEM privilege, type:
connect system/<password>
AUDIT delete table , update table, insert table;
Audit delete on system.aud$ by access;
Grant all on system.aud$ to genio;

connect sys/<password>
grant select on dba_audit_trail to genio;
grant create view to genio;
Grant select on v_$instance to genio;
--Then create a view (in schema genio) that displays the audit_trail.
create view audit_trail as select * from dba_audit_trail;

6. Protect the audit trail:

Audit delete on sys.aud$ by access;

 


Information Schema Equivalents

 

The views all_tab_cols / dba_tab_cols / user_tab_cols contain all columns, including hidden and system-generated ones.
The views all_tab_columns / dba_tab_columns / user_tab_columns do not show the hidden columns.
select TABLE_NAME, COLUMN_NAME, DATA_TYPE from user_tab_cols where table_name = '&enter_table_name.' ;
select OWNER, TABLE_NAME, COLUMN_NAME,
       DATA_TYPE || ' (' || TO_CHAR(DATA_LENGTH)
         || decode(nvl(DATA_PRECISION, -1), -1, '', ', ' || to_char(DATA_PRECISION))
         || decode(nvl(DATA_SCALE, -1), -1, '', ', ' || to_char(DATA_SCALE))
         || ')' as data_type
  from all_tab_cols
 where table_name = '&enter_table_name.'
 order by column_id;
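
The same approach works for constraints and indexes; a quick sketch with the corresponding dictionary views:
select constraint_name, constraint_type, status from all_constraints where table_name = upper('&enter_table_name.');
select index_name, uniqueness, tablespace_name from all_indexes where table_name = upper('&enter_table_name.');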

 

 


Synonyms

Synonyms can be created for tables, views, functions/procedures/packages, sequences, materialized views, other synonyms, java classes, and types.

I can use another schema's synonyms by prefixing with the schema name.

CREATE SYNONYM a_schema.xyz FOR another_schema.xzy;
Create a private synonym for someone else
CREATE PUBLIC SYNONYM an_object FOR a_schema.an_object;
Create a public synonym for everyone
CREATE SYNONYM an_object_distinct_name FOR my_self.an_object;
Create a synonym for my own object. Note that the synonym's name must be distinct from the object's name
DROP SYNONYM the_schema.the_synonym;
Drop private synonym
DROP PUBLIC SYNONYM the_synonym;
Drop public synonym

You can create a synonym for an object that does not exist; error ORA-00980 is raised only when the synonym is used.

 

 

 


SQL Loader

Command (here in a Windows batch file that then opens the log file):
sqlldr scott@the_sid control=loader.ctl
%SystemRoot%\system32\notepad.exe log_file_name
pause

Control file for CSV, terminator and enclosure specified per field:
OPTIONS (ERRORS=50)
LOAD DATA
  INFILE 'the_file.csv'
  TRUNCATE INTO TABLE a_table_name
  TRAILING NULLCOLS
("field1" CHAR TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
"field2" CHAR TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
)

 

Control file for CSV, FIELDS clause applied to all fields:
OPTIONS (ERRORS=50)
LOAD DATA
  INFILE 'the_file.csv'
  TRUNCATE INTO TABLE a_table_name
  TRAILING NULLCOLS
  FIELDS TERMINATED BY "~" OPTIONALLY ENCLOSED BY 'WHATEVER' (field_name1, fieldname2, ...)

The INFILE filename need not be enclosed in double quotes.

Other options:
APPEND instead of TRUNCATE (add rows instead of replacing the table contents)
