
Wednesday, June 16, 2010

Auditing the database


In my previous article, Audit DDL operations in database, I explained how to implement custom DDL auditing in a database.

A much broader approach is to implement native Oracle auditing. Oracle auditing is a very powerful mechanism that can monitor almost any kind of database operation. The basic idea is to define the objects and levels of auditing; the records are then written to a plain Oracle table, aud$, which lives in the sys schema. Many official Oracle documents explain how to set up auditing, so I'll skip that part.
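As a brief reminder of that setup (a minimal sketch only; the user name scott is a placeholder, and on 10g the parameter value is spelled db_extended rather than db,extended):

```sql
-- Send the audit trail to the database (sys.aud$); 'db,extended'
-- also records the SQL text and bind values. Requires a restart.
ALTER SYSTEM SET audit_trail='db,extended' SCOPE=SPFILE;

-- After the restart, define what to audit, for example:
AUDIT CREATE SESSION;                                -- all logons
AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE BY scott;  -- one user's DML
```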

But although this is a fairly straightforward implementation, in this post I'll explain some not-so-obvious problems and ways to solve them.

Move sys.aud$ from SYSTEM tablespace

Because the sys.aud$ table receives a huge number of inserts (it may grow by 100 MB to 1 GB per day), it is advisable to place it outside the SYSTEM tablespace, where it initially resides.

This way the DBA prevents heavy fragmentation of the very vital SYSTEM tablespace.

Detailed steps for this action can be found in Oracle Metalink Note 1019377.6, "Script to move SYS.AUD$ table out of SYSTEM tablespace".
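If your version ships the DBMS_AUDIT_MGMT package (standard from 11gR2, available as a patch on some earlier releases), there is also a supported API for relocating the audit trail. A sketch, where the tablespace name AUDIT_TS is an assumption (create it beforehand):

```sql
-- Move the standard audit trail (sys.aud$) into a dedicated tablespace.
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TS');
END;
/
```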

aud$_arch table (archive table for sys.aud$ records)

Another reasonable step is to move old auditing data from sys.aud$ into an archive table outside the sys schema. This is more than recommended: it avoids holding too much data in the original sys.aud$ table, and the content of the aud$_arch table can easily be exported and then purged.

To achieve that, create a copy of original sys.aud$ table in non sys schema as:
create table tools.aud$_arch tablespace TOOLS as (select * from sys.aud$ where 1=0);
As you can see, the aud$_arch table is initially created in the tools schema. If you kept the data in sys.aud$, expdp would not be an option (Data Pump does not export sys-owned objects) and you would have to use the old-fashioned exp. Likewise, truncate, which is much faster than delete, cannot be used on sys.aud$, but it can be used on aud$_arch.

By placing the aud$_arch table in the TOOLS tablespace you achieve the same benefit as with moving sys.aud$!

This archive table (aud$_arch) is filled by an additional procedure, which should be run on a daily basis:
CREATE OR REPLACE PROCEDURE archive_aud$ (p_days number)
   authid current_user
IS
  purge_date date;
BEGIN
  purge_date := sysdate - p_days;
  -- copy records older than purge_date into the archive table
  insert into tools.aud$_arch
    select * from sys.aud$ a where a.ntimestamp# < purge_date;
  -- then remove the same records from the original audit trail
  delete from sys.aud$ where ntimestamp# < purge_date;
  commit;
EXCEPTION
  WHEN OTHERS THEN
    rollback;
    raise;  -- re-raise so the failure is not silently swallowed
END archive_aud$;
/
The p_days parameter defines how many days of data stay in sys.aud$; anything older is archived. If you run this procedure from a daily Oracle database job, p_days should be 1; otherwise define it as you need.
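The daily run can be scheduled inside the database itself. A sketch using DBMS_SCHEDULER, where the job name and the 4 a.m. run time are assumptions:

```sql
-- Create a daily job that archives audit records older than one day.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ARCHIVE_AUD_JOB',              -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN archive_aud$(1); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=4',
    enabled         => TRUE,
    comments        => 'Move audit records older than 1 day to aud$_arch');
END;
/
```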

Create directory (for expdp action)

If you use the expdp method, then you probably already know that you must define a directory where the export data will be saved:
conn / as sysdba
create or replace directory AUDITING as 'your path...';
grant read, write on directory AUDITING to tools;
I do not want to discuss why expdp is the better option for me; just note that for ordinary exp you do not need this step.

OS part

On the Linux side, the final part is a bash script whose primary task is to export the tools.aud$_arch records to a dump file and then truncate the table's content. This is done with the following script:
#!/bin/bash

DATUM=`date '+%Y%m%d'`; export DATUM
VRIJEME=`date '+%H%M%S'`; export VRIJEME

# assign Oracle environment 
. /home/oracle/PROD.env 

SQL_LOG_FILE="AUD_ARCH_${DATUM}-${VRIJEME}.log"

EXPORT_DIR=`echo "select 'xX '||DIRECTORY_PATH from dba_directories where owner='SYS' and DIRECTORY_NAME='AUDITING';" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}'`
echo "export dir=" ${EXPORT_DIR} >> ${SQL_LOG_FILE}
echo ' ' >> ${SQL_LOG_FILE}

DATE_FROM=`echo "select 'xX '||MIN(ntimestamp#) from tools.aud\\$_arch;" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}'`
echo "date from=" ${DATE_FROM} >> ${SQL_LOG_FILE}
echo ' ' >> ${SQL_LOG_FILE}

DATE_TO=`echo "select 'xX '||MAX(ntimestamp#) from tools.aud\\$_arch;" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}'`
echo "date to=" ${DATE_TO} >> ${SQL_LOG_FILE}
echo ' ' >> ${SQL_LOG_FILE}

TAR_FILE="${EXPORT_DIR}/tools_aud_arch_${DATE_FROM}-${DATE_TO}.tgz"

#Version for expdp
DUMP_FILE="tools_aud_arch_${DATE_FROM}-${DATE_TO}.dmp"
LOG_FILE="tools_aud_arch_${DATE_FROM}-${DATE_TO}.log"
expdp \"/ as sysdba\" TABLES='TOOLS.AUD$_ARCH' CONTENT=ALL DIRECTORY=AUDITING DUMPFILE=${DUMP_FILE} LOGFILE=${LOG_FILE}
retvalue=$?

echo "expdp result:" $retvalue >> ${SQL_LOG_FILE}
echo "expdp result:" $retvalue

if [ $retvalue -ne 0 ]; then
  echo "dump has errors-tar will not be performed!" >> ${SQL_LOG_FILE}
  echo "dump has errors-tar will not be performed!" 
  exit $retvalue
fi

sqlplus -s "/ as sysdba" <<EOF!
spool ${SQL_LOG_FILE} append
PROMPT truncate table aud\$_arch ...
truncate table TOOLS.AUD\$_ARCH drop storage;
PROMPT truncate complete
spool off;
exit
EOF!

tar czf ${TAR_FILE} ${EXPORT_DIR}/${LOG_FILE} ${EXPORT_DIR}/${DUMP_FILE}
retvalue=$?
if [ $retvalue -eq 0 ]; then
  chmod 444 ${TAR_FILE}
  rm -f ${EXPORT_DIR}/${LOG_FILE} ${EXPORT_DIR}/${DUMP_FILE}
  echo "Process completed successfully!"
else
  echo "tar failed with retval=$retvalue"
fi 
Two important things:
  1. Do not run when archive_aud$ database job is active
  2. Run only from one node (if using RAC)
Based on the AUDITING directory definition, the script determines all other parameters automatically. After exporting the data, it truncates the aud$_arch table and then tars the dump to reduce its size on the file system. Finally, it sets the created tar file to read-only to prevent accidental deletion or modification.

The script creates its log files in the same directory that AUDITING points to, so you can monitor its execution:
export dir= /oracle/export/sys_aud
 
date from= 19.05.10
 
date to= 14.06.10
 
expdp result: 0
truncate table aud$_arch ...

Table truncated.

truncate complete
Keep in mind that "expdp result: 0" means that the expdp command finished successfully; it is not the number of exported records!

To automate the whole process, schedule its execution in crontab with a 14-day window policy:
# export tools.aud$_arch table to file and purging records
0 4 1,15 * * /oracle/export/sys_aud/export_aud_arch.sh
Because the exports may initially be quite large (roughly 5-10 GB or even more, depending on the auditing level and the window period), it is recommended to ensure sufficient storage for the operation.

Import

Later, if you want to analyze some suspicious activity in the database, you can easily import data back into the aud$_arch table with impdp. Do not forget to export the current aud$_arch data to a file first (by manually running export_aud_arch.sh) before importing anything into the table.
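A minimal sketch of such an import, reusing the AUDITING directory from above (the dump file name is illustrative; use the archive you actually want to analyze):

```shell
# TABLE_EXISTS_ACTION=APPEND adds the rows to whatever is already in aud$_arch
impdp \"/ as sysdba\" DIRECTORY=AUDITING \
      DUMPFILE=tools_aud_arch_19.05.10-14.06.10.dmp \
      TABLES=TOOLS.AUD\$_ARCH TABLE_EXISTS_ACTION=APPEND
```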

The End

For those who want more advanced auditing features, see Oracle Metalink Note 731908.1, "New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information".

Cheers!

2 comments :

  1. Hi Damir,

    great posts about managing audit data.

    Very useful - nice work ;)

    Regards,
    Marko

  2. Marko,

    THX.

    Glad to find it useful...
    Damir


Zagreb in my heart!

Copyright © 2009-2018 Damir Vadas

All rights reserved.
