Install > Oracle setup for the model repository > Database monitoring strategy

Database monitoring strategy

Since the Model Repository is a critical component of SA, the DBA should implement a monitoring strategy. The DBA can write custom monitoring scripts or use third-party products.

This section contains example commands for monitoring the Oracle database used by the Model Repository. When issuing the commands shown in this section, you must be logged on to the server as the user oracle:

$ su - oracle

The SQL commands shown in this section are entered in the sqlplus command-line utility. To run sqlplus, log on as oracle and enter the following command:

$ sqlplus "/ as sysdba"

Verifying if the database instances are up and responding

To verify if the database instances are up and running:

  1. Enter the following command to check if the Oracle processes are running:

    ps -ef | grep ora_

    This ps command should generate output similar to the following lines:

    oracle   14674     1  0 Apr18 ?        00:00:00 ora_pmon_truth
    oracle   14676     1  0 Apr18 ?        00:00:00 ora_psp0_truth
    oracle   14678     1  0 Apr18 ?        00:00:00 ora_vktm_truth
    oracle   14682     1  0 Apr18 ?        00:00:00 ora_gen0_truth
    oracle   14684     1  0 Apr18 ?        00:00:00 ora_diag_truth
    oracle   14686     1  0 Apr18 ?        00:00:00 ora_dbrm_truth
    oracle   14688     1  0 Apr18 ?        00:05:57 ora_dia0_truth
    oracle   14690     1  0 Apr18 ?        00:00:00 ora_mman_truth
    oracle   14692     1  0 Apr18 ?        00:00:00 ora_dbw0_truth
    oracle   14694     1  0 Apr18 ?        00:00:01 ora_lgwr_truth
    oracle   14696     1  0 Apr18 ?        00:00:28 ora_ckpt_truth
    oracle   14698     1  0 Apr18 ?        00:00:04 ora_smon_truth
    oracle   14700     1  0 Apr18 ?        00:00:00 ora_reco_truth
    oracle   14702     1  0 Apr18 ?        00:00:13 ora_mmon_truth
    oracle   14704     1  0 Apr18 ?        00:00:13 ora_mmnl_truth
    oracle   14728     1  0 Apr18 ?        00:00:00 ora_qmnc_truth
    oracle   14775     1  0 Apr18 ?        00:00:01 ora_cjq0_truth
    oracle   14779     1  0 Apr18 ?        00:00:00 ora_q000_truth
    oracle   14781     1  0 Apr18 ?        00:00:00 ora_q001_truth
    oracle   14832     1  0 Apr18 ?        00:00:00 ora_smco_truth
    oracle   22619     1  0 22:38 ?        00:00:00 ora_w000_truth
  2. Verify if the database status is ACTIVE by entering the following command in sqlplus:

    SQL>select database_status from v$instance;

  3. Verify if the open mode is READ WRITE by entering the following command in sqlplus:

    SQL>select name, log_mode, open_mode from v$database;
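For unattended monitoring, the process check in step 1 can be wrapped in a small shell function. The following is a minimal sketch, not part of SA; the SID truth and the ps output format are assumptions based on the example above:

```shell
# Return success if the PMON background process for the given SID
# appears in the ps output supplied on stdin.
pmon_running() {
  grep -q "ora_pmon_$1"
}

# Typical use on a live system (assumes the instance SID is truth):
#   ps -ef | pmon_running truth || echo "ALERT: instance truth is down"
```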

Verifying if the data files are online

In SQL*Plus, enter the following commands to verify that the data files are online:

SQL>Col file_name format a50
SQL>Col status format a10
SQL>Set line 200
SQL>Select file_id, status, bytes, file_name from dba_data_files order by tablespace_name;

The status should be AVAILABLE for all the data files.
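If you script this check, the sqlplus output can be piped through a filter that reports only data files whose status is not AVAILABLE. This is a sketch that assumes the column order of the query above (file_id, status, bytes, file_name):

```shell
# Print any data file whose STATUS column (second field) is not
# AVAILABLE; heading and separator lines are skipped because their
# first field is not numeric.
check_datafiles() {
  awk 'NF >= 4 && $1 ~ /^[0-9]+$/ && $2 != "AVAILABLE" { print "NOT AVAILABLE:", $4 }'
}
```

For example, piping the output of the query above through check_datafiles prints one line per problem file and nothing when all files are AVAILABLE.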

Verifying if the listener is running

To verify if the listener is running:

  1. Check to see if the Oracle listener processes are running by entering the following command:
    ps -ef | grep tns
     
    oracle   11664     1  0 Mar22 ?        00:08:05 /u01/app/oracle/product/12.1.0/db_1/bin/tnslsnr LISTENER -inherit
    oracle   22725 22706  0 22:44 pts/2    00:00:00 grep tns
  2. Check the status of the listener with the lsnrctl command:

    lsnrctl status

    The listener should be listening with the TCP protocol on port 1521 (the default), or on the port that you have designated for the Oracle listener process, and it should be handling the instance named truth. The lsnrctl command should generate output similar to the following lines:

    . . .

    Connecting to (ADDRESS=(PROTOCOL=tcp)
    (HOST=per1.performance.qa.example.com)(PORT=1521))
    . . .
    Instance "truth", status READY, has 1 handler(s) for this service...
  3. Test connectivity to the instance from the Data Access Engine (spin) and Web Services Data Access Engine (twist) hosts by running the tnsping utility:

    tnsping truth

    The OK statement displayed by the tnsping utility confirms that the listener is up and can connect to the instance. The tnsping utility should generate output similar to the following lines:

    . . .

    Used parameter files:
     
    Used HOSTNAME adapter to resolve the alias
    Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=truth.performance.qa.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.165.178)(PORT=1521)))
    OK (0 msec)
    Attempting to contact (DESCRIPTION=(ADDRESS=(HOST=localhost)(PORT=1521)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=truth)))
    OK (0 msec)

    As an alternative to running the tnsping utility in this step, you can check the connectivity by running sqlplus and connecting to the database instance with the service name (TNS alias), for example:

    sqlplus myuser/mypass@truth
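For scripted monitoring, a wrapper can scan the tnsping output for the OK line; anything else indicates a connectivity problem. A minimal sketch (the alias truth is an example from this section, not a requirement):

```shell
# Return success if the tnsping output supplied on stdin contains an
# OK line, such as "OK (0 msec)".
tns_ok() {
  grep -q '^OK ('
}

# Typical use (alias names are examples):
#   tnsping truth | tns_ok || echo "ALERT: tnsping failed for truth"
```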

Examining the log files

To examine the log files:

  1. Look for errors in the alert_<SID>.log file.

    For each instance, locate the alert_<SID>.log file in the trace directory under the diagnostic destination:

    $ORACLE_BASE/diag/rdbms/<SID>/<SID>/trace/

    This is an example trace directory for an instance with the truth SID:

    /u01/app/oracle/diag/rdbms/truth/truth/trace/

  2. Look for errors in the other log and trace files, located in various directories under:

    $ORACLE_BASE/diag/rdbms/<SID>/<SID>
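A simple way to script the log check is to scan the tail of the alert log for ORA- error codes. This sketch is an illustrative example, not an SA tool; the path in the comment matches the truth example above, and the 1000-line window is an arbitrary choice:

```shell
# Scan the last 1000 lines of an alert log for ORA- error codes.
# Prints the matching lines, or a short note if none are found.
scan_alert_log() {
  log="$1"  # e.g. /u01/app/oracle/diag/rdbms/truth/truth/trace/alert_truth.log
  tail -n 1000 "$log" | grep 'ORA-' || echo "no recent ORA- errors in $log"
}
```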

Checking for sufficient free disk space in tablespaces

To check for sufficient disk space:

  1. Enter the following commands in sqlplus:
    SQL>set line 200
    SQL>column dummy noprint
    SQL>column pct_used format 999.9 heading "Pct|Used"
    SQL>column name format a16 heading "Tablespace Name"
    SQL>column mbytes format 999,999,999 heading "Current|File Size|MB"
    SQL>column used format 999,999,999 heading "Used MB "
    SQL>column free format 999,999,999 heading "Free MB"
    SQL>column largest format 999,999,999 heading "Largest|Contigous|MB"
    SQL>column max_size format 999,999,999 heading "Max Possible|MB"
    SQL>column pct_max_used format 999.999 heading "Pct|Max|Used"
    SQL>break on report
    SQL>compute sum of Mbytes on report
    SQL>compute sum of free on report
    SQL>compute sum of used on report

     

    SQL>SELECT
        nvl(df.tablespace_name,'UNKNOWN') name, df.mbytes_alloc Mbytes, 
        df.mbytes_alloc-nvl(fs.mbytes_free,0) used, nvl(fs.mbytes_free,0) free, 
        ((df.mbytes_alloc-nvl(fs.mbytes_free,0)) / df.mbytes_alloc) * 100 pct_used,
        nvl(df.largest,0) largest, nvl(df.mbytes_max,df.mbytes_alloc) Max_Size,
        ((df.mbytes_alloc-nvl(fs.mbytes_free,0)) / df.mbytes_max) * 100 pct_max_used
    FROM
        (   SELECT tablespace_name, sum(bytes)/1024/1024 Mbytes_alloc, max(bytes)/1024/1024 largest, 
                sum(decode(autoextensible,'YES',greatest(bytes,maxbytes),bytes))/1024/1024 Mbytes_max 
            FROM   
                dba_data_files GROUP BY tablespace_name 
        ) df,
        ( SELECT tablespace_name, sum(bytes)/1024/1024 Mbytes_free 
            FROM dba_free_space GROUP BY tablespace_name 
        ) fs
    WHERE 
        df.tablespace_name = fs.tablespace_name(+)
    UNION 
    SELECT 
        D.tablespace_name name, D.mbytes_alloc Mbytes, ((ss.used_blocks * F.block_size) / 1024 / 1024) used,
        D.mbytes_alloc - ((ss.used_blocks * F.block_size) / 1024 / 1024) free, 
        ((D.mbytes_alloc-nvl((D.mbytes_alloc - ((ss.used_blocks * F.block_size) / 1024 / 1024)),0)) / D.mbytes_alloc) * 100 pct_used,
        nvl(((G.max_blocks * F.block_size) / 1024 / 1024),0) largest, Max_Mbytes Max_Size,
        ((D.mbytes_alloc-nvl((D.mbytes_alloc - ((ss.used_blocks * F.block_size) / 1024 / 1024)),0)) / D.Max_Mbytes) * 100 pct_max_used
    FROM 
        (   SELECT tablespace_name, used_blocks, free_blocks, max_size 
            FROM v$sort_segment 
        ) ss,
        (   SELECT tablespace_name, sum(bytes)/1024/1024 Mbytes_alloc, 
                sum(decode(autoextensible,'YES',greatest(bytes,maxbytes),bytes))/1024/1024 Max_Mbytes
            FROM dba_temp_files GROUP BY tablespace_name 
        ) D,
        (   SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total 
            FROM v$tablespace B, v$tempfile C
            WHERE B.ts#= C.ts# GROUP BY B.name, C.block_size 
        ) F,
        (   SELECT B.name, max(blocks) max_blocks, sum(blocks) total_blocks 
            FROM v$tablespace B, v$tempfile C
            WHERE B.ts#= C.ts# GROUP BY B.name 
        ) G

    WHERE ss.tablespace_name = D.tablespace_name and ss.tablespace_name = F.name and ss.tablespace_name = G.name;

    In the output generated by the preceding commands, compare the numbers under the Used and Free headings.

  2. To list the existing data, index, and temporary files, enter the following commands in sqlplus:

    SQL>Select file_id, bytes, file_name from dba_data_files;

  3. If a tablespace has auto-extended to its maximum size and is running out of disk space, then add new data files by entering the ALTER TABLESPACE command in sqlplus.

    The following example commands add data files to four of the tablespaces. For a full list of tablespaces and data files, see the output generated by the commands in the preceding two steps.

    SQL>ALTER TABLESPACE AAA_DATA
        ADD DATAFILE '/u01/oradata/truth/aaa_data10.dbf'
        SIZE 32M AUTOEXTEND ON NEXT 128M MAXSIZE 4000M;

     

    SQL>ALTER TABLESPACE "AAA_INDX"
        ADD DATAFILE '/u02/oradata/truth/aaa_indx11.dbf'
        SIZE 32M AUTOEXTEND ON NEXT 128M MAXSIZE 4000M;

     

    SQL>ALTER TABLESPACE "UNDO"
        ADD DATAFILE '/u03/oradata/truth/undo12.dbf' SIZE 32M AUTOEXTEND ON NEXT 128M MAXSIZE 4000M;

     

    SQL>ALTER TABLESPACE "TEMP" ADD
        TEMPFILE '/u04/oradata/truth/temp14.dbf' SIZE 32M AUTOEXTEND ON NEXT 128M MAXSIZE 4000M;
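To turn the free-space report above into an alert, the output can be filtered on the Pct Used column. This sketch assumes the column formats set in step 1, which place pct_used in the fifth whitespace-separated field; the 90% threshold is an example value:

```shell
# Flag tablespaces whose Pct Used value (fifth field of the report)
# exceeds a threshold (default 90). Heading lines are skipped
# because field 5 is not a plain number there.
flag_full_tablespaces() {
  threshold="${1:-90}"
  awk -v t="$threshold" 'NF >= 5 && $5 ~ /^[0-9.]+$/ && $5+0 > t {
    print "WARNING:", $1, "is", $5 "% used"
  }'
}
```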

Enabling the collection of Oracle Automatic Optimizer statistics

As of SA 10.0, schema and index statistics collection for the SA database users (AAA, TRUTH, and so on) has been moved from dba_jobs to Oracle's Automatic Optimizer Statistics Collection.

SA relies on Oracle’s Automatic Optimizer statistics collection to collect schema statistics used to avoid database performance degradation. By default, Oracle’s Automatic Optimizer statistics collection should be enabled.

To verify if the Oracle Automatic Optimizer statistics collection is enabled:

  1. Enter the following commands in SQL*Plus:
    $ su - oracle
    $ sqlplus "/ as sysdba"
     
    SQL>set line 200
    SQL>col status format a10
    SQL>SELECT status FROM dba_autotask_client where client_name='auto optimizer stats collection';
    The output from the above statement should be as follows:
    STATUS
    ----------
    ENABLED
  2. If the status is not ENABLED, execute the following statement to enable Oracle’s Automatic Optimizer statistics collection.

    SQL>EXEC DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'auto optimizer stats collection',operation => NULL, window_name => NULL);

Verifying if the database jobs (System/Index statistics and garbage collection) ran successfully

When the Model Repository is installed, the SA Installer sets up the System/Index Statistics and Garbage Collection jobs in Oracle's dba_scheduler_jobs, which runs these jobs at specified intervals. The jobs perform system/index statistics collection and garbage collection. If the system/index statistics collection jobs do not run successfully, database performance degrades. If the garbage collection jobs do not run, old data accumulates and requires additional disk space, and performance can also be affected.

To verify if the jobs in DBA_SCHEDULER_JOBS ran successfully:

  1. Enter the following commands in SQL*Plus:
    SQL>set line 200
    SQL>col job_name format a50
    SQL>col owner format a14
    SQL>col last format a17
    SQL>col next format a17
    SQL>col state format a10
    SQL>col job_action format a50
     
    SQL>select job_name, owner, to_char(LAST_START_DATE, 'MM/DD/YY HH:MI:SS')
    last, to_char(next_run_date, 'MM/DD/YY HH:MI:SS') next, state, job_action
    from dba_scheduler_jobs where owner in ('OPSWARE_ADMIN', 'LCREP', 'GCADMIN');

    In the output generated from the preceding statement, the value of the JOB_ACTION column indicates the type of job. The jobs owned by GCADMIN perform the garbage collection. The job owned by LCREP performs index statistics collection and the job owned by OPSWARE_ADMIN performs system statistics collection. Sample output looks like this:

    JOB_NAME             OWNER         LAST              NEXT              STATE      JOB_ACTION
    -------------------- ------------- ----------------- ----------------- ---------- ---------------------------------
    WLMPURGE_GC          GCADMIN       04/03/12 09:00:00 04/04/12 09:00:00 SCHEDULED  WLMPURGE.GC_JOBS
    STORAGEINITIATOR     GCADMIN       04/03/12 09:00:00 04/02/12 09:47:30 SCHEDULED  STORAGEINITIATORPURGE.GC_
    PURGE_GC                                                                          STORAGEINITIATORS
    AUDITPURGE_GC        GCADMIN       04/03/12 09:00:00 04/04/12 09:00:00 SCHEDULED  AUDITPURGE.GC_AUDITLOGS
    CHANGELOGPURGE_GC    GCADMIN       04/03/12 09:00:00 04/04/12 09:00:00 SCHEDULED  CHANGELOGPURGE.GC_CHANGELOGS
    WAYPURGE_GC          GCADMIN       04/03/12 09:00:00 04/04/12 09:00:00 SCHEDULED  WAYPURGE.GC_SESSIONS
    LCREP_INDEX_STATS    LCREP         04/02/12 11:00:00 04/03/12 11:00:00 SCHEDULED  gather_lcrep_stats
    OPSWARE_ADMIN_SYSTEM OPSWARE_ADMIN 04/02/12 06:00:00 04/03/12 06:00:00 SCHEDULED  gather_opsware_admin_sys_stats
    _STATS
     
    7 rows selected.

    where:

    • JOB_NAME - name of the job
    • OWNER - the user with permissions to run the job
    • LAST - last date-time when the job was run
    • NEXT - next date the job will run
    • STATE - The status of the scheduled job:

      • disabled - The job is disabled
      • scheduled - The job is scheduled to be executed
      • running - The job is currently running
      • completed - The job has completed, and is not scheduled to run again
      • broken - The job is broken
      • failed - The job was scheduled to run once and failed
      • retry scheduled - The job has failed at least once and a retry has been scheduled to be executed
      • succeeded - The job was scheduled to run once and completed successfully
    • JOB_ACTION - the procedure that the job runs
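The job check can also be scripted by flagging any job whose STATE is not SCHEDULED or RUNNING. This sketch assumes the column layout of the query above, where each of the two date-time columns splits into two whitespace-separated fields, putting STATE in the seventh field (wrapped continuation rows in the sample output are simply skipped):

```shell
# Flag scheduler jobs whose STATE (seventh field) is neither
# SCHEDULED nor RUNNING.
check_job_states() {
  awk 'NF >= 8 && $7 != "SCHEDULED" && $7 != "RUNNING" {
    print "ALERT: job", $1, "state is", $7
  }'
}
```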

Changes to the database statistics job

Starting with Oracle 10g, the DBMS_JOB package was superseded by the improved Oracle Scheduler (DBMS_SCHEDULER) package. Although Oracle still supports the DBMS_JOB package for backward compatibility, Oracle will make no further enhancements to it. Because DBMS_SCHEDULER provides better functionality, all the SA jobs that used the DBMS_JOB package have been redesigned in this release to use the DBMS_SCHEDULER package. The affected jobs can be found in the dba_scheduler_jobs table. These changes are relevant only to new SA 10.x Cores and cores upgraded to SA 10.x.

To view the jobs and changes made, you can run the following from SQL*Plus:

$ su - oracle
$ sqlplus "/ as sysdba"
SQL>set line 200
SQL>col owner format a14
SQL>col job_action format a50
SQL>col job_name format a50
SQL>select job_name, owner, job_action from dba_scheduler_jobs where owner in
('OPSWARE_ADMIN', 'LCREP', 'GCADMIN');

Your output should be as follows:

JOB_NAME                                    OWNER          JOB_ACTION
------------------------------------------- -------------- --------------------------
WLMPURGE_GC                                 GCADMIN        WLMPURGE.GC_JOBS
STORAGEINITIATORPURGE_GC                    GCADMIN        STORAGEINITIATORPURGE.GC_
STORAGEINITIATORS
AUDITPURGE_GC                               GCADMIN        AUDITPURGE.GC_AUDITLOGS
CHANGELOGPURGE_GC                           GCADMIN        CHANGELOGPURGE.GC_
CHANGELOGS
WAYPURGE_GC                                 GCADMIN        WAYPURGE.GC_SESSIONS
LCREP_INDEX_STATS                           LCREP          gather_lcrep_stats
OPSWARE_ADMIN_SYSTEM_STATS                  OPSWARE_ADMIN  gather_opsware_admin_
                                                           sys_stats
 
7 rows selected.

Running dba_scheduler_jobs manually

If you need to run the System/Index Statistics and the Garbage Collection jobs manually, you must first grant the following privilege.

SQL> grant create session to lcrep, gcadmin;

To run the statistics collection jobs manually in SQL*Plus, use the commands shown below. If you copy and paste the following command examples, replace variables such as <schema_user_value> with a value from the owner column displayed by the preceding select statement, and replace variables such as <job_name_value> with the corresponding value from the job_name column.

SQL> connect <schema_user_value>/<password>
SQL> exec dbms_scheduler.run_job('<job_name_value>');

After you are done running the jobs, you should revoke the privileges granted above. Log in to SQL*Plus and enter the following command:

SQL> revoke create session from lcrep, gcadmin;
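If you run these jobs manually often, the connect-and-run commands can be generated by a small helper. This is an illustrative sketch only, not an SA utility; the owner and job names come from the dba_scheduler_jobs query above, and the <password> placeholder is intentionally left for you to supply:

```shell
# Print the SQL*Plus commands needed to run one scheduler job
# manually as its owning user.
manual_run_sql() {
  owner="$1"; job="$2"
  printf 'connect %s/<password>\n' "$owner"
  printf "exec dbms_scheduler.run_job('%s');\n" "$job"
}

# Example: manual_run_sql LCREP LCREP_INDEX_STATS
```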

Changing the time jobs are run

Jobs in dba_scheduler_jobs run at UTC time. To change the time when the jobs are run, follow these instructions:

sqlplus "/ as sysdba"
SQL>set line 300
SQL>col job_name format a30
SQL>col owner format a14
SQL>col last format a17
SQL>col next format a17
SQL>col repeat_interval format a40
SQL>col job_action format a30
 
SQL>select job_name, owner, to_char(LAST_START_DATE, 'MM/DD/YY HH:MI:SS') last, to_char(next_run_date, 'MM/DD/YY HH:MI:SS') next, repeat_interval, job_action from dba_scheduler_jobs where owner in ('OPSWARE_ADMIN', 'LCREP', 'GCADMIN');
The above statement provides information about a job. Note the job name and the owner that has the privilege to run this job.

The output of the above statement is similar to the following (formatting is compressed due to space limitations):

JOB_NAME                      OWNER     LAST              NEXT              REPEAT_INTERVAL        JOB_ACTION
------------------------------ -------  ----------------- ----------------- ---------------------- --------------
WLMPURGE_GC                    GCADMIN  04/02/12 09:00:02 04/04/12 09:00:00 TRUNC(SYSDATE+1)+
                                                                            9/24                   WLMPURGE.GC_JOBS
STORAGEINITIATORPURGE_GC       GCADMIN  04/02/12 09:47:30 04/03/12 10:47:30 SYSDATE+1/24           STORAGE
                                                                                                   INITIATOR
                                                                                                   PURGE.GC_
                                                                                                   STORAGE
                                                                                                   INITIATORS
AUDITPURGE_GC                  GCADMIN  04/02/12 09:00:02 04/04/12 09:00:00 TRUNC(SYSDATE+1)+9/24  AUDITPURGE.GC_
                                                                                                   AUDITLOGS
CHANGELOGPURGE_GC              GCADMIN  04/02/12 09:00:02 04/04/12 09:00:00 TRUNC(SYSDATE+1)+9/24  CHANGELOGPURGE.
                                                                                                   GC_CHANGELOGS
WAYPURGE_GC                    GCADMIN  04/02/12 09:00:02 04/04/12 09:00:00 TRUNC(SYSDATE+1)+9/24  WAYPURGE.GC_
                                                                                                   SESSIONS
LCREP_INDEX_STATS              LCREP    04/01/12 11:00:04 04/03/12 11:00:00 TRUNC(SYSDATE+2)+11/24 gather_lcrep_
                                                                                                   stats
OPSWARE_ADMIN_SYSTEM_STATS     OPSWARE  04/02/12 06:00:01 04/03/12 06:00:00 TRUNC(SYSDATE+1) +
                               _ADMIN                                       18/24 + mod(abs(to_    gather_opsware_
                                                                            number(to_char(        admin_sys_stats
                                                                            sysdate + 1,'D'))
                                                                            - 7) + 2,7)
 
7 rows selected.

In this example, the user lcrep changes the time/interval at which the job runs. Any other job owner can be substituted for the user lcrep.

sqlplus "/ as sysdba"
SQL> grant create session to lcrep;
Grant succeeded.

In the example:

job name=LCREP_INDEX_STATS
owner = lcrep

In this example, the job LCREP_INDEX_STATS runs at 11:00 a.m. UTC. To change this to 9:00 a.m. UTC, the command is:

SQL> connect lcrep/<password_for_lcrep>
Connected.

 

SQL> exec dbms_scheduler.set_attribute('LCREP_INDEX_STATS',
attribute=>'REPEAT_INTERVAL', value=>'TRUNC(SYSDATE+2)+9/24');

Monitoring database users

To monitor database users:

  1. To check the database users, enter the following command in sqlplus:
    $ su - oracle
    $ sqlplus "/ as sysdba"
    SQL>Select username, account_status, default_tablespace,
    temporary_tablespace from dba_users;

Monitoring the ERROR_INTERNAL_MSG table

Various SA internal PL/SQL procedures write exceptions to the truth.ERROR_INTERNAL_MSG table. You should monitor this table for errors (daily checks are recommended) on all Model Repository (Oracle) databases.

Executing the SQL below lists the data in error_internal_msg from the last fifteen days.

Note: You can remove the WHERE clause if you want to display all data in the truth.ERROR_INTERNAL_MSG table.
$ su - oracle
$ sqlplus "/ as sysdba"
SQL> set line 200
SQL> col ERR_ID format 999999
SQL> col ERR_USER format a8
SQL> col ERR_TABLE format a25
SQL> col ERR_TABLE_PK_ID format a10
SQL> col ERR_CODE format 9999999
SQL> col ERR_TEXT format a20
SQL> col ERR_INFO format a30

 

SQL> select ERROR_INTERNAL_MSG_ID ERR_ID,
ERR_DATE,
ERR_USER,
ERR_TABLE,
ERR_TABLE_PK_ID,
ERR_CODE,
ERR_TEXT,
DELETE_FLG,
ERR_INFO
from ERROR_INTERNAL_MSG
where ERR_DATE > sysdate - 15
order by ERR_DATE;
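For a daily check, the report can be reduced to a row count: any value above zero means new errors need attention. This sketch, an illustrative example rather than an SA tool, counts data rows by the numeric ERR_ID in the first field, skipping headings and separator lines:

```shell
# Count data rows in the ERROR_INTERNAL_MSG report supplied on stdin.
count_error_rows() {
  awk 'NF > 0 && $1 ~ /^[0-9]+$/ { n++ } END { print n+0 }'
}

# Typical use: pipe the sqlplus output of the query above into
# count_error_rows and alert when the result is greater than zero.
```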

Rebuilding the SHADOW_FOLDER_UNIT table

The procedure SHADOW_FOLDER_UNIT_RELOAD is provided in case the contents of the SHADOW_FOLDER_UNIT table fall out of synchronization or the table contains multiple records with shadow_folder_unit.folder_id = -1.

The table can be rebuilt without stopping the system. Simply connect as user TRUTH, TWIST, SPIN, or OPSWARE_ADMIN and issue the command:

SQL>exec SHADOW_FOLDER_UNIT_UTIL.SHADOW_FOLDER_UNIT_RELOAD

Check the results from monitoring the ERROR_INTERNAL_MSG table. If the results contain rows where:

'ERR_TABLE' = 'UNIT_RELATIONSHIPS'

perform the following steps:

  1. Check if there are records in truth.SHADOW_FOLDER_UNIT of the type (folder_id = -1).
    SQL> connect / as sysdba
    SQL>  select count(*) from shadow_folder_unit where folder_id = -1;
  2. If the above SQL returns a value greater than zero, then run the following during low database usage time:
    SQL> grant create session to truth;
    SQL> connect truth/<password>
    SQL> exec SHADOW_FOLDER_UNIT_UTIL.SHADOW_FOLDER_UNIT_RELOAD;
  3. Run the SQL from Monitoring the ERROR_INTERNAL_MSG table and check whether the procedure has listed any faulty records. SHADOW_FOLDER_UNIT_UTIL.SHADOW_FOLDER_UNIT_RELOAD is idempotent, so you can fix the faulty records and rerun SHADOW_FOLDER_UNIT_UTIL.SHADOW_FOLDER_UNIT_RELOAD.

    HPE recommends that you gather table statistics after the data reload:

    SQL> connect truth/<password>
    SQL> exec dbms_stats.gather_table_stats (
                    ownname=> 'TRUTH',
                    tabname=> 'SHADOW_FOLDER_UNIT',
                    estimate_percent=> DBMS_STATS.AUTO_SAMPLE_SIZE,
                    cascade => true);
  4. Revoke the permissions given to user truth:
    SQL> connect / as sysdba
    SQL> revoke create session from truth;