Nov 29, 2008

About PGA

Introduction to PGA management
Almost every Oracle professional agrees that the old-fashioned sort_area_size and hash_area_size parameters imposed a cumbersome one-size-fits-all approach to sorting and hash joins. Different tasks require different RAM areas, and the trick has been to allow "enough" PGA RAM for sorting and hash joins without having any high-resource task "hog" all of the PGA, to the exclusion of other users.

Oracle9i introduced the pga_aggregate_target parameter to fix this resource issue, and by and large, pga_aggregate_target works very well for most systems. You can check your overall PGA usage with the v$pga_target_advice advisory view or a STATSPACK or AWR report. High counts of multi-pass executions, high disk sorts, or few hash join invocations might indicate that the PGA regions are undersized.
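For example, a quick look at the advisory might use a query like this (a sketch against the 9i/10g v$pga_target_advice view; the output values are entirely system-specific):

```sql
-- Estimated PGA cache hit percentage at various candidate target sizes
select round(pga_target_for_estimate/1024/1024) as target_mb,
       estd_pga_cache_hit_percentage            as est_hit_pct,
       estd_overalloc_count                     as est_overalloc
from   v$pga_target_advice
order  by pga_target_for_estimate;
```

A target size where the estimated hit percentage flattens out and the over-allocation count drops to zero is usually a reasonable setting.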

Let's take a look at the issues surrounding the hidden limits of pga_aggregate_target.

Hidden parameters for Oracle PGA regions
With proper understanding (and knowing that these undocumented parameters are not supported by Oracle), you can adjust your PGA regions to allow for system-specific sorting and hash joins.

_pga_max_size - This hidden parameter defaults to 200 megabytes, regardless of the setting for pga_aggregate_target.
 
_smm_px_max_size - This parameter is used for Oracle parallel query and defaults to 30% of the pga_aggregate_target setting, divided by the degree of parallelism (as set by a PARALLEL hint, an "alter table xxx parallel" command, or the parallel_automatic_tuning initialization parameter). For example, with a 200 megabyte pga_aggregate_target setting, a DEGREE=4 parallel query would by default have a maximum sort area of 15 megabytes per session. Remember, parallel full-table scans bypass the data buffers and store the incoming rows in the PGA region, not in the data buffers (as defined by the db_cache_size parameter).
The limits of sorting and hashing
There are important limitations of pga_aggregate_target:

The total work area cannot exceed 200 megabytes of RAM because of the default setting for _pga_max_size.
 
No RAM sort may use more than 5% of pga_aggregate_target or _pga_max_size, whichever is smaller. This means that no single task may use more than 200 megabytes for sorting or hash joins. For pure sorts the algorithm further halves this limit (200/2), so the actual limit for sorts is 100 megabytes.
These restrictions were designed to ensure that no large sort or hash join hogs the PGA RAM area, but there are some tricks for optimizing the PGA. For example, the following parameter settings may be mutually exclusive:

sort_area_size=1048576 <-- sort_area_size is ignored when pga_aggregate_target is set and workarea_size_policy=auto, unless you are using a specialized feature such as the MTS. With dedicated server connections, the sort_area_size parameter is ignored.
 
pga_aggregate_target = 500m <-- The default _pga_max_size cap is 200 megabytes, and the 5% rule limits any single sort to 25 megabytes (5% of 500m).
 
mts_servers<>0 <-- If the Multi-Threaded Server is being used, the pga_aggregate_target setting is ignored in all versions except Oracle10g.
(Note: there may be some cases where sort_area_size is used in Oracle utilities, but these have not been documented, even with pga_aggregate_target.)

We also see these additional undocumented parameters:
 

Parameter Name          Description
_smm_advice_enabled     if TRUE, enable v$pga_advice
_smm_advice_log_size    overwrites default size of the PGA advice workarea history log
_smm_auto_cost_enabled  if TRUE, use the AUTO size policy cost functions
_smm_auto_max_io_size   maximum IO size (in KB) used by sort/hash-join in auto mode
_smm_auto_min_io_size   minimum IO size (in KB) used by sort/hash-join in auto mode
_smm_bound              overwrites memory manager automatically computed bound
_smm_control            provides controls on the memory manager
_smm_max_size           maximum work area size in auto mode (serial)
_smm_min_size           minimum work area size in auto mode
_smm_px_max_size        maximum work area size in auto mode (global)
_smm_trace              turn on/off tracing for SQL memory manager

WARNING - These are unsupported parameters and they should not be used unless you have tested their behavior on your own database and you are willing to accept full responsibility for any issues.
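To see the current settings of these hidden parameters, you can query the undocumented x$ fixed tables as SYS (a sketch only; the x$ksppi/x$ksppcv structures are release-specific and unsupported):

```sql
-- Run as SYS; lists the _pga and _smm hidden parameters with current values
select a.ksppinm  as parameter_name,
       b.ksppstvl as current_value,
       a.ksppdesc as description
from   x$ksppi a,
       x$ksppcv b
where  a.indx = b.indx
and    (a.ksppinm like '\_smm%' escape '\'
     or a.ksppinm like '\_pga%' escape '\')
order  by a.ksppinm;
```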

Super-size me
For certain Oracle applications the Oracle professional will want to allow individual tasks to exceed the default limits imposed by Oracle. PC-based, 64-bit Oracle servers (1 or 2 CPUs with 8 gigabytes of RAM) often have unused RAM available. For example, a fully-cached 5 gigabyte database on an 8 gigabyte dedicated Oracle server will have approximately 1 gigabyte available for the PGA (allowing 20% for the OS and other SGA regions):

O/S - 1.6 gig
SGA - 5 gig
PGA Space - 1 gig
Total - 8 gig
The system has a pga_aggregate_target setting of 1 gigabyte and the undocumented parameters at their default settings. While it is unusual for an online system to require super-sized regions for sorting (the result sets for online screens are normally small), there can be a benefit to having large RAM regions available to the Oracle optimizer.

The Oracle cost-based optimizer determines whether a hash join is preferable to a nested-loop join, so making more PGA available for hash joins has no detrimental effect: the optimizer will only invoke a super-sized hash join when it is cheaper than a nested-loop join. In a system like the example above, the following settings would increase the default sizes for large sorts and hash joins while limiting those for parallel operations.

pga_aggregate_target = 4g
_pga_max_size = 400m
_smm_px_max_size = 333m
With these hidden parameters set we see a significant size increase for serial sorts and a throttling effect for parallel queries and sorts. Mike Ault has prepared a reproducible, 230-page artificial test case demonstrating sort throttling: Validation of Sort Sizes in a Linux Oracle10g Database. Bear in mind, however, that it is only valid for a specific release of Oracle10g, on a specific hardware and OS environment, and without optional features such as the MTS.

A RAM sort or hash join may now use up to the full 200 megabytes (5% of pga_aggregate_target), a 400% increase over a 1 gigabyte pga_aggregate_target setting. With the default settings, only a 100 megabyte sort (a 200% increase) would be possible.
 
Parallel queries are now limited to 333 megabytes of RAM (the lesser of 30% of pga_aggregate_target and _smm_px_max_size), such that a DEGREE=4 parallel query would have a maximum of 83 megabytes (333 meg/4) per slave, and possibly less due to the internal algorithms that set sort-area memory increments. This throttling prevents a single parallel query from consuming all available memory, since _smm_px_max_size would otherwise default to 1.2 gigabytes with pga_aggregate_target at 4 gigabytes.
 
Be careful not to set pga_aggregate_target greater than the available memory: calculate the maximum number of users who will be sorting or hashing and multiply that by the predicted work area size to get your actual limit, otherwise ORA-4030 errors or swapping may occur.
In conclusion, overriding the built-in safeguards of pga_aggregate_target can make more efficient use of RAM resources in cases where large RAM regions are available on the database server. When used with care (and with the blessing of Oracle Technical Support) it can often make sense to override these default values to make better use of expensive RAM resources.

There is also plenty of evidence that changing these parameters has a positive effect on large, batch-oriented Oracle jobs, but you must be very careful to fully understand the limitations of the PGA parameters:

Success stories for PGA size expansion
If you have a limited number of active sessions, you may wish to override the PGA governor that allows any single task to consume only 5% of the total PGA. Laurent Schneider notes in Oracle MetaLink that overriding the PGA defaults made a large batch process run more than 8x faster:

"I set appropriate values for pga_aggregate_target and _pga_max_size...

alter system set pga_aggregate_target=6G;
alter system set "_pga_max_size"=2000000000;

...and I gave the query some hints "NOREWRITE FULL USE_HASH ORDERED". As a result, it boosted my query performance from 12 hours to 1.5 hour."

Ah, if only it were that easy, just change a setting and batch jobs run eight times faster. Laurent Schneider also notes some perils and reliability issues relating to this parameter and says "this parameter often leads to an ORA-4030, even when plenty of memory available, for some obscure reasons".
While Laurent abandoned the undocumented approach, the promise of eight-times-faster execution is very tempting. Once you get permission from Oracle Technical Support to set an undocumented parameter, they can work with you to resolve errors. While they may not address bugs, they may be able to provide alternatives and workarounds.
 
References
Metalink Note:223299.1: "If pga_aggregate_target is set in the init.ora, then SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE and CREATE_BITMAP_AREA_SIZE are ignored. . .

If pga_aggregate_target is set in init.ora, then WORKAREA_SIZE_POLICY defaults to AUTO."
 
Metalink Note: 223730.1: "1- When we set the pga_aggregate_target and WORKAREA_SIZE_POLICY to auto then the *_area_size parameter are automatically ignored and oracle will automatically use the computed value for these parameters."
 
Metalink Note: 30918.1: "Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL working areas by setting pga_aggregate_target instead. SORT_AREA_SIZE is retained for backward compatibility."

From: http://www.dba-oracle.com

--
Thanks
Xiangzhen

Nov 24, 2008

BULK COLLECT and BULK BIND in PLSQL

 
rem -----------------------------------------------------------------------
rem Filename:   bulkbind.sql
rem Purpose:    Simple program to demonstrate BULK COLLECT and BULK BIND.
rem Notes:      Bulk operations on %ROWTYPE only work from Oracle 9i Release 2 and above.
rem Date:       12-Feb-2004
rem Author:     Frank Naude, Oracle FAQ
rem -----------------------------------------------------------------------
set serveroutput on size 50000

DECLARE
  CURSOR emp_cur IS SELECT * FROM EMP;

  TYPE emp_tab_t IS TABLE OF emp%ROWTYPE INDEX BY BINARY_INTEGER;
  emp_tab emp_tab_t;            -- In-memory table

  rows NATURAL        := 10000;   -- Number of rows to process at a time
  i    BINARY_INTEGER := 0;
BEGIN
  OPEN emp_cur;
  LOOP
    -- Bulk collect data into memory table - X rows at a time
    FETCH emp_cur BULK COLLECT INTO emp_tab LIMIT rows;
    EXIT WHEN emp_tab.COUNT = 0;

    DBMS_OUTPUT.PUT_LINE( TO_CHAR(emp_tab.COUNT)|| ' rows bulk fetched.');

    FOR i IN emp_tab.FIRST .. emp_tab.LAST loop
      -- Manipulate data in the memory table...
      dbms_output.put_line('i = '||i||', EmpName='||emp_tab(i).ename);
    END LOOP;

    -- Bulk bind of data in memory table...
    FORALL i in emp_tab.FIRST..emp_tab.LAST
      INSERT /*+APPEND*/ INTO emp2 VALUES emp_tab(i);

  END LOOP;
  CLOSE emp_cur;
END;
/
 
 

Bulk Insert [forall] in PL/SQL

create table t_ (a number, b varchar2(30));

create type t_number as table of number;
/

create type t_varchar as table of varchar2(30);
/

set serveroutput on size 1000000 format wrapped

declare
  v_time number;
begin
  dbms_output.put_line('');

  v_time := dbms_utility.get_time;

  for r in (select rownum, object_name from dba_objects) loop
    insert into t_ values (r.rownum, r.object_name);
    exit when r.rownum = 10000;
  end loop;

  dbms_output.put_line('time used: ' || (dbms_utility.get_time - v_time) / 100 || ' secs');
  dbms_output.put_line('');
end;
/

select count(*) from t_;

truncate table t_;

declare
  v_time    number;
  a_varchar t_varchar:=t_varchar();
  a_number  t_number :=t_number();
begin
  dbms_output.put_line('');

  v_time := dbms_utility.get_time;

  for r in (select rownum, object_name from dba_objects) loop
    a_number.extend;
    a_varchar.extend;
   
    a_number (a_number.count)  := r.rownum;
    a_varchar(a_varchar.count) := r.object_name;


    if mod(r.rownum, 550) = 0 then
      forall i in 1 .. a_number.count
        insert into t_ values(a_number(i), a_varchar(i));

      a_number.delete;
      a_varchar.delete;

      exit when r.rownum = 10000;

    end if;

    if r.rownum = 10000 then
      forall i in 1 .. a_number.count
        insert into t_ values(a_number(i), a_varchar(i));

      exit;
    end if;
   
  end loop;

  dbms_output.put_line('time used: ' || (dbms_utility.get_time - v_time)
    / 100 || ' secs');
  dbms_output.put_line('');
end;
/

select count(*) from t_;

drop table t_;
drop type t_number;
drop type t_varchar;

Noted.
--
Thanks
Xiangzhen

OS: Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

DB: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod

SGA:

      Total System Global Area  880803840 bytes
      Fixed Size                  1222384 bytes
      Variable Size             167774480 bytes
      Database Buffers          704643072 bytes
      Redo Buffers                7163904 bytes
PGA:281M

 


-- Two-table join update (2,000,000 rows in each table; structure the same as dba_tables)
-- T01.id is indexed
SQL> select count(*) from t01;

COUNT(*)
----------
2000000

SQL> select count(*) from t02;

COUNT(*)
----------
2000000

SQL> @/tmp/3.sql
04:07:44
04:09:35

PL/SQL procedure successfully completed.

SQL> ! cat /tmp/3.sql
declare
maxrows number default 1000;
cnt number;
type type_id is table of number index by pls_integer;
ids type_id;
--TYPE employee_aat IS TABLE OF employee.employee_id%TYPE
-- INDEX BY PLS_INTEGER;
cursor cur is
select /*+ hash(t01,t02) */ t02.id
from t01,t02
where t01.id=t02.id;
begin
dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
cnt := cnt + 1;
open cur;
loop
exit when cur%notfound;
fetch cur bulk collect into ids limit maxrows;
forall i in 1 .. ids.count
update t01 set val = 'abc'
where id = ids(i);
cnt := cnt + 1;
commit;
--dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
end loop;
dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
end;
/

-- Six-table join update (2,000,000 rows in each of the six tables; structure the same as dba_tables)
-- T01.id is indexed
SQL> select count(*) from t01;

COUNT(*)
----------
2000000

SQL> select count(*) from t02;

COUNT(*)
----------
2000000

SQL> select count(*) from t03;

COUNT(*)
----------
2000000

SQL> select count(*) from t04;

COUNT(*)
----------
2000000

SQL> select count(*) from t05;

COUNT(*)
----------
2000000

SQL> select count(*) from t06;

COUNT(*)
----------
2000000

SQL> edit
Wrote file afiedt.buf

1 declare
2 maxrows number default 1000;
3 cnt number;
4 type type_id is table of number index by pls_integer;
5 ids type_id;
6 --TYPE employee_aat IS TABLE OF employee.employee_id%TYPE
7 -- INDEX BY PLS_INTEGER;
8 cursor cur is
9 select /* hash(t01,t02) */ t02.id
10 from t01,t02,t03,t04,t05,t06
11 where t01.id=t02.id
12 and t02.id=t03.id
13 and t03.id=t04.id
14 and t04.id=t05.id
15 and t05.id=t06.id
16 and rownum<=500000;
17 begin
18 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
19 cnt := cnt + 1;
20 open cur;
21 loop
22 exit when cur%notfound;
23 fetch cur bulk collect into ids limit maxrows;
24 forall i in 1 .. ids.count
25 update t01 set val = 'abc'
26 where id = ids(i);
27 cnt := cnt + 1;
28 commit;
29 -- dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
30 end loop;
31 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
32* end;
SQL> /
04:33:45
04:35:09

PL/SQL procedure successfully completed.

SQL> edit
Wrote file afiedt.buf

1 declare
2 maxrows number default 1000;
3 cnt number;
4 type type_id is table of number index by pls_integer;
5 ids type_id;
6 --TYPE employee_aat IS TABLE OF employee.employee_id%TYPE
7 -- INDEX BY PLS_INTEGER;
8 cursor cur is
9 select /* hash(t01,t02) */ t02.id
10 from t01,t02,t03,t04,t05,t06
11 where t01.id=t02.id
12 and t02.id=t03.id
13 and t03.id=t04.id
14 and t04.id=t05.id
15 and t05.id=t06.id;
16 -- and rownum<=500000;
17 begin
18 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
19 cnt := cnt + 1;
20 open cur;
21 loop
22 exit when cur%notfound;
23 fetch cur bulk collect into ids limit maxrows;
24 forall i in 1 .. ids.count
25 update t01 set val = 'abc'
26 where id = ids(i);
27 cnt := cnt + 1;
28 commit;
29 -- dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
30 end loop;
31 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
32* end;
SQL> /
04:35:38
04:38:57

PL/SQL procedure successfully completed.

-- Case: fetching two columns
SQL> edit
Wrote file afiedt.buf

1 declare
2 maxrows number default 1000;
3 cnt number;
4 -- type type_id is table of number index by pls_integer;
5 -- ids type_id;
6 --TYPE employee_aat IS TABLE OF employee.employee_id%TYPE
7 -- INDEX BY PLS_INTEGER;
8 cursor cur is
9 select /* hash(t01,t02) */ t02.id,t06.table_name
10 from t01,t02,t03,t04,t05,t06
11 where t01.id=t02.id
12 and t02.id=t03.id
13 and t03.id=t04.id
14 and t04.id=t05.id
15 and t05.id=t06.id
16 and rownum<=100000;
17 type type_id is table of t02.id%type index by pls_integer;
18 type type_name is table of t06.table_name%type index by pls_integer;
19 ids type_id;
20 names type_name;
21 begin
22 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
23 cnt := cnt + 1;
24 open cur;
25 loop
26 exit when cur%notfound;
27 fetch cur bulk collect into ids,names limit maxrows;
28 forall i in 1 .. ids.count
29 update t01 set val = names(i)
30 where id = ids(i);
31 cnt := cnt + 1;
32 commit;
33 -- dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
34 end loop;
35 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
36* end;
SQL> /
05:05:38
05:06:29

PL/SQL procedure successfully completed.

SQL> edit
Wrote file afiedt.buf

1 declare
2 maxrows number default 1000;
3 cnt number;
4 -- type type_id is table of number index by pls_integer;
5 -- ids type_id;
6 --TYPE employee_aat IS TABLE OF employee.employee_id%TYPE
7 -- INDEX BY PLS_INTEGER;
8 cursor cur is
9 select /* hash(t01,t02) */ t02.id,t06.table_name
10 from t01,t02,t03,t04,t05,t06
11 where t01.id=t02.id
12 and t02.id=t03.id
13 and t03.id=t04.id
14 and t04.id=t05.id
15 and t05.id=t06.id;
16 -- and rownum<=100000;
17 type type_id is table of t02.id%type index by pls_integer;
18 type type_name is table of t06.table_name%type index by pls_integer;
19 ids type_id;
20 names type_name;
21 begin
22 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
23 cnt := cnt + 1;
24 open cur;
25 loop
26 exit when cur%notfound;
27 fetch cur bulk collect into ids,names limit maxrows;
28 forall i in 1 .. ids.count
29 update t01 set val = names(i)
30 where id = ids(i);
31 cnt := cnt + 1;
32 commit;
33 -- dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
34 end loop;
35 dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
36* end;
SQL> /
05:07:06
05:11:05

PL/SQL procedure successfully completed.

SQL> select id,table_name,val from t01 where rownum<10;

ID TABLE_NAME VAL
---------- ------------------------------ ------------------------------
1 CON$ CON$
2 UNDO$ UNDO$
3 CDEF$ CDEF$
4 CCOL$ CCOL$
5 PROXY_ROLE_DATA$ PROXY_ROLE_DATA$
6 FILE$ FILE$
7 FET$ FET$
8 TS$ TS$
9 PROXY_DATA$ PROXY_DATA$

9 rows selected.




Nov 15, 2008

A few examples are used to handle a Oracle job


Example 1:

The following can be used to create a job using a predefined Program and Schedule:

BEGIN
DBMS_SCHEDULER.CREATE_JOB
(
 job_name        => 'BACKUP_JOB_01',
 program_name    => 'BACKUP_PROGRAM',
 schedule_name   => 'BACKUP_SCHEDULE');
END;
/

Example 2:

The following can be used to create a job using a predefined Program without a predefined Schedule:

BEGIN
DBMS_SCHEDULER.CREATE_JOB
(
 job_name        => 'BACKUP_JOB_02',
 program_name    => 'BACKUP_PROGRAM',
 start_date      => TRUNC(SYSDATE)+23/24,
 repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN' );
END;
/

Example 3:

The following can be used to create a job using a predefined Schedule without a predefined Program:

BEGIN
DBMS_SCHEDULER.CREATE_JOB
(
 job_name        => 'BACKUP_JOB_03',
 schedule_name   => 'BACKUP_SCHEDULE',
 job_type        => 'EXECUTABLE',
 job_action      => '/dba/scripts/weekly_backup.sh'
);
END;
/

Example 4:

The following can be used to create a job without a predefined Program and Schedule:

BEGIN
DBMS_SCHEDULER.CREATE_JOB
(
 job_name        => 'BACKUP_JOB_04',
 job_type        => 'EXECUTABLE',
 job_action      => '/dba/scripts/weekly_backup.sh',
 start_date      => TRUNC(SYSDATE)+23/24,
 repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN'
);
END;
/

The following is the syntax to run, stop, copy, and drop a job:

DBMS_SCHEDULER.RUN_JOB
( job_name        in varchar2 );

DBMS_SCHEDULER.STOP_JOB
( job_name        in varchar2,
  force           in boolean default false );

The copy_job procedure copies all attributes of an existing job to a new job.

DBMS_SCHEDULER.COPY_JOB
(
 old_job                       in varchar2,
 new_job                      in varchar2);

DBMS_SCHEDULER.DROP_JOB
(
 job_name                    in varchar2,
 force               in Boolean default false
);

Create Job Class

A Job Class defines a category of jobs that share common resource usage requirements.  A Job Class is associated with two attributes: the resource consumer group, which defines a set of user sessions that have common resource processing requirements; and a database service name, which defines the instance to which the job class belongs.  Each job belongs to a single job class at any given time.  By associating a Job with a Job Class, the amount of resources a Job can use during its execution can be managed.

The syntax to create a Job Class is:

DBMS_SCHEDULER.CREATE_JOB_CLASS
(
 job_class_name                       in varchar2,
 resource_consumer_group       in varchar2 default null,
 service                         in varchar2 default null,
 log_purge_policy                     in varchar2 default null,
 comments                                in varchar2 default null
);

By default, the Scheduler log table entries are not purged. The log_purge_policy defines the policy for purging the log table entries.
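As an illustration (the class, consumer group, and job names below are hypothetical), a job class could be created and an existing job assigned to it with DBMS_SCHEDULER.SET_ATTRIBUTE:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS (
    job_class_name          => 'BACKUP_JOB_CLASS',
    resource_consumer_group => 'LOW_PRIORITY_GROUP',
    comments                => 'Class for off-peak maintenance jobs');

  -- Move an existing job into the new class
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => 'BACKUP_JOB_01',
    attribute => 'job_class',
    value     => 'BACKUP_JOB_CLASS');
END;
/
```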

Data Dictionary Views

The following SQL script provides a list of data dictionary views used to monitor the Scheduler's activities:

SQL> select table_name, comments
   2 from   dict
   3 where table_name like 'DBA%SCHEDULER%'
   4 order by table_name;
 
 
These examples were published by Burleson at www.dba-oracle.com and are reproduced here for reference.
 

Nov 10, 2008

How to connect Access from Excel via ADO

'===========================================================================
'The macro example assumes that your VBA project has added a reference to the ADO object library.
'You can do this from within the VBE by selecting the menu Tools, References and selecting Microsoft
'ActiveX Data Objects x.x Object Library.
'Use ADO if you can choose between ADO and DAO for data import or export.
'===========================================================================
Sub ADOFromExcelToAccess()
    ' exports data from the active worksheet to a table in an Access database
    ' this procedure must be edited before use
    Dim cn As ADODB.Connection, rs As ADODB.Recordset, r As Long
   
    ' connect to the Access database
    Set cn = New ADODB.Connection
    cn.Open "Provider=Microsoft.Jet.OLEDB.4.0; " & _
    "Data Source=C:\FolderName\DataBaseName.mdb;"
   
    ' open a recordset
    Set rs = New ADODB.Recordset
    rs.Open "TableName", cn, adOpenKeyset, adLockOptimistic, adCmdTable
   
    ' all records in a table
    r = 3 ' the start row in the worksheet
    Do While Len(Range("A" & r).Formula) > 0
    ' repeat until first empty cell in column A
        With rs
        .AddNew ' create a new record
        ' add values to each field in the record
        .Fields("FieldName1") = Range("A" & r).Value
        .Fields("FieldName2") = Range("B" & r).Value
        .Fields("FieldNameN") = Range("C" & r).Value
        ' add more fields if necessary...
        .Update ' stores the new record
        End With
    r = r + 1 ' next row
    Loop
   
    rs.Close
    Set rs = Nothing
    cn.Close
    Set cn = Nothing
End Sub


Nov 9, 2008

About sys_Context Function

In Oracle/PLSQL, the sys_context function can be used to retrieve information about the Oracle environment.
The syntax for the sys_context function is:

sys_context( namespace, parameter, [ length ] )

namespace is an Oracle namespace that has already been created. If the 'USERENV' namespace is used,
attributes describing the current Oracle session can be returned.
parameter is a valid attribute that has been set using the DBMS_SESSION.set_context procedure.
length is optional. It is the length of the return value in bytes. If this parameter is omitted or if
an invalid entry is provided, the sys_context function will default to 256 bytes.

The valid parameters for the namespace called 'USERENV' are as follows:

Parameter             | Explanation                                                                          | Return Length
AUDITED_CURSORID      | Returns the cursor ID of the SQL that triggered the audit                            | N/A
AUTHENTICATION_DATA   | Authentication data                                                                  | 256
AUTHENTICATION_TYPE   | Describes how the user was authenticated: Database, OS, Network, or Proxy            | 30
BG_JOB_ID             | If the session was established by an Oracle background process, returns the Job ID; otherwise NULL | 30
CLIENT_IDENTIFIER     | Returns the client identifier (global context)                                       | 64
CLIENT_INFO           | User session information                                                             | 64
CURRENT_SCHEMA        | Returns the default schema used in the current session                               | 30
CURRENT_SCHEMAID      | Returns the identifier of the default schema used in the current session             | 30
CURRENT_SQL           | Returns the SQL that triggered the audit event                                       | 64
CURRENT_USER          | Name of the current user                                                             | 30
CURRENT_USERID        | Userid of the current user                                                           | 30
DB_DOMAIN             | Domain of the database from the DB_DOMAIN initialization parameter                   | 256
DB_NAME               | Name of the database from the DB_NAME initialization parameter                       | 30
ENTRYID               | Available auditing entry identifier                                                  | 30
EXTERNAL_NAME         | External name of the database user                                                   | 256
FG_JOB_ID             | If the session was established by a client foreground process, returns the Job ID; otherwise NULL | 30
GLOBAL_CONTEXT_MEMORY | The number used in the System Global Area by the globally accessed context           | N/A
HOST                  | Name of the host machine from which the client has connected                         | 54
INSTANCE              | The identifier number of the current instance                                        | 30
IP_ADDRESS            | IP address of the machine from which the client has connected                        | 30
ISDBA                 | Returns TRUE if the user has DBA privileges; otherwise FALSE                         | 30
LANG                  | The ISO abbreviation for the language                                                | 62
LANGUAGE              | The language, territory, and character set of the session, in the format language_territory.characterset | 52
NETWORK_PROTOCOL      | Network protocol used                                                                | 256
NLS_CALENDAR          | The calendar of the current session                                                  | 62
NLS_CURRENCY          | The currency of the current session                                                  | 62
NLS_DATE_FORMAT       | The date format for the current session                                              | 62
NLS_DATE_LANGUAGE     | The language used for dates                                                          | 62
NLS_SORT              | BINARY or the linguistic sort basis                                                  | 62
NLS_TERRITORY         | The territory of the current session                                                 | 62
OS_USER               | The OS username for the user logged in                                               | 30
PROXY_USER            | The name of the user who opened the current session on behalf of SESSION_USER        | 30
PROXY_USERID          | The identifier of the user who opened the current session on behalf of SESSION_USER  | 30
SESSION_USER          | The database user name of the user logged in                                         | 30
SESSION_USERID        | The database identifier of the user logged in                                        | 30
SESSIONID             | The identifier of the auditing session                                               | 30
TERMINAL              | The OS identifier of the current session                                             | 10

Applies To:

  • Oracle 8i, Oracle 9i, Oracle 10g, Oracle 11g
For example:
sys_context('USERENV', 'NLS_DATE_FORMAT') would return 'RR-MM-DD'
sys_context('USERENV', 'NLS_SORT') would return 'BINARY'
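Since DBMS_SESSION.set_context only works from within the trusted package named in CREATE CONTEXT, a minimal sketch of a user-defined namespace looks like this (the context and package names are illustrative; the CREATE ANY CONTEXT privilege is required):

```sql
-- Bind a namespace to a package, set an attribute, then read it back
CREATE CONTEXT app_ctx USING app_ctx_pkg;

CREATE OR REPLACE PACKAGE app_ctx_pkg AS
  PROCEDURE set_region (p_region IN VARCHAR2);
END;
/
CREATE OR REPLACE PACKAGE BODY app_ctx_pkg AS
  PROCEDURE set_region (p_region IN VARCHAR2) IS
  BEGIN
    -- Only this package may set attributes in the app_ctx namespace
    DBMS_SESSION.SET_CONTEXT('app_ctx', 'region', p_region);
  END;
END;
/

EXEC app_ctx_pkg.set_region('EMEA')
SELECT sys_context('app_ctx', 'region') FROM dual;
```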

How to set the NLS_LANG in Unix Environments

Subject: The correct NLS_LANG setting in Unix Environments

In this Document
Goal
Solution
References
--------------------------------------------------------------------------------
Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 10.2.0.0
Information in this document applies to any platform.
Goal
How to set the NLS_LANG correctly in Unix environments:

-1- Check which locale is set and correct this if needed.
-2- Check that the chosen locale is indeed installed and install it if needed.
-3- Check that your telnet/ssh software is properly set up.
-4- Set the NLS_LANG and test.
-5- What to do if this is not working?
-6- More in-depth debugging.

Solution

-1- Check which locale is set and correct this if needed.
---------------------------------------------------------
Please note that the output of this command is not exactly the same on all Unix environments.
Contact your Unix vendor if you have questions.
The example used here is to configure your unix environment so that you can use Unicode (UTF8) with sqlplus on your unix box.
To see your current setup, use the "locale" command like this:

$ locale

example of output:

LANG=fr_FR
LC_CTYPE="fr_FR.iso885915@euro"
LC_COLLATE="fr_FR.iso885915@euro"
LC_MONETARY="fr_FR.iso885915@euro"
LC_NUMERIC="fr_FR.iso885915@euro"
LC_TIME="fr_FR.iso885915@euro"
LC_MESSAGES="fr_FR.iso885915@euro"
LC_ALL=fr_FR.iso885915@euro

Most Unix versions have this as default:
$ locale

LANG=
LC_CTYPE="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_MESSAGES="C"
LC_ALL=

"C" means US7ASCII , this implies that you can only display a-z, A-Z and 0-9 and nothing else.

We recommend using UTF-8 when possible; this should look like:

$ locale

LANG=en_US
LC_CTYPE="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_ALL=en_US.UTF-8

When you have chosen a value, for example "en_US.UTF-8" on Linux, you can set it like this:

$ export LC_ALL=en_US.UTF-8
or
% setenv LC_ALL en_US.UTF-8

However, you should set the needed variables in the user ".profile" file, then log out and in again and check that the "locale" command returns the wanted output.
If you have questions on how to configure a user environment on your specific Unix/Linux flavor please consult your OS vendor.

-2- Check that the chosen locale is indeed installed and install it if needed.
-------------------------------------------------------------------------------
Please *do* check that the defined locale is really installed.
To see all installed locales issue:
$ locale -a
sample output:
$ locale -a
POSIX
common
en_US.UTF-8
C
iso_8859_1
iso_8859_15
en_CA
en_CA.ISO8859-1
en_US
en_US.ISO8859-1
en_US.ISO8859-15
en_US.ISO8859-15@euro
fr_CA
fr_CA.ISO8859-1
th
th_TH
th_TH.TIS620
ja

This lists all installed locales for the unix box. For example, "fr_FR.iso885915@euro" is missing from the list, so if you want to use it you need to install it first.
If you set the user environment to a locale that is not installed you will not get an error, but it will not work.
Please also note that you need to have the exact locale installed; if you have "fr_FR.UTF8" or "UTF-8" installed but want to use "en_US.UTF8", you need to install "en_US.UTF8".
In the example you see that "en_US.UTF-8" is listed, so we can use it on this server.

-3- Check that your telnet/ssh software is properly set up.
--------------------------------------------------------
You will *also* need to check that your TELNET/SSH software is correctly configured.
Your telnet software is responsible for converting the Unix locale to the client's environment (most likely a Windows system).
We suggest that you try first with the free PUTTY client.
Putty download site - http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
When using "en_US.UTF-8" on the Unix side, as in our example, change the following settings in PuTTY:
Open the configuration window, navigate to "Window" and then "Translation", and set "Received data assumed to be in which character set" to "UTF-8".
Then navigate to "Window" and then "Appearance", and under "Font used in the terminal window" choose a font that supports the languages you want to use/see.
For non-Asian languages you can use the default "Courier New".
On Windows you can use the "Character Map" tool to see which characters are known in a font.
Putty UTF8 how to - http://www.laug.org/2004/11/putty-utf8-howto.html
If it works with Putty but not with your telnet/ssh package then please consult the vendor of your telnet/ssh software.
If you do not use telnet but a "real" unix display please see Note 265090.1 - How to check Unix terminal Environments for the capability to display extended characters.

-4- Set the NLS_LANG and Test.
------------------------------
Once you have 1) configured the locale correctly, 2) checked that the locale you use is really installed, and 3) configured your telnet/ssh client, you can set NLS_LANG to match the locale (LC_CTYPE) values.
In our "en_US.UTF-8" example this means setting NLS_LANG=AMERICAN_AMERICA.UTF8 (note the difference in notation between UTF-8 (Unix) and UTF8 (Oracle)).
So, log in with your Unix user and then:
a) check with "locale" that it's correctly set (assuming here en_US.UTF-8)
b) set the NLS_LANG to match the locale settings
$ export NLS_LANG=AMERICAN_AMERICA.UTF8
or
% setenv NLS_LANG AMERICAN_AMERICA.UTF8
c) connect with sqlplus to your database and select some data.
For example, "select UNISTR('\20AC') from dual;" will give the euro symbol when selecting from a 9i or later database whose character set can handle the euro (UTF8, WE8MSWIN1252, ...) in a correctly configured UTF-8 or ISO8859-15 Unix environment.
If this works, then also set NLS_LANG in the profile of your user.
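
Deriving the character-set part of NLS_LANG from the Unix locale can be sketched as below. The helper function is hypothetical and only lists a few mappings I'm confident of; the en_US.UTF-8 -> UTF8 pair is the one from the example above:

```shell
# Map a Unix locale name to the Oracle character-set part of NLS_LANG.
# Note the notation difference: "UTF-8" (Unix) vs "UTF8" (Oracle).
nls_charset_for() {
    cs=${1##*.}        # "en_US.UTF-8"     -> "UTF-8"
    cs=${cs%%@*}       # "ISO8859-15@euro" -> "ISO8859-15"
    case "$cs" in
        UTF-8|UTF8|utf8)      echo "UTF8" ;;
        ISO8859-15|iso885915) echo "WE8ISO8859P15" ;;
        ISO8859-1|iso88591)   echo "WE8ISO8859P1" ;;
        *) echo "no mapping for codeset '$cs'" >&2; return 1 ;;
    esac
}

NLS_LANG="AMERICAN_AMERICA.$(nls_charset_for en_US.UTF-8)"
export NLS_LANG
echo "$NLS_LANG"        # AMERICAN_AMERICA.UTF8
```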

-5- What to do if this is not working?
--------------------------------------
If you do not see the expected characters then please double check your settings.
However it's also possible that you have wrong data in your database.
The easy way to check:
Use a Windows client: download and install SQL Developer (http://www.oracle.com/technology/products/database/project_raptor/), connect to your database and see if your data is correctly displayed in that tool.
If it is visible in SQL Developer then the data in the database is correct; if not, then you have wrong data in your database.

The harder way:
If, for example, "select ename from scott.emp where empno='7369';" is a select that returns one row of your data, then do "select dump(ename,1016),ename from scott.emp where empno='7369';".
You can then look up whether the codes match the characters you expect for your database character set
( select value from NLS_DATABASE_PARAMETERS where parameter='NLS_CHARACTERSET'; )
in Note 282336.1 - Charts of most current mono-byte character sets, or, if you have an (AL32)UTF8 database, in Note 69518.1 -
Determining the codepoint for UTF8 characters.

If you can't figure out what's wrong, log a TAR, refer to this note and provide:
* the info asked for in Note 226692.1 - Finding out your NLS Setup;
* a spool (!not copy/paste!) of the output of your version of the
"select dump(ename,1016),ename from scott.emp where empno='7369';" select.

It might also be good to follow the Database Character Set Healthcheck;
same here, if you are stuck, log a TAR and provide all gathered information.
Note to support: if a customer logs a TAR about this, please *do* request an ARE review.

-6- More in depth debugging.
----------------------------
Steps 1-4 should be enough for 99% of the cases; the rest of this note is more in-depth debugging.
On some platforms, it can be useful to use the following syntax to get more details about the codepage really used:

$ locale LC_CTYPE | head
example of output in a HP-UX env:
""
""
"iso885915"
""
example of output in a Linux env:
upper;lower;alpha;digit;xdigit;space;print;graph;blank;cntrl;punct;alnum;combining;combining_level3
toupper;tolower;totitle
16
1
ISO-8859-15
70
84
1
0
1

$ locale LC_CTYPE | head
upper;lower;alpha;digit;xdigit;space;print;graph;blank;cntrl;punct;alnum;combining;combining_level3
toupper;tolower;totitle
16
6
UTF-8
70
84
1
0
1

On Solaris, AIX, TRU64, this syntax doesn't give interesting complementary information.
To find more details about these settings:
on Solaris, have a look in /usr/lib/locale.
on AIX, have a look in /usr/lib/nls/README
on TRU64, have a look in /usr/lib/nls
on HP-UX, have a look in /usr/lib/nls/config
on Linux, have a look in /usr/share/locale/locale.alias

How to check the codepoints managed by the O.S.:
To know which code point is generated for a character in a Unix environment,
you can use the "od" command like this (example with an ISO-8859-1 locale):

$ od -xc
é^D^D
0000000 00e9
351 \0
0000001

As you can see, the hexadecimal code point e9 corresponds to "é" (lowercase e acute);
351 is the corresponding octal value (octal is the native mode of the od command).
You can also check the character corresponding to a code point using the "echo" command like this:
for Solaris, AIX, HP-UX, TRU64:
$echo '\0351'
é

for Linux:
$echo -e '\0351'
é

As you can see, echo uses the octal value, so you need to convert the value you want to check to octal.
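
Rather than converting by hand, printf can translate between the hex value shown by od and the octal value that echo expects:

```shell
# Convert between the hex code point (as shown by "od -x") and the
# octal value that "echo" wants, using printf instead of a table.
hex=e9
oct=$(printf '%o' "0x$hex")      # hex 0xe9 -> octal 351
echo "octal for 0x$hex is $oct"
back=$(printf '%x' "0$oct")      # a leading 0 marks octal input
echo "hex for 0$oct is $back"
```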

References:
Note 158577.1 - NLS_LANG Explained (How does Client-Server Character Conversion Work?)
Note 179133.1 - The correct NLS_LANG in a Windows Environment
Note 265090.1 - How to check Unix terminal Environments for the capability to display extended characters.
http://www.xfree86.org/4.2.0/xfontsel.1.html - to see which fonts are known in the xwin env
http://www.xfree86.org/4.2.0/xlsfonts.1.html - to see what fonts are installed on the system
http://publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/aixprggd/nlsgdrf/locale_env.htm - IBM locale settings reference
http://docs.hp.com/en/5187-3605/ch02.html - HP Encoding Characters Reference
http://docs.hp.com/en/UXL10N-90302/UXL10N-90302.pdf - HP Language configuration reference
http://developers.sun.com/dev/gadc/faq/locale.html - Sun Solaris locale reference
http://www.opengroup.org/onlinepubs/007908799/xbd/locale.html - UNIX Open Group locale specifications reference
Note 119119.1 - AL32UTF8 / UTF8 (unicode) Database Character Set Implications



Nov 6, 2008

12 ways to read a file

 
#!/usr/bin/ksh
#
# SCRIPT: 12_ways_to_parse.ksh
#
#
# REV: 1.2.A
#
# PURPOSE:  This script shows the different ways of reading
#       a file line by line.  Again there is not just one way
#       to read a file line by line and some are faster than
#       others and some are more intuitive than others.
#
# REV LIST:
#
#       03/15/2002 - Randy Michael
#       Set each of the while loops up as functions and the timing
#       of each function to see which one is the fastest.
#
#######################################################################
#
#       NOTE: To output the timing to a file use the following syntax:
#
#          12_ways_to_parse.ksh file_to_process  > output_file_name 2>&1
#
#       The actual timing data is sent to standard error, file
#       descriptor (2), and the function name header is sent
#       to standard output, file descriptor (1).
#
#######################################################################
#
# set -n  # Uncomment to check command syntax without any execution
# set -x  # Uncomment to debug this script
#

FILENAME="$1"
TIMEFILE="/tmp/loopfile.out"
>$TIMEFILE
THIS_SCRIPT=$(basename $0)

######################################
function usage
{
echo "\nUSAGE: $THIS_SCRIPT  file_to_process\n"
echo "OR - To send the output to a file use: "
echo "\n$THIS_SCRIPT  file_to_process  > output_file_name 2>&1 \n"
exit 1
}
######################################
function while_read_LINE
{
cat $FILENAME | while read LINE
do
        echo "$LINE"
        :
done
}
######################################
function while_read_LINE_bottom
{
while read LINE
do
        echo "$LINE"
        :

done < $FILENAME
}
######################################
function while_line_LINE_bottom
{
while line LINE
do
        echo $LINE
        :
done < $FILENAME
}
######################################
function cat_while_LINE_line 
{
cat $FILENAME | while LINE=`line`
do
        echo "$LINE"
        :
done
}
######################################
function while_line_LINE
{
cat $FILENAME | while line LINE
do
        echo "$LINE"
        :
done
}
######################################
function while_LINE_line_bottom
{
while LINE=`line`
do
        echo "$LINE"
        :

done < $FILENAME
}
######################################
function while_LINE_line_cmdsub2
{
cat $FILENAME | while LINE=$(line)
do
        echo "$LINE"
        :
done
}
######################################
function while_LINE_line_bottom_cmdsub2
{
while LINE=$(line)
do
        echo "$LINE"
        :

done < $FILENAME
}
######################################
function while_read_LINE_FD
{
exec 3<&0
exec 0< $FILENAME
while read LINE
do
        echo "$LINE"
        :
done
exec 0<&3
}
######################################
function while_LINE_line_FD
{
exec 3<&0
exec 0< $FILENAME
while LINE=`line`
do
        echo "$LINE"
        :
done
exec 0<&3
}
######################################
function while_LINE_line_cmdsub2_FD
{
exec 3<&0
exec 0< $FILENAME
while LINE=$(line)
do
        print "$LINE"
        :
done
exec 0<&3
}
######################################
function while_line_LINE_FD
{
exec 3<&0
exec 0< $FILENAME

while line LINE
do
        echo "$LINE"
        :
done

exec 0<&3
}
######################################
########### START OF MAIN ############
######################################

# Test the Input

# Looking for exactly one parameter
(( $# == 1 )) || usage

# Does the file exist as a regular file?
[[ -f $1 ]] || usage

echo "\nStarting File Processing of each Method\n"

echo "Method 1:"
echo "\nfunction while_read_LINE\n" >> $TIMEFILE
echo "function while_read_LINE"
time while_read_LINE >> $TIMEFILE
echo "\nMethod 2:"
echo "\nfunction while_read_LINE_bottom\n" >> $TIMEFILE
echo "function while_read_LINE_bottom"
time while_read_LINE_bottom >> $TIMEFILE
echo "\nMethod 3:"
echo "\nfunction while_line_LINE_bottom\n" >> $TIMEFILE
echo "function while_line_LINE_bottom"
time while_line_LINE_bottom >> $TIMEFILE
echo "\nMethod 4:"
echo "\nfunction cat_while_LINE_line\n" >> $TIMEFILE
echo "function cat_while_LINE_line"
time cat_while_LINE_line >> $TIMEFILE
echo "\nMethod 5:"
echo "\nfunction while_line_LINE\n" >> $TIMEFILE
echo "function while_line_LINE"
time while_line_LINE >> $TIMEFILE
echo "\nMethod 6:"
echo "\nfunction while_LINE_line_bottom\n" >> $TIMEFILE
echo "function while_LINE_line_bottom"
time while_LINE_line_bottom >> $TIMEFILE
echo "\nMethod 7:"
echo "\nfunction while_LINE_line_cmdsub2\n" >> $TIMEFILE
echo "function while_LINE_line_cmdsub2"
time while_LINE_line_cmdsub2 >> $TIMEFILE
echo "\nMethod 8:"
echo "\nfunction while_LINE_line_bottom_cmdsub2\n" >> $TIMEFILE
echo "function while_LINE_line_bottom_cmdsub2"
time while_LINE_line_bottom_cmdsub2 >> $TIMEFILE
echo "\nMethod 9:"
echo "\nfunction while_read_LINE_FD\n" >> $TIMEFILE
echo "function while_read_LINE_FD"
time while_read_LINE_FD >> $TIMEFILE
echo "\nMethod 10:"
echo "\nfunction while_LINE_line_FD\n" >> $TIMEFILE
echo "function while_LINE_line_FD"
time while_LINE_line_FD >> $TIMEFILE
echo "\nMethod 11:"
echo "\nfunction while_LINE_line_cmdsub2_FD\n" >> $TIMEFILE
echo "function while_LINE_line_cmdsub2_FD"
time while_LINE_line_cmdsub2_FD >> $TIMEFILE
echo "\nMethod 12:"
echo "\nfunction while_line_LINE_FD\n" >> $TIMEFILE
echo "function while_line_LINE_FD"
time while_line_LINE_FD >> $TIMEFILE
 


Nov 3, 2008

Doodles


via jessblog by Jess Lee on 11/2/08

It's been a while since I posted any of my artwork here. Here are some of my latest drawings.

A pretty boy. He turned out rather angular.

Kate Bosworth. The proportions are a bit off but I managed to finish this one very quickly.

She was supposed to have freckles, but I chickened out at the last minute and decided not to draw them for fear of messing up her skin.

This was an exercise in drawing hair. I HATE drawing hair.

The assignment was to draw a mechanical bird. I attempted to do a steampunk hummingbird. I'm not sure I'll ever finish this piece, because I prefer drawing people.

Halloween ghouls. My second (ever!) attempt at color pencil. I chose a limited color palette so I could focus on blending technique.



Several sql using to check tablespace usage

--I will add my comments when I'm free:)

select t.tablespace_name, round(t.free) free, f.extended_size, f.max_size,round((extended_size-free)*100/max_size,1) "pct_used%"
from (SELECT tablespace_name, SUM(bytes / 1024 / 1024) free
FROM DBA_FREE_SPACE
group by tablespace_name) t
join (select tablespace_name,
--file_name,
sum(bytes / (1024 * 1024)) as extended_size,
sum(greatest(maxbytes, bytes) / 1024 / 1024) as max_size
from dba_data_files
group by tablespace_name) f on t.tablespace_name =
f.tablespace_name
where (extended_size-free)/max_size >0
order by 1,5;

select * from dba_free_space

select tablespace_name,
file_name,
bytes / (1024 * 1024) as extended_size,
greatest(maxbytes, bytes) / 1024 / 1024 as max_size
from dba_data_files
order by 1

select * from dba_tablespaces

select q2.other_tname ts_name, pieces, ts_size ts_size,
nvl(largest_chunk,0) largestpc, nvl(total_free,0) totalfree,
nvl(round((total_free/ts_size)*100,2),0) pct_free,
ts_size-total_free whatsused,
nvl(100-round((total_free/ts_size)*100,2),100) pct_used,
decode(nvl(100-round((total_free/ts_size)*100,0),100),
85,'+',86,'+',87,'+',88,'+',89,'++',90,'++',91,'++',
92,'++',93,'++',94,'+++',95,'+++',96,'+++',97,'++++',
98,'+++++',99,'+++++',100,'+++++','') problem
from (select dfs.tablespace_name,count(*) pieces,
round(max(dfs.bytes)/1024/1024,2) largest_chunk,
round(sum(dfs.bytes)/1024/1024,2) total_free
from dba_free_space dfs group by tablespace_name) q1,
(select tablespace_name other_tname,
round(sum(ddf2.bytes)/1024/1024,2) ts_size
from dba_data_files ddf2 group by tablespace_name) q2
where q2.other_tname = q1.tablespace_name(+)
order by nvl(100-round((total_free/ts_size)*100,0),100) desc;


A wonderful PL/SQL script used to generate index creation scripts.

--we want to create the indexes, using the same index names and table names, in another schema
--the following procedure is used to generate the create index scripts

declare
val varchar2(10);
sql_val varchar2(100);
begin
for i in (select distinct index_name
from dba_ind_columns where table_owner='USR_LT' and table_name in
(select table_name from dba_tables where owner='LTYZDZ')
or 'ACCT_'||table_name in (select table_name from dba_tables where owner='LTYZDZ')) loop
Select 'create index '||index_name||' on '||table_name||'('||Substr(Sys_Connect_By_Path(column_name, ','), 2)||')' into sql_val
From (Select Rownum rid, Rownum + 1 Next_Rid, column_name ,table_name,index_name From dba_ind_columns where index_name=i.index_name)
Where rid = (select count(*) from dba_ind_columns where index_name=i.index_name)
Start With Rid = 1
Connect By Prior Next_Rid = Rid;
dbms_output.put_line(sql_val);
end loop;
end;

--========output result===============================
create index IDX_ASSURE_NO on INFO_ASSURE(ASSURE_NO)
create index IDX_CDMA_SPEC_001 on CDMA_SPEC(USER_NO,DESTROY_DATE)
create index IDX_CHARGE_DEPOSIT_001 on CHARGE_DEPOSIT(USER_NO)
create index IDX_CODE_GROUP_001 on CODE_GROUP(GROUP_CODE)
create index IDX_CUSTOMER_NO on INFO_CUSTOMER(CUSTOMER_NO)
create index IDX_DEVICE_NUMBER on INFO_IVPN(DEVICE_NUMBER)
create index IDX_DIY_LIFECYCLE_001 on DIY_LIFECYCLE(USER_NO,DINNER_EXP_DATE)
create index IDX_INCDMA_SPEC on INCDMA_SPEC(USER_NO,DESTROY_DATE)
create index IDX_INCDMA_SPEC_002 on INCDMA_SPEC(USER_NO)
create index IDX_INFO_ASSURE_001 on INFO_ASSURE(LOCAL_NET)
create index IDX_INFO_BANK_001 on INFO_BANK(PAY_NO)
create index IDX_INFO_CDMA_001 on INFO_CDMA(LOCAL_NET)
create index IDX_INFO_CDMA_002 on INFO_CDMA(DEVICE_NUMBER)
create index IDX_INFO_CDMA_TEMP_001 on INFO_CDMA_TEMP(LOCAL_NET)
create index IDX_INFO_CUSTOMER_002 on INFO_CUSTOMER(LOCAL_NET)
create index IDX_INFO_FF_001 on INFO_FF(USER_NO,DESTROY_DATE)
create index IDX_INFO_INCDMA_001 on INFO_INCDMA(LOCAL_NET)
create index IDX_INFO_INCDMA_002 on INFO_INCDMA(USER_NO)
create index IDX_INFO_NAI_001 on INFO_NAI(USER_NO,FLAG)
create index IDX_INFO_PAY_001 on INFO_PAY(LOCAL_NET)
create index IDX_PAY_NO on INFO_PAY(PAY_NO)
create index IDX_PAY_NO_INVALID on PAY_RELATION(PAY_NO,INVALID_DATE)
create index IDX_PAY_RELATION_01 on PAY_RELATION(USER_NO)
create index IDX_PAY_RELATION_USER_NO on PAY_RELATION(USER_NO)
create index IDX_RES_INUIM_NUMBER_001 on RES_INUIM_NUMBER(UIM_NUMBER)
create index IDX_UIM_NUMBER on RES_UIM_NUMBER(UIM_NUMBER)
create index IDX_USER_NO on DINNER_LIFECYCLE(USER_NO)
create index IDX_USER_NO_001 on INFO_IVPN(USER_NO)
create index IDX_USER_NO_INVALID on PAY_RELATION(USER_NO,INVALID_DATE)
create index IDX_USER_NO_STATUS on INFO_CDMA(USER_NO,STATUS_CODE)
create index INDEX_PAY_NO on PAY_RELATION(PAY_NO)
create index INDEX_USER_DINNER on DINNER_USER(USER_DINNER)
create index INDEX_USER_NO on INFO_CDMA(USER_NO)

/*
The author of this script is lusheng.
I gave this task to him and he created it; it looks like it works well.
Wonderful. */



2 Funny Maps from Elena Lee


From: Elena Lee
meirong.li@hp.com




Nov 2, 2008

How to remove a VG and using its space to extend another VG?

**************************************************
test:/tmp>df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sys_vg-root_lv
48G 26G 20G 58% /
/dev/sda1 99M 13M 82M 14% /boot
none 2.0G 0 2.0G 0% /dev/shm

**************************************************
test:/tmp>pvscan
PV /dev/sda3 VG ifmx_vg lvm2 [222.93 GB / 119.93 GB free]
PV /dev/sda2 VG sys_vg lvm2 [49.97 GB / 32.00 MB free]
Total: 2 [272.90 GB] / in use: 2 [272.90 GB] / in no VG: 0 [0 ]

**************************************************
If I want to remove ifmx_vg and then use the freed space to extend sys_vg, what should I do?

**************************************************
The steps follow:
1. lvremove /dev/ifmx_vg/plogdbs --remove the LV
2. vgchange -a n ifmx_vg --deactivate the VG
3. vgremove ifmx_vg --remove the VG
4. vgextend sys_vg /dev/sda3 --extend sys_vg
5. lvextend -L +20000M /dev/sys_vg/root_lv --extend root_lv
6. ext2online /dev/sys_vg/root_lv --resize the filesystem

Is step 6 (ext2online /dev/sys_vg/root_lv) mandatory? I'm not sure.
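
For what it's worth, the filesystem resize step is indeed required: lvextend only grows the logical volume, and df will not show the new space until the filesystem itself is resized (ext2online on older Red Hat kernels, resize2fs elsewhere). The sequence can also be wrapped in a dry-run sketch that prints the commands before anything destructive is executed (names taken from the example above; the wrapper itself is only a sketch):

```shell
# Dry-run sketch of the VG migration: prints the command sequence
# instead of executing it. Clear DRYRUN (DRYRUN=) to really run it,
# and only after a backup, with the old VG's filesystems unmounted.
DRYRUN=echo
OLD_VG=ifmx_vg OLD_LV=plogdbs
NEW_VG=sys_vg  NEW_LV=root_lv
PV=/dev/sda3
GROW_MB=20000

$DRYRUN lvremove "/dev/$OLD_VG/$OLD_LV"      # 1. remove the LV
$DRYRUN vgchange -a n "$OLD_VG"              # 2. deactivate the VG
$DRYRUN vgremove "$OLD_VG"                   # 3. remove the VG
$DRYRUN vgextend "$NEW_VG" "$PV"             # 4. hand its PV to the other VG
$DRYRUN lvextend -L "+${GROW_MB}M" "/dev/$NEW_VG/$NEW_LV"   # 5. grow the LV
$DRYRUN ext2online "/dev/$NEW_VG/$NEW_LV"    # 6. grow the filesystem online
```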




Continue talk about ORA-01110 error

From Oracle Error Tips by Burleson Consulting (S. Karam)


The Oracle docs note this on the ora-01110 error:

ORA-01110: data file string: "string"
Cause: Reporting file name for details of another error
Action: See associated error message
ORA-01110 is thrown in conjunction with other errors.
Resolving ORA-01110 consists of examining the error messages reported along with it.
--------------------------------------------------------------------------------
Don Burleson offers great advice on retrieving the associated errors on
BC Praetoriate Oracle Support, in an excerpt from the book
"Oracle Internals Monitoring & Tuning Scripts":
Statement processed.
ALTER DATABASE OPEN resetlogs
*
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u03/oradata/tstc/dbsyst01.dbf'
Or:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 48 needs more recovery to be consistent
ORA-01110: data file 48: '/vol06/oradata/testdb/ard01.dbf'

If all available archive logs and online redo logs are applied, and the error is not corrected, then the use of the _allow_resetlogs_corruption parameter should be considered. Make sure a good backup of the database in a closed state (all files) is taken before attempting recovery using this parameter.

It cannot be emphasized enough that the database will no longer be supported by Oracle until it is rebuilt after using _allow_resetlogs_corruption for recovery.
Another helpful link to resolving ORA-01110 and ORA-01116 is on the IBM website:

Problem

The ITM for Oracle agent may fail to retrieve data, and the TEP workspaces show up empty. The log file shows errors ORA-01116 and ORA-01110.

Solution

1. Problem:

The Oracle workspaces do not report any data from the Oracle agent being monitored.

2. Cause:

The log files called *_col.out may show ORA-01116 and ORA-01110 errors when retrieving info for some cursors, for example:

=============================================
ORU0007T (161958) Stmt = SELECT /*+RULE*/ COUNT(*) EXTENTS FROM
SYS.DBA_EXTENTS
ORU0008W (161958) ORACLE Message ORA-01116: error in opening database
file 26
ORA-01110: data file 26: for OPEN
=============================================
3. Solutions:

It could be due to an authorization issue, or due to the fact that the data file does not exist anymore.

1) check the data file corresponding to ID 26 in the sys.dba_data_files view
2) check that the user specified in korgrant.sql or korgrantcustom.sql can access that directory
--> if it cannot be accessed, make sure that the user has read access to it
3) check that the file corresponding to that ID still exists in the reported location
--> if not, you need to contact the Oracle DBA to re-establish consistency across the info stored in the Oracle system tables.


A Case of ORA-01110 error

1. Copy from AA1 to AB1
(cloned by copying a restore from AA1)
2. Create controlfile

STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "AB1" RESETLOGS ARCHIVELOG
...
alter database open resetlogs;

Then I got the following message after the db copy:
ORA-01190: controlfile or data file 75 is from before the last RESETLOGS
ORA-01110: data file 75: '/oracle/AB1/data2/data15/data15'

I then restored file 75 ('/oracle/AB1/data2/data15/data15')
--file 75 from the last cold backup of the AA1 database
and tried to create a new controlfile,
but got the following message:
ORA-01503: CREATE CONTROLFILE failed
ORA-01161: database name AA1 in file header does not match given name of AB1
ORA-01110: data file 75: '/oracle/AB1/data2/data15/data15'

Can anybody advise me how I can solve this problem?

Marcio Paiva's advice on this issue:

Follow these steps:
1. Take a physical backup of your primary database
2. Take a backup controlfile trace from the primary
3. Restore only the datafiles and the planned backup controlfile to the secondary
4. Recreate the controlfile, changing your database name and file locations
5. Issue recover automatic database until cancel using backup controlfile
6. alter database open resetlogs
7. Add tempfiles

About Oracle's NLS_DATE_FORMAT

NLS_DATE_FORMAT is used if a date format mask is not specified in application code. The effective NLS_DATE_FORMAT is determined by the following (in order of precedence):

1. Session NLS_DATE_FORMAT (via ALTER SESSION command)
2. Client side NLS_DATE_FORMAT (from client environment variables/registry settings)
3. Instance NLS_DATE_FORMAT (from init.ora file)
4. Database NLS_DATE_FORMAT

Session NLS_DATE_FORMAT is set to the client-side NLS_DATE_FORMAT (explicit or implicit) ONLY if NLS_LANG is set. In other words, if NLS_LANG is set, then the session NLS_DATE_FORMAT will be taken from the client. If the client NLS_LANG is set and the client NLS_DATE_FORMAT is not, the session NLS_DATE_FORMAT will default to DD-MON-YY (and that was exactly what you experienced). If NLS_LANG is not specified on the client side, NLS_DATE_FORMAT will be taken from the instance NLS_DATE_FORMAT, i.e. the one from INIT.ORA. If NLS_DATE_FORMAT is not set in INIT.ORA, the session NLS_DATE_FORMAT will default to DD-MON-YY.
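
As a concrete example of level 2 in the precedence list, a client-side setting can be supplied through environment variables before starting SQL*Plus (a sketch; the date mask is only honored because NLS_LANG is set as well):

```shell
# Client-side NLS settings - the date mask is picked up by the session
# only because NLS_LANG is also set (see the precedence note above).
export NLS_LANG=AMERICAN_AMERICA.UTF8
export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'
# sqlplus user/password@db   -- DATE columns now use the mask above
```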


Nov 1, 2008

Get the SQL using ORA_SQL_TXT via Trigger


Starting with 9i, you can use the ORA_SQL_TXT system event function to get the SQL statement text in a trigger.

SQL> CREATE TABLE T (ID NUMBER, NAME VARCHAR2(30));

SQL> CREATE TABLE T_LOG (EXECUTE_DATE DATE, SQL_STATMENT CLOB);

SQL> CREATE OR REPLACE TRIGGER T
2 BEFORE INSERT OR UPDATE OR DELETE ON T
3 FOR EACH ROW
4 DECLARE
5 V_SQL_OUT ORA_NAME_LIST_T;
6 V_NUM NUMBER;
7 V_SQL_STATMENT VARCHAR2(32767);
8 BEGIN
9 V_NUM := ORA_SQL_TXT(V_SQL_OUT);
10 FOR I IN 1 .. V_NUM LOOP
11 V_SQL_STATMENT := V_SQL_STATMENT || V_SQL_OUT(I);
12 END LOOP;
13 INSERT INTO T_LOG (EXECUTE_DATE, SQL_STATMENT)
14 VALUES (SYSDATE, V_SQL_STATMENT);
15 END;
16 /

SQL> INSERT INTO T VALUES (1, 'A');

SQL> INSERT INTO T (ID) VALUES (2);

SQL> UPDATE T SET NAME = 'B' WHERE ID = 2;

SQL> DELETE T;

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

SQL> SELECT * FROM T_LOG;

EXECUTE_DATE SQL_STATMENT
------------------- -----------------------------------------------
2007-09-13 22:55:35 INSERT INTO T VALUES (1, 'A')
2007-09-13 22:55:46 INSERT INTO T (ID) VALUES (2)
2007-09-13 22:55:54 UPDATE T SET NAME = 'B' WHERE ID = 2
2007-09-13 22:56:00 DELETE T
2007-09-13 22:56:00 DELETE T

After 9.2.0.6, Oracle no longer supports this function in DML triggers; it is supported only for system events.

--If the ORA_SQL_TXT function is used in a DML trigger, it returns NULL and the OUT variable passed in will not be initialized.

From yangtinkun




A Perl script used to download a remote file by FTP.

use Net::FTP;
use File::Basename;

sub FTP_Get_File {
  
    my ($hostname,$remote_file)=@_;
    my $filename = basename($remote_file);
    my $ftp;

    $ftp = Net::FTP->new($hostname, Debug => 0)
        or die "Cannot connect to $hostname: $@";
    #Please modify the login 
    $ftp->login("ora9i",'ora9i') or die "Cannot login ", $ftp->message;
   
    #$ftp->cwd("$remote_file") or die "Cannot change working directory ", $ftp->message;
    $ftp->get($remote_file,$filename)
        or die "get the remote file $remote_file failed ", $ftp->message;
    $ftp->quit;
}
 



How to do a full database backup automatically by RMAN -- shell script show

#!/bin/ksh
# function: backup database(cdmaetl) or archive with rman
# usage: crontab on AIX

#env
export ORACLE_SID=cdmaetl
#export SHELL=/usr/bin/ksh
export ORACLE_HOME=/ora9i/OraHome1
export NLS_LANG=american_america.zhs16gbk
export PATH=$PATH:$ORACLE_HOME/bin
#variable
RMAN=$ORACLE_HOME/bin/rman
SQLPLUS=$ORACLE_HOME/bin/sqlplus
TEE=/usr/bin/tee

LOGFILE=/ora9i/logs/rmanback.log

#dictionary structure
DBDEST=/oradata/backup/rmanbak/${ORACLE_SID}/datafiles
CTLFILE=/oradata/backup/rmanbak/${ORACLE_SID}/ctlfiles/${ORACLE_SID}_ctl_`date +%Y%m%d%H%M`.ctl

#start running RMAN to backup database and redo log, also have a log for those actions
echo "----------------backup start----------------------" > $LOGFILE
date
$RMAN <<EOF | $TEE -a $LOGFILE
connect target

#allocate channel for backup, and only bak database
Run{
allocate channel c1 type disk format '$DBDEST/%U_%s.bak' maxpiecesize = 2000M;
backup database filesperset = 4;
backup archivelog all delete input;

}

#save just only one bak
delete noprompt obsolete redundancy = 1 device type disk;
exit;
EOF

#start bak control file
$SQLPLUS "/ as sysdba" <<EOF | $TEE -a $LOGFILE
alter database backup controlfile to '$CTLFILE';
exit
EOF
date
echo "-----------------backup end-----------------------" >> $LOGFILE
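
To schedule the script from cron, as the header comment intends, an entry along these lines would do (the script and log paths here are hypothetical):

```shell
# m h dom mon dow  command -- weekly full backup, Sunday 02:00
0 2 * * 0 /ora9i/scripts/rman_backup.ksh > /ora9i/logs/rman_cron.log 2>&1
```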

How to enable the Autotrace and Explain Plan functions of Oracle

$cd $ORACLE_HOME/sqlplus/admin
$ls
$sqlplus /nolog
sql> connect sys/passwd as sysdba
sys@ora9i> @plustrce.sql
sys@ora9i> grant plustrace to public;
sys@ora9i> exit
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus /nolog
sql> connect sys/passwd as sysdba
sys@ora9i> @utlxplan.sql
sys@ora9i> exit
 
SQL> set autotrace on
SQL> select * from all_users where rownum<3;
USERNAME                          USER_ID CREATED
------------------------------ ---------- ---------
SYS                                     0 28-SEP-08
SYSTEM                                  5 28-SEP-08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   COUNT (STOPKEY)
   2    1     NESTED LOOPS
   3    2       NESTED LOOPS
   4    3         TABLE ACCESS (FULL) OF 'USER$'
   5    3         TABLE ACCESS (CLUSTER) OF 'TS$'
   6    5           INDEX (UNIQUE SCAN) OF 'I_TS#' (NON-UNIQUE)
   7    2       TABLE ACCESS (CLUSTER) OF 'TS$'
   8    7         INDEX (UNIQUE SCAN) OF 'I_TS#' (NON-UNIQUE)
 

Statistics
----------------------------------------------------------
          7  recursive calls
          0  db block gets
         16  consistent gets
          0  physical reads
          0  redo size
        717  bytes sent via SQL*Net to client
        655  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          2  rows processed
SQL> explain plan for select * from all_objects where rownum<3;
 
         Explained.
SQL> select * from table(dbms_xplan.display);
...