Friday, November 19, 2010

Another example to show that using an index, rather than a full table scan, is faster when deleting a large amount of data

The environment is Red Hat 5.3 + Oracle 10.2.0.4 (x64) + Dell PowerEdge 2950 + EMC CLARiiON

Click here for the original test case on a PC server: http://www.dbaglobe.com/2010/11/deleting-large-amount-of-data-using.html
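
The test below repeats that case on server-class hardware: two identical 4-million-row tables are loaded, indexed and analyzed, then the same one-million-row delete is run against each, once letting the optimizer use the primary key index and once forcing a full table scan with a hint. In brief, the two statements being compared are:

delete from large1 where id < 1000000;                   -- optimizer chooses an index range scan on LARGE1_PK
delete /*+ full(t) */ from large2 t where id < 1000000;  -- FULL hint forces a full table scan of LARGE2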


SQL> create table large1
2 (id number,
3 char20 varchar2(20),
4 char180 char(180))
5 tablespace assm;

Table created.

SQL> create table large2
2 (id number,
3 char20 varchar2(20),
4 char180 char(180))
5 tablespace assm;

Table created.

SQL> set timing on
SQL> begin
2 for i in 1..4000000
3 loop
4 insert into large1 values (i,dbms_random.string('X',20),' ');
5 insert into large2 values (i,dbms_random.string('X',20),' ');
6 if mod(i,1000)=0 then
7 commit;
8 end if;
9 end loop;
10 end;
11 /

PL/SQL procedure successfully completed.

Elapsed: 00:08:43.77
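
The loop above inserts row by row and commits every 1,000 rows. As a minimal alternative sketch (not part of the original test), the same data could be generated with one set-based insert per table, for example:

insert /*+ append */ into large1
select level, dbms_random.string('X', 20), ' '
from dual
connect by level <= 4000000;   -- generates the 4 million rows in a single statement
commit;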
SQL> create unique index large1_pk on large1(id);

Index created.

Elapsed: 00:00:43.64
SQL> create unique index large2_pk on large2(id);

Index created.

Elapsed: 00:00:43.66
SQL> create index large1_n1 on large1(char20);

Index created.

Elapsed: 00:00:43.47
SQL> create index large2_n1 on large2(char20);

Index created.

Elapsed: 00:01:00.06
SQL> exec dbms_stats.gather_schema_stats('');

PL/SQL procedure successfully completed.

Elapsed: 00:03:25.59
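
As a quick sanity check (a sketch, not shown in the original session), the gathered statistics can be confirmed from the data dictionary:

select table_name, num_rows, blocks
from user_tables
where table_name like 'LARGE%'
order by table_name;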
SQL> set timing off
SQL> col segsize format 999,999,999,999
SQL> col segment_name for a30
SQL> select segment_name,sum(bytes) segsize from user_extents
2 where segment_name like 'LARGE%'
3 group by segment_name
4 order by segment_name;

SEGMENT_NAME                            SEGSIZE
------------------------------ ----------------
LARGE1                              973,078,528
LARGE1_N1                           149,946,368
LARGE1_PK                            75,497,472
LARGE2                              973,078,528
LARGE2_N1                           149,946,368
LARGE2_PK                            75,497,472

6 rows selected.
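
Both tables and both sets of indexes are exactly the same size (about 928 MB per table), so the two deletes below start from identical segments. As an optional check (not in the original run), the delete predicate can be confirmed to cover roughly a quarter of each table:

select count(*) from large1 where id < 1000000;   -- expect 999999 rows, about 25% of the 4 million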

SQL> explain plan for
2 delete from large1 where id<1000000;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 1040529653

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |           |  1000K|    25M|  2256   (1)| 00:00:28 |
|   1 |  DELETE           | LARGE1    |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| LARGE1_PK |  1000K|    25M|  2256   (1)| 00:00:28 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"<1000000)

14 rows selected.

SQL> explain plan for
2 delete /*+ full(t) */ from large2 t where id<1000000;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2228625945

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |        |   999K|    25M| 25917   (1)| 00:05:12 |
|   1 |  DELETE            | LARGE2 |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| LARGE2 |   999K|    25M| 25917   (1)| 00:05:12 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("ID"<1000000)

14 rows selected.
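
The optimizer already predicts a large gap: the index-driven plan is costed at 2256 (estimated 28 seconds), while the hinted full-scan plan is costed at 25917 (estimated 5 minutes 12 seconds), because the index range scan only has to identify the 1 million qualifying rows through LARGE1_PK, whereas the full scan must read all 4 million rows of LARGE2 to find them. If more plan detail is wanted, the same plans can be displayed with extra sections (a sketch, not part of the original session):

select * from table(dbms_xplan.display('PLAN_TABLE', null, 'ALL'));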

SQL> alter system flush buffer_cache;

System altered.

SQL> set timing on
SQL> delete from large1 where id<1000000;

999999 rows deleted.

Elapsed: 00:03:05.79
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> delete /*+ full(t) */ from large2 t where id<1000000;

999999 rows deleted.

Elapsed: 00:05:15.56
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
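
The run times bear out the plans: deleting the same 999,999 rows took 3 minutes 5 seconds through the index on LARGE1 versus 5 minutes 15 seconds with the forced full scan on LARGE2. To dig further into where the time went, one way (a sketch, not captured in the original session) is to snapshot session statistics around each delete:

select n.name, s.value
from v$statname n
join v$mystat s on s.statistic# = n.statistic#
where n.name in ('physical reads', 'consistent gets', 'db block gets', 'redo size');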
