重庆思庄 Oracle / RedHat Certification Study Forum
Subject: Study Notes on 《收获,不止Oracle》
Author: 王硕  Posted: 2015-3-7 23:42
Subject: Study Notes on 《收获,不止Oracle》
Simplifying the storage structure
1. Create a simple stored procedure that inserts the values 1 through 100,000 into table t.
First version
create or replace procedure proc1
as
begin
for i in 1..100000
loop
execute immediate
'insert into t values('||i||')';
commit;
end loop;
end;
/
drop table t purge;
create table t (x int);
alter system flush shared_pool;
set timing on
exec proc1;
select count(*) from t;
First analysis
select t.sql_text,t.sql_id,t.PARSE_CALLS,t.EXECUTIONS
from v$sql t
where sql_text like '%insert into t values%';
The shared pool fills with a large number of similar SQL statements, each with a different sql_id; every statement is parsed separately, for a total of 100,000 parses.
No bind variable was used.
Second version
create or replace procedure proc2
as
begin
for i in 1..100000
loop
execute immediate
'insert into t values (:x)' using i;
commit;
end loop;
end;
/
drop table t purge;
create table t (x int);
alter system flush shared_pool;
set timing on
exec proc2;
select count(*) from t;
Second analysis
With the bind variable :x, the statement hashes to a single value, so one shared cursor is parsed once and then reused.
select t.sql_text,t.sql_id,t.PARSE_CALLS,t.EXECUTIONS
from v$sql t
where sql_text like '%insert into t values%';
execute immediate is a dynamic SQL construct, typically used when the table name or a column name is a variable or an input parameter.
When the table name is unknown at compile time, the statement cannot be written as static SQL; dynamic SQL assembles it from the passed-in table name, and execute immediate runs it.
Here, however, the table name t is known, so a plain insert into t values (i) is enough.
Third version
create or replace procedure proc3
as
begin
for i in 1..100000
loop
insert into t values (i);
commit;
end loop;
end;
/
drop table t purge;
create table t (x int);
alter system flush shared_pool;
set timing on
exec proc3;
select count(*) from t;
Third analysis
select t.sql_text,t.sql_id,t.PARSE_CALLS,t.EXECUTIONS
from v$sql t
where sql_text like '%insert into t values%';
proc3 also uses bind variables (PL/SQL static SQL binds automatically). Moreover, dynamic SQL is parsed at execution time, whereas static SQL is parsed when the procedure is compiled; this is where the speedup comes from.
Fourth version
create or replace procedure proc4
as
begin
for i in 1..100000
loop
insert into t values(i);
end loop;
commit;
end;
/
drop table t purge;
create table t (x int);
set timing on
exec proc4;
select count(*) from t;
Fourth analysis
commit triggers LGWR to write the redo buffer to the redo log files, marks the active transaction in the undo (rollback) segment as inactive, and marks the space holding the corresponding before-images in the undo segment as reusable.
Remember: commit is not the action that writes the data blocks.
Fifth version
drop table t purge;
create table t (x int);
alter system flush shared_pool;
set timing on
insert into t select rownum from dual connect by level<=100000;
commit;
select count(*) from t;
Sixth version
drop table t purge;
alter system flush shared_pool;
set timing on
create table t as select rownum x from dual connect by level<=10000000;
Sixth analysis
create table ... as select bypasses the buffer cache and writes blocks directly to disk.
The drawback of direct-path reads/writes is that subsequent queries always incur physical reads.
But when fast insertion is the first priority, this is the way to load massive amounts of data.
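Besides CTAS, an existing table can also be loaded in direct-path mode with the APPEND hint, which likewise bypasses the buffer cache (a sketch; combine with nologging on the table to also minimize redo):

```sql
-- direct-path insert: writes formatted blocks above the high-water mark,
-- bypassing the buffer cache just like CREATE TABLE ... AS SELECT
alter table t nologging;
insert /*+ append */ into t
select rownum from dual connect by level <= 1000000;
commit;  -- direct-path-loaded blocks are not readable in this session until commit
```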
Seventh version
create table t nologging parallel 16
as select rownum x from dual connect by level<=10000000;
Author: 王硕  Posted: 2015-3-7 23:45
Segment and tablespace commands
2. View the undo settings currently in use
show parameter undo
3. Check how many undo tablespaces the database has
select tablespace_name,status from dba_tablespaces where contents='UNDO';
4. Check the undo tablespaces and find their sizes
select tablespace_name,sum(bytes)/1024/1024
from dba_data_files
where tablespace_name in ('UNDOTBS1','UNDOTBS2')
group by tablespace_name;
5. Switch the active undo tablespace
alter system set undo_tablespace=undotbs2 scope=both;
1. View a user's default tablespace and temporary tablespace
select DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users where username=' ';
2. View another user's temporary tablespace
select default_tablespace, temporary_tablespace
from dba_users
where username='SYSTEM';
3. Switch the SYSTEM user to a specified temporary tablespace
alter user system temporary tablespace TEMP_LJB;
4. Observe how users are distributed across temporary tablespaces
select TEMPORARY_TABLESPACE, COUNT(*)
from dba_users
group by temporary_tablespace;
5. Switch all users to a specified default temporary tablespace
alter database default temporary tablespace temp_ljb;
6. Query temporary tablespace groups
select * from dba_tablespace_groups;
7. Create a temporary tablespace group (by creating three member tablespaces)
create temporary tablespace temp1_1 tempfile 'TMP1_1.DBF' size 100M tablespace group tmp_grp1;
create temporary tablespace temp1_2 tempfile 'TMP1_2.DBF' size 100M tablespace group tmp_grp1;
create temporary tablespace temp1_3 tempfile 'TMP1_3.DBF' size 100M tablespace group tmp_grp1;
8. View the group again; it now has three members
select * from dba_tablespace_groups;
9. Move an existing temporary tablespace into the group
alter tablespace temp_ljb tablespace group tmp_grp1;
10. Assign a user to the temporary tablespace group
alter user LJB temporary tablespace tmp_grp1;
11. View the specified user's temporary tablespace
select temporary_tablespace
from dba_users
where username='LJB';
Author: 王硕  Posted: 2015-3-7 23:48
Table design
delete cannot release space
1. Observe the logical reads before any rows are deleted
drop table t purge;
create table t as select * from dba_objects;
set autotrace on
select count(*) from t; -- logical reads = consistent gets + db block gets
2. Observe that after delete removes every row, the logical reads stay the same
set autotrace off
delete from t;
commit;
set autotrace on
select count(*) from t;
3. After truncate empties the table, the logical reads drop sharply
truncate table t; -- truncate the table
select count(*) from t;
Conclusion:
delete keeps the now-empty blocks below the high-water mark, so later queries (and other DML) still read blocks that contain no data.
truncate lowers the high-water mark and reclaims the empty blocks; it is a DDL operation and cannot take a condition.
However, truncating a partition (alter table t truncate partition fenqu1) achieves the effect of truncate table t where ....
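A minimal sketch of this conditional-truncate effect (table, column, and partition names are illustrative):

```sql
-- a table list-partitioned by status, so stale rows can be dropped wholesale
create table t_part (id number, status varchar2(10))
partition by list (status)
(partition p_active  values ('ACTIVE'),
 partition p_expired values ('EXPIRED'));

-- equivalent in effect to: delete from t_part where status='EXPIRED',
-- but as DDL it resets that partition's high-water mark instantly
alter table t_part truncate partition p_expired;
```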
Author: 王硕  Posted: 2015-3-7 23:49
Redo-related
1. Check how much redo has been generated
select a.name,b.value
from v$statname a, v$mystat b
where a.statistic#=b.statistic#
and a.name='redo size';
Run the query above, perform the DML, then run the query again; the difference between the two values is the redo generated.
2. Preparation: create a view for observing redo
grant all on v_$mystat to abc;
grant all on v_$statname to abc;
create table t as select * from dba_objects; -- work on a copy so the original data is untouched
create or replace view v_redo_size
as
select a.name,b.value
from v$statname a,v$mystat b
where a.statistic#=b.statistic#
and a.name='redo size'; -- the view lets us simply run select * from v_redo_size later
3. Observe how much redo a delete generates
select * from v_redo_size;
delete from t;
select * from v_redo_size; -- shows the byte count
4. Observe how much redo an insert generates
insert into t select * from dba_objects;
select * from v_redo_size;
5. Observe how much redo an update generates
update t set object_id=rownum;
select * from v_redo_size;
6. Conclusion
delete generates the most undo, and undo itself is protected by redo. Although the redo for the delete itself is modest, the redo protecting its undo is large, so delete generates the most redo overall.
Author: 王硕  Posted: 2015-3-7 23:51
When a table grows too large, retrieval slows down
1. Create session-based and transaction-based global temporary tables
drop table t_tmp_session purge;
drop table t_tmp_transaction purge;
create global temporary table T_TMP_session
on commit preserve rows -- session-based global temporary table
as
select * from dba_objects
where 1=2;
select table_name,temporary,duration from user_tables
where table_name='T_TMP_SESSION';
create global temporary table t_tmp_transaction
on commit delete rows
as
select * from dba_objects
where 1=2;
2. Observe the redo generated by each kind of DML against the two global temporary tables
insert:
select * from v_redo_size;
insert into t_tmp_transaction select * from dba_objects;
select * from v_redo_size;
Subtract the two values.
Likewise:
select * from v_redo_size;
insert into t_tmp_session select * from dba_objects;
select * from v_redo_size;
update:
select * from v_redo_size;
update t_tmp_transaction set object_id=rownum;
select * from v_redo_size;
Likewise:
select * from v_redo_size;
update t_tmp_session set object_id=rownum;
select * from v_redo_size;
delete:
select * from v_redo_size;
delete from t_tmp_transaction;
select * from v_redo_size;
3. Compare the redo generated by a global temporary table and an ordinary table
drop table t purge;
create table t as select * from dba_objects where 1=2;
select * from v_redo_size;
insert into t select * from dba_objects;
select * from v_redo_size;
update t set object_id=rownum;
select * from v_redo_size;
delete from t;
select * from v_redo_size;
Conclusion:
Whether inserting, updating, or deleting, an ordinary table generates more redo than a global temporary table.
The two key properties of global temporary tables:
Efficient deletion: in a transaction-based global temporary table, rows are removed automatically on commit or when the session exits;
in a session-based global temporary table, rows are removed automatically when the session disconnects.
Session independence: different sessions querying the same global temporary table each see only their own data.
4. Efficient deletion in a transaction-based global temporary table
select count(*) from t_tmp_transaction;
select * from v_redo_size;
insert into t_tmp_transaction select * from dba_objects;
select * from v_redo_size;
commit;
select * from v_redo_size; -- the redo from clearing the global temporary table via commit is only that of the commit itself
select count(*) from t_tmp_transaction; -- 55721 rows were inserted into the transaction-based table; after commit it is empty again
5. A session-based global temporary table is not cleared by commit
select * from v_redo_size;
insert into t_tmp_session select * from dba_objects;
select * from v_redo_size;
commit;
select count(*) from t_tmp_session;
select * from v_redo_size;
6. After the session exits and reconnects, the session-based global temporary table is empty
exit
sqlplus
select count(*) from t_tmp_session;
7. Observe the session independence of the global temporary table: first session
select * from v$mystat where rownum=1;
insert into t_tmp_session select * from dba_objects;
commit;
select count(*) from t_tmp_session;
Open another session and connect; the rows inserted above are not visible there:
select * from v$mystat where rownum=1;
select count(*) from t_tmp_session;
insert into t_tmp_session select * from dba_objects where rownum=1;
commit;
select count(*) from t_tmp_session;
Author: 王硕  Posted: 2015-3-7 23:52
Cluster tables
1. Cluster tables (sorted hash cluster)
With an ordinary table, the sort behind an order by is unavoidable.
Design a sorted hash cluster instead:
drop table cust_orders;
drop cluster shc;
create cluster shc
(
cust_id number,
order_dt timestamp sort
)
hashkeys 10000
hash is cust_id
size 8192; -- create the cluster
create table cust_orders
( cust_id number,
order_dt timestamp sort,
order_number number,
username varchar2(30),
slip_addr number,
bill_addr number,
invoice_num number
)
cluster shc (cust_id,order_dt); -- create the table in the cluster
Analysis:
set autotrace traceonly explain
variable x number
select cust_id,order_dt,order_number
from cust_orders
where cust_id=:x
order by order_dt;
Like index-organized tables, cluster tables have a special structure and updates against them are very expensive; use them with caution.
Author: 王硕  Posted: 2015-3-7 23:52
Partitioned tables
Partition types: range, list, hash, and composite
1. Range partitioning
drop table range_part_tab purge;
create table range_part_tab (id number,deal_date date,area_code number,contents varchar2(4000))
partition by range (deal_date)
(
partition p1 values less than (TO_DATE('2012-02-01','YYYY-MM-DD')),
partition p2 values less than (TO_DATE('2012-03-01','YYYY-MM-DD')),
partition p3 values less than (TO_DATE('2012-04-01','YYYY-MM-DD')),
partition p4 values less than (TO_DATE('2012-05-01','YYYY-MM-DD')),
partition p5 values less than (TO_DATE('2012-06-01','YYYY-MM-DD')),
partition p6 values less than (TO_DATE('2012-07-01','YYYY-MM-DD')),
partition p7 values less than (TO_DATE('2012-08-01','YYYY-MM-DD')),
partition p8 values less than (TO_DATE('2012-09-01','YYYY-MM-DD')),
partition p9 values less than (TO_DATE('2012-10-01','YYYY-MM-DD')),
partition p10 values less than (TO_DATE('2012-11-01','YYYY-MM-DD')),
partition p11 values less than (TO_DATE('2012-12-01','YYYY-MM-DD')),
partition p12 values less than (TO_DATE('2013-01-01','YYYY-MM-DD')),
partition p_max values less than (maxvalue) -- rows beyond all defined ranges land here, avoiding insert errors; 13 partitions in total
)
;
insert into range_part_tab (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-365,'J')+TRUNC(DBMS_RANDOM.VALUE(0,365)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum<=100000; -- insert rows with random dates spanning one year and random Fujian area codes (591 to 599)
commit;
2. List partitioning
drop table list_part_tab purge;
create table list_part_tab (id number,deal_date date,area_code number,contents varchar2(4000))
partition by list(area_code) -- list partitioning
(
partition p_591 values (591), -- values alone defines the partition; a partition may list several values
partition p_592 values (592),
partition p_593 values (593),
partition p_594 values (594),
partition p_595 values (595),
partition p_596 values (596),
partition p_597 values (597),
partition p_598 values (598),
partition p_599 values (599),
partition p_other values (DEFAULT) -- 10 partitions in total; DEFAULT catches everything else
)
;
insert into list_part_tab (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-365,'J')+TRUNC(DBMS_RANDOM.VALUE(0,365)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum<=100000; -- insert rows with random dates spanning one year and random Fujian area codes (591 to 599)
commit;
3. Hash partitioning
drop table hash_part_tab purge;
create table hash_part_tab (id number,deal_date date,area_code number,contents varchar2(4000))
partition by hash(deal_date) -- hash partitioning
partitions 12 -- number of partitions (ideally a power of 2 for even spread)
;
insert into hash_part_tab(id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-365,'J')+TRUNC(DBMS_RANDOM.VALUE(0,365)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum<=100000;
commit;
With hash partitioning no partition names are given; only the partition count (partitions 12) is specified.
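If you do need control over names or placement, hash partitions can also be listed explicitly (a sketch; the table and tablespace names are illustrative):

```sql
-- explicitly named hash partitions, each assignable to its own tablespace
create table hash_part_tab2 (id number, deal_date date)
partition by hash (deal_date)
(
  partition p1 tablespace users,
  partition p2 tablespace users,
  partition p3 tablespace users,
  partition p4 tablespace users  -- keep the count a power of 2 for even spread
);
```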
4. Composite partitioning
drop table range_list_part_tab purge;
create table range_list_part_tab (id number,deal_date date,area_code number,contents varchar2(4000))
partition by range(deal_date)
subpartition by list (area_code)
subpartition template
(subpartition p_591 values (591),
subpartition p_592 values (592),
subpartition p_593 values (593),
subpartition p_594 values (594),
subpartition p_595 values (595),
subpartition p_596 values (596),
subpartition p_597 values (597),
subpartition p_598 values (598),
subpartition p_599 values (599),
subpartition p_other values (DEFAULT))
(
partition p1 values less than (TO_DATE('2012-02-01','YYYY-MM-DD')),
partition p2 values less than (TO_DATE('2012-03-01','YYYY-MM-DD')),
partition p3 values less than (TO_DATE('2012-04-01','YYYY-MM-DD')),
partition p4 values less than (TO_DATE('2012-05-01','YYYY-MM-DD')),
partition p5 values less than (TO_DATE('2012-06-01','YYYY-MM-DD')),
partition p6 values less than (TO_DATE('2012-07-01','YYYY-MM-DD')),
partition p7 values less than (TO_DATE('2012-08-01','YYYY-MM-DD')),
partition p8 values less than (TO_DATE('2012-09-01','YYYY-MM-DD')),
partition p9 values less than (TO_DATE('2012-10-01','YYYY-MM-DD')),
partition p10 values less than (TO_DATE('2012-11-01','YYYY-MM-DD')),
partition p11 values less than (TO_DATE('2012-12-01','YYYY-MM-DD')),
partition p12 values less than (TO_DATE('2013-01-01','YYYY-MM-DD')),
partition p_max values less than (maxvalue)
)
;
insert into range_list_part_tab(id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-365,'J')+TRUNC(DBMS_RANDOM.VALUE(0,365)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum<=100000;
commit;
5. Compare with inserts into an ordinary table
drop table norm_tab purge;
create table norm_tab (id number,deal_date date,area_code number,contents varchar2(4000));
insert into norm_tab (id,deal_date,area_code,contents)
select rownum,
to_date(to_char(sysdate-365,'J')+TRUNC(DBMS_RANDOM.VALUE(0,365)),'J'),
ceil(dbms_random.value(590,599)),
rpad('*',400,'*')
from dual
connect by rownum<=100000;
commit;
set linesize 666
set pagesize 5000
column segment_name format a20
column partition_name format a20
column segment_type format a20
select segment_name,
partition_name,
segment_type,
bytes/1024/1024,
tablespace_name
from user_segments
where segment_name in('RANGE_PART_TAB','NORM_TAB');
Conclusion:
A partitioned table creates multiple segments (one per partition); an ordinary table creates only one.
Partitions can be placed in different tablespaces.
6. Observe segment allocation for the hash-partitioned table
set linesize 666
set pagesize 5000
column segment_name format a20
column partition_name format a20
column segment_type format a20
select segment_name,
partition_name,
segment_type,
bytes/1024/1024,
tablespace_name
from user_segments
where segment_name in('HASH_PART_TAB');
With hash partitioning you cannot steer specific data into a specific partition, which is unfriendly to indexing.
The hash algorithm spreads rows evenly across the partitions, avoiding hotspots; it supports exact (equality) matches but not range scans.
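This shows up directly in the execution plans: an equality predicate on the hash key prunes to a single partition, while a range predicate must touch all of them (a sketch against the hash_part_tab built above):

```sql
set autotrace traceonly explain
-- equality on the hash key: Pstart = Pstop, a single partition is scanned
select * from hash_part_tab
where deal_date = to_date('2012-09-04','yyyy-mm-dd');

-- range on the hash key: Pstart=1, Pstop=12, all partitions are scanned
select * from hash_part_tab
where deal_date > to_date('2012-09-04','yyyy-mm-dd');
```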
7. Count the segments created by the composite-partitioned table
select count(*)
from user_segments
where segment_name='RANGE_LIST_PART_TAB';
The composite-partitioned table creates many segments in user_segments, one per subpartition.
Author: 王硕  Posted: 2015-3-7 23:53
Partitioned table features
Efficient partition pruning
1. The performance advantage of partition pruning on a range-partitioned table
Against the partitioned table:
set linesize 1000
set autotrace traceonly
set timing on
select *
from range_part_tab
where deal_date>=TO_DATE('2012-09-04','YYYY-MM-DD')
and deal_date<=TO_DATE('2012-09-07','YYYY-MM-DD');
Against the ordinary table:
select *
from norm_tab
where deal_date>=TO_DATE('2012-09-04','YYYY-MM-DD')
and deal_date<=TO_DATE('2012-09-07','YYYY-MM-DD');
Compare the cost (%CPU) and consistent gets of the two runs.
In the pruned plan, Pstart and Pstop both show the same single partition number, meaning only one partition was scanned.
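If autotrace is unavailable, the same Pstart/Pstop columns can be inspected with EXPLAIN PLAN (a sketch):

```sql
explain plan for
select *
from range_part_tab
where deal_date >= to_date('2012-09-04','yyyy-mm-dd')
  and deal_date <= to_date('2012-09-07','yyyy-mm-dd');

-- the formatted plan includes Pstart/Pstop columns for each step
select * from table(dbms_xplan.display);
```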
Author: 王硕  Posted: 2015-3-7 23:53
Index-organized tables
1. Index-organized tables
Create an index-organized table and an ordinary (heap) table:
drop table heap_addresses purge;
drop table iot_addresses purge;
create table heap_addresses
(empno number(10),
addr_type varchar2(10),
street varchar2(10),
city varchar2(10),
state varchar2(2),
zip number,
primary key (empno)
);
create table iot_addresses
(empno number(10),
addr_type varchar2(10),
street varchar2(10),
city varchar2(10),
state varchar2(10),
zip number,
primary key (empno)
)
organization index;
insert into heap_addresses
select object_id,'WORK','123street','washington','DC',20123
from all_objects;
insert into iot_addresses
select object_id,'WORK','123street','washington','DC',20123
from all_objects;
Query performance of the index-organized table vs the heap table:
set linesize 1000
set autotrace traceonly
select *
from heap_addresses
where empno=22;
select *
from iot_addresses
where empno=22;
Conclusion:
The index-organized table needs 2 logical reads; the heap table needs 3.
After reading the primary-key index, the heap table must visit the table to fetch the non-key columns (TABLE ACCESS BY INDEX ROWID); this extra step is exactly why it performs worse.
The defining feature of an index-organized table is that the table is the index and the index is the table; it suits workloads that read frequently and update rarely.
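One caveat: wide rows bloat the index structure. An IOT can push rarely read trailing columns into a separate overflow segment (a sketch; the table name, INCLUDING column, and tablespace are illustrative):

```sql
-- columns up to and including CITY stay in the index leaf blocks;
-- the remaining columns are stored in the overflow segment
create table iot_addresses2
(empno number(10),
 addr_type varchar2(10),
 street varchar2(10),
 city varchar2(10),
 state varchar2(2),
 zip number,
 primary key (empno)
)
organization index
including city
overflow tablespace users;
```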