Blogs from Franck Pachot

Never gather WORKLOAD stats on Exadata...


For Exadata, Oracle has introduced an 'EXADATA' mode which sets a high transfer rate (with IOTFRSPEED, as in NOWORKLOAD statistics) and sets an MBRC (as in WORKLOAD statistics). Those values are set rather than gathered because all the SmartScan optimization done at the storage cell level, which makes multiblock reads less expensive, is difficult to measure from the database.
Here I will explain what I stated in a previous blog: direct-path reads are not counted as multiblock reads for the MBRC system statistic. And direct-path reads should be the main I/O path on Exadata, as you probably bought that machine to benefit from SmartScan.
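For reference, here is a minimal sketch of setting system statistics in EXADATA mode instead of gathering WORKLOAD ones ('EXADATA' is an accepted gathering mode of dbms_stats.gather_system_stats in recent versions):

SQL> -- set (not gather) EXADATA-mode system statistics, then check the result
SQL> exec dbms_stats.gather_system_stats('EXADATA');
SQL> select pname,pval1 from sys.aux_stats$ where sname='SYSSTATS_MAIN';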

With direct-path reads

On a test database that has no activity, I'm creating a 1000-block table. My goal is to gather WORKLOAD system statistics during a simple full table scan on that table, and see how SREADTIM, MREADTIM and MBRC are calculated.

SQL> connect demo/demo
Connected.

SQL> drop table DEMO;
Table dropped.

SQL> create table DEMO pctfree 99 as select rpad('x',1000,'x') n from dual connect by level <=1000;
Table created.
Then I run a simple select between the calls to the 'start' and 'stop' procedures of the dbms_stats WORKLOAD system stats gathering.
SQL> exec dbms_stats.gather_system_stats('start');
PL/SQL procedure successfully completed.

SQL> connect demo/demo
Connected.

SQL> select count(*) from DEMO;

  COUNT(*)
----------
      1000

I check the physical read statistics (this is why I have reconnected my session: so I can query v$mystat without computing deltas):
SQL> select name,value from v$mystat join v$statname using(statistic#) where (name like 'phy%' or name like 'cell%') and value>0;

NAME                                                              VALUE
------------------------------------------------------------ ----------
physical read total IO requests                                      22
physical read total multi block requests                              7
physical read total bytes                                       8306688
cell physical IO interconnect bytes                             8306688
physical reads                                                     1000
physical reads direct                                              1000
physical read IO requests                                            15
physical read bytes                                             8192000
cell scans                                                            1
cell blocks processed by cache layer                               1000
cell blocks processed by txn layer                                 1000
cell blocks processed by data layer                                1000
cell physical IO bytes eligible for predicate offload           8192000
cell physical IO interconnect bytes returned by smart scan       130760
cell IO uncompressed bytes                                      8192000
I’ve read 1000 blocks in 15 i/o calls so I'm sure it is multiblock reads. All of them (1000 x 8k) was eligible for SmartScan and those 1000 blocks have been processed by the storage cell.

If you wonder why I have only 7 'physical read total multi block requests', it's because it counts only the 'full' multiblock reads - not those that are limited by an extent boundary. See here for that analysis.

If you wonder why I have only 22 'physical read total IO requests', I don't have the answer. The sql_trace shows only the 15 'direct path read' events. And dbms_stats counts only the 'physical read IO requests'. If you have any idea, please comment.

I stop my WORKLOAD statistics gathering:
SQL> exec dbms_stats.gather_system_stats('stop');
PL/SQL procedure successfully completed.
And check the system statistics that have been set:

SQL> select * from sys.aux_stats$;

SNAME           PNAME           PVAL1
--------------- ---------- ----------
SYSSTATS_INFO   STATUS
SYSSTATS_INFO   DSTART
SYSSTATS_INFO   DSTOP
SYSSTATS_INFO   FLAGS               1
SYSSTATS_MAIN   CPUSPEEDNW       2300
SYSSTATS_MAIN   IOSEEKTIM          10
SYSSTATS_MAIN   IOTFRSPEED       4096
SYSSTATS_MAIN   SREADTIM
SYSSTATS_MAIN   MREADTIM         .151
SYSSTATS_MAIN   CPUSPEED         2300
SYSSTATS_MAIN   MBRC
SYSSTATS_MAIN   MAXTHR
SYSSTATS_MAIN   SLAVETHR
I have no SREADTIM, which is expected as I've done only multiblock reads. I have a MREADTIM. But the MBRC is not set.

With conventional (aka buffered) reads

Let's do the same after disabling serial direct-path reads:

SQL> alter session set "_serial_direct_read"=never;
Session altered.
I do the same as before, but now my session stats show only conventional reads:
NAME                                                              VALUE
------------------------------------------------------------ ----------
physical read total IO requests                                      44
physical read total multi block requests                             28
physical read total bytes                                       8192000
cell physical IO interconnect bytes                             8192000
physical reads                                                     1000
physical reads cache                                               1000
physical read IO requests                                            44
physical read bytes                                             8192000
physical reads cache prefetch                                       956

and here are the gathered stats:
SNAME           PNAME           PVAL1
--------------- ---------- ----------
SYSSTATS_INFO   STATUS
SYSSTATS_INFO   DSTART
SYSSTATS_INFO   DSTOP
SYSSTATS_INFO   FLAGS               1
SYSSTATS_MAIN   CPUSPEEDNW       2300
SYSSTATS_MAIN   IOSEEKTIM          10
SYSSTATS_MAIN   IOTFRSPEED       4096
SYSSTATS_MAIN   SREADTIM
SYSSTATS_MAIN   MREADTIM         .028
SYSSTATS_MAIN   CPUSPEED         2300
SYSSTATS_MAIN   MBRC               23
SYSSTATS_MAIN   MAXTHR
SYSSTATS_MAIN   SLAVETHR

Now the MBRC is set with the gathered value.

This proves that MBRC is gathered only from conventional multiblock reads. Direct-path reads are not accounted for.

Conclusion

If you are on Exadata, you probably want to benefit from SmartScan. Then you probably want the CBO to choose FULL TABLE SCAN, which does direct-path reads for large tables (provided they don't have a lot of updated buffers in the SGA). If you gather WORKLOAD statistics, MBRC will be set without accounting for those direct-path reads, and it will probably be lower than the actual average multiblock read size (which - in direct-path reads - is close to the db_file_multiblock_read_count, default or set value).
This is the reason why Oracle introduced the EXADATA mode: it sets the MBRC from the db_file_multiblock_read_count value. It also sets the IOTFRSPEED to a high value, because a gathered MREADTIM would probably be very low - lower than SREADTIM - thanks to SmartScan. And the CBO ignores MREADTIM when it is less than SREADTIM.

An alternative to EXADATA mode is to set those values as NOWORKLOAD statistics and keep db_file_multiblock_read_count set. You get the same behavior because the CBO uses db_file_multiblock_read_count when it is set and there is no MBRC system statistic. But the danger is that if someone resets db_file_multiblock_read_count (and I often advise to keep defaults), then the CBO will use a value of 8, which will probably increase the cost of full table scans too much.
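If you go that way, a minimal sketch would be the following - the IOTFRSPEED value below is purely illustrative, not a recommendation:

SQL> -- back to NOWORKLOAD statistics (removes MBRC, SREADTIM, MREADTIM, ...)
SQL> exec dbms_stats.delete_system_stats;
SQL> -- set a high transfer rate manually (example value only)
SQL> exec dbms_stats.set_system_stats('IOTFRSPEED',200000);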

All formulas are here with a script that shows what is used by the CBO.

Never say never

Well, that blog post title is too extreme. So I should say:
Never gather WORKLOAD stats on Exadata... except if your workload is not an Exadata-optimized one.
If you are using Exadata for OLTP, then yes, you can gather WORKLOAD statistics as they probably fit OLTP behaviour. But in any case, always check the gathered stats and see if they are relevant.

OracleText: inserts and fragmentation


I plan to write several posts about OracleText indexes, a feature that is not used enough in my opinion. It's available in all editions and can index small text or large documents to search by words. When you create an OracleText index, a few tables are created to store the words and the association between those words and the table rows that contain the documents. I'll start by showing how document inserts are processed.

Create the table and index

I'm creating a simple table with a CLOB

SQL> create table DEMO_CTX_FRAG
     (num number constraint DEMO_CTX_FRAG_PK primary key,txt clob);

Table created.
and a simple OracleText index on that column
SQL> create index DEMO_CTX_INDEX on DEMO_CTX_FRAG(txt)
     indextype is ctxsys.context;

Index created.
That creates the following tables:
  • DR$DEMO_CTX_INDEX$I which stores the tokens (e.g. words)
  • DR$DEMO_CTX_INDEX$K which indexes the documents (docid) and links them to the table ROWID
  • DR$DEMO_CTX_INDEX$R which stores the opposite-way navigation (get the ROWID from a docid)
  • DR$DEMO_CTX_INDEX$N which stores docids for deferred maintenance cleanup.

Inserts

I'm inserting a row with some text in the clob column

SQL> insert into DEMO_CTX_FRAG values (0001,'Hello World');

1 row created.
I commit
SQL> commit;

Commit complete.
And here is what we have in the OracleText tables:
SQL> select * from DR$DEMO_CTX_INDEX$K;
no rows selected

SQL> select * from DR$DEMO_CTX_INDEX$R;
no rows selected

SQL> select * from DR$DEMO_CTX_INDEX$I;
no rows selected

SQL> select * from DR$DEMO_CTX_INDEX$N;
no rows selected
Nothing is stored here yet, which means that we cannot find our newly inserted row with an OracleText search.

By default, all inserts maintain the OracleText tables asynchronously.
The inserted row is referenced in a CTXSYS queuing table that stores the pending inserts:

SQL> select * from CTXSYS.DR$PENDING;

   PND_CID    PND_PID PND_ROWID          PND_TIMES P
---------- ---------- ------------------ --------- -
      1400          0 AAAXUtAAKAAABWlAAA 13-FEB-15 N
and we have a view over it:
SQL> select pnd_index_name,pnd_rowid,pnd_timestamp from ctx_user_pending;

PND_INDEX_NAME                 PND_ROWID          PND_TIMES
------------------------------ ------------------ ---------
DEMO_CTX_INDEX                 AAAXUtAAKAAABWlAAA 13-FEB-15
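Note that this asynchronous behavior is a property of the index: the documented SYNC index parameter can request synchronization at commit (or at regular intervals with SYNC (EVERY ...)). A minimal sketch:

SQL> -- synchronize at every commit instead of manually (more real-time,
SQL> -- but more fragmentation - see below)
SQL> create index DEMO_CTX_INDEX on DEMO_CTX_FRAG(txt)
     indextype is ctxsys.context
     parameters ('sync (on commit)');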

Synchronization

Let's synchronize:

SQL> exec ctx_ddl.sync_index('DEMO_CTX_INDEX');

PL/SQL procedure successfully completed.
The queuing table has been processed:
SQL> select pnd_index_name,pnd_rowid,pnd_timestamp from ctx_user_pending;

no rows selected
and here is how that document is stored in our OracleText tables.

$K records one document (docid=1) and the table rowid that contains it:

SQL> select * from DR$DEMO_CTX_INDEX$K;

     DOCID TEXTKEY
---------- ------------------
         1 AAAXUtAAKAAABWlAAA
The $R table stores the docid -> rowid mapping in a non-relational way:
SQL> select * from DR$DEMO_CTX_INDEX$R;

    ROW_NO DATA
---------- ------------------------------------------------------------
         0 00001752D0002800000569404141
How is it stored? It's an array of fixed-length ROWIDs, so from the docid we can go directly to the offset and get the ROWID. Because DATA is limited to 4000 bytes, there can be several rows, but a docid determines the ROW_NO as well as the offset in DATA.

$I stores the tokens (which are words here, as we have TEXT tokens - type 0) as well as their location information:

SQL> select * from DR$DEMO_CTX_INDEX$I;

TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
---------- ---------- ----------- ---------- ----------- ----------
HELLO               0           1          1           1 008801
WORLD               0           1          1           1 008802
For each word it stores the range of docids that contain the word (token_first and token_last are those docids), and token_info stores, in a binary way, the occurrences of the word within the documents (pairs of docid and offset within the document). It's a BLOB but it is limited to 4000 bytes so that it is stored inline. This means that if a token is present in a lot of documents, several rows in $I will be needed, each covering a different range of docids. This has changed in 12c and we will see that in future blog posts.

Thus, we can have several rows for one token. This is the first cause of fragmentation: searching for documents that contain such a word will have to read several rows of the $I table. The $N table has nothing here because we have synchronized only inserts and there is nothing to clean up.

SQL> select * from DR$DEMO_CTX_INDEX$N;

no rows selected

Several inserts

I will insert two more rows, which also contain the word 'hello'.

SQL> insert into DEMO_CTX_FRAG values (0002,'Hello Moon, hello, hello');

1 row created.

SQL> insert into DEMO_CTX_FRAG values (0003,'Hello Mars');

1 row created.

SQL> commit;

Commit complete.
And I synchronize:
SQL> exec ctx_ddl.sync_index('DEMO_CTX_INDEX');

PL/SQL procedure successfully completed.
So, I now have 3 documents:
SQL> select * from DR$DEMO_CTX_INDEX$K;

     DOCID TEXTKEY
---------- ------------------
         1 AAAXUtAAKAAABWlAAA
         2 AAAXUtAAKAAABWlAAB
         3 AAAXUtAAKAAABWlAAC
The reverse mapping array has increased:
SQL> select * from DR$DEMO_CTX_INDEX$R;

    ROW_NO DATA
---------- ------------------------------------------------------------
         0 00001752D000280000056940414100001752D00028000005694041420000
And now the tokens:
SQL> select * from DR$DEMO_CTX_INDEX$I;

TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
---------- ---------- ----------- ---------- ----------- ----------
HELLO               0           1          1           1 008801
WORLD               0           1          1           1 008802
HELLO               0           2          3           2 0098010201
MARS                0           3          3           1 008802
MOON                0           2          2           1 008802
What is interesting here is that the previous rows (docid 1) have not been updated; new rows have been inserted for docid 2 and 3.
  • 'moon' is only in docid 2
  • 'mars' is only in docid 3
  • 'hello' is in 2 (token_count) documents, from docid 2 to docid 3 (token_first and token_last)

This is the second cause of fragmentation, coming from frequent syncs: each sync adds new rows. However, when multiple documents are processed in the same sync, only one $I entry per token is needed.

There is a third cause of fragmentation. We see here that the token_info is larger for the HELLO entry covering docid 2 to 3, because there are several occurrences of the token. All of that must fit in memory when we synchronize. So it's good to synchronize when we have several documents (so that common tokens are not too fragmented), but we also need enough memory. The default is 12M, which is usually too small. It can be increased with the 'index memory' parameter of the index. And there is also a maximum, set by ctx_adm.set_parameter, for which the default (50M) is also probably too low.
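A sketch of both knobs, with purely illustrative values:

SQL> -- raise the systemwide cap (requires CTXSYS privileges)
SQL> exec ctx_adm.set_parameter('max_index_memory','500M');
SQL> -- use more memory for one synchronization (second argument of sync_index)
SQL> exec ctx_ddl.sync_index('DEMO_CTX_INDEX','200M');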

Nothing yet in the $N table that we will see in the next post:

SQL> select * from DR$DEMO_CTX_INDEX$N;

no rows selected

Summary

The important points here are that inserted documents are visible only after synchronization, and that synchronizing too frequently causes fragmentation. If you need to synchronize in real time (on commit) and you commit for each document inserted, then you will probably have to plan frequent index optimization. If, on the other hand, we are able to synchronize only when we have inserted a lot of documents, then fragmentation is reduced, provided that we have enough memory to process all documents in one pass.

The next posts will be about deletes and updates.

OracleText: deletes and garbage


In the previous post we have seen how the OracleText index tables are maintained when new documents arrive: at sync, the new documents are read (up to the available memory) and words are inserted in the $I table with their mapping information. Now we will see how removed documents are processed. We will not cover updates, as they are just delete + insert.

Previous state

Here is the state from the previous post where I had those 3 documents:

  • 'Hello World'
which was synced alone, and then the two following ones were synced together:
  • 'Hello Moon, hello, hello'
  • 'Hello Mars'
The $K is an IOT which maps the OracleText table ROWID to the DOCID (the fact that the primary key TEXTKEY is not at the start is a bit misleading):
SQL> select * from DR$DEMO_CTX_INDEX$K;

     DOCID TEXTKEY
---------- ------------------
         1 AAAXUtAAKAAABWlAAA
         2 AAAXUtAAKAAABWlAAB
         3 AAAXUtAAKAAABWlAAC
The $R is a table for the opposite navigation (docid to rowid), storing a fixed-length array of ROWIDs indexed by docid and split into several rows:
SQL> select * from DR$DEMO_CTX_INDEX$R;

    ROW_NO DATA
---------- ------------------------------------------------------------
         0 00001752D000280000056940414100001752D00028000005694041420000
The $I table stores the tokens, the first 5 columns being indexed ($X), while the TOKEN_INFO BLOB stores the detailed locations of the token:
SQL> select * from DR$DEMO_CTX_INDEX$I;

TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
---------- ---------- ----------- ---------- ----------- ----------
HELLO               0           1          1           1 008801
WORLD               0           1          1           1 008802
HELLO               0           2          3           2 0098010201
MARS                0           3          3           1 008802
MOON                0           2          2           1 008802
We have seen that the $I table can be fragmented for 3 reasons:
  • Each sync inserts its tokens (instead of merging with existing ones)
  • TOKEN_INFO size is limited to fit in-row (we will see 12c new features later)
  • Only tokens that fit in the allocated memory can be merged
And the $N is empty for the moment:
SQL> select * from DR$DEMO_CTX_INDEX$N;

no rows selected

Delete

Do you remember how inserts go to the CTXSYS.DR$PENDING table? Deletes go to the CTXSYS.DR$DELETE table:

SQL> delete from DEMO_CTX_FRAG where num=0002;

1 row deleted.

SQL> select * from CTXSYS.DR$DELETE;

DEL_IDX_ID DEL_IXP_ID  DEL_DOCID
---------- ---------- ----------
      1400          0          2
I've deleted docid=2 but the tokens are still there:
SQL> select * from DR$DEMO_CTX_INDEX$I;

TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
---------- ---------- ----------- ---------- ----------- ----------
HELLO               0           1          1           1 008801
WORLD               0           1          1           1 008802
HELLO               0           2          3           2 0098010201
MARS                0           3          3           1 008802
MOON                0           2          2           1 008802
as well as their mapping to the ROWID:
SQL> -- $R is for rowid - docid mapping (IOT)
SQL> select * from DR$DEMO_CTX_INDEX$R;

    ROW_NO DATA
---------- ------------------------------------------------------------
         0 00001752D000280000056940414100001752D00028000005694041420000
However, the $N has been maintained to know that docid=2 has been removed:
SQL> select * from DR$DEMO_CTX_INDEX$N;

 NLT_DOCID N
---------- -
         2 U
This is the goal of the $N (Negative) table: it records the docids that should not be there and that must be deleted at the next optimization (garbage collection).
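That garbage collection is done by index optimization; a minimal sketch with the documented ctx_ddl API:

SQL> -- FULL optimization removes the garbage of deleted docids and defragments $I rows
SQL> exec ctx_ddl.optimize_index('DEMO_CTX_INDEX','FULL');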

From there, a search by words ('normal lookup') will give docids and rowids, and CTXSYS.DR$DELETE must be read in order to know that the document is not there anymore. It's an IOT, so the docid can be found with an index unique scan.

However, for the opposite way - having a ROWID and checking whether it contains some words ('functional lookup') - we need to know that there is no document. In my case I deleted the row, but you may have updated the document, so the ROWID may still be there. There is no pending table for that: it is maintained immediately in the $K table:

SQL> select * from DR$DEMO_CTX_INDEX$K;

     DOCID TEXTKEY
---------- ------------------
         1 AAAXUtAAKAAABWlAAA
         3 AAAXUtAAKAAABWlAAC
The entry that addressed docid=2 has been deleted.

Commit

All those changes were done within the same transaction. Other sessions still see the old values; no need for them to read CTXSYS.DR$DELETE. What I described above applies only to my session: the normal lookup reading the queuing table, and the functional lookup stopping at $K. We don't have to wait for a sync to process CTXSYS.DR$DELETE. It's done at commit:

SQL> commit;

Commit complete.

SQL> select * from CTXSYS.DR$DELETE;

no rows selected

SQL> select * from DR$DEMO_CTX_INDEX$R;

    ROW_NO DATA
---------- ------------------------------------------------------------
         0 00001752D000280000056940414100000000000000000000000000000000
Of course we can't decode it, but we see that part of it has been zeroed. That $R table is definitely special: it's not stored in a relational way, and its maintenance is deferred to commit time.

But nothing has changed in $I, which contains garbage (and sync does not change anything about that):

SQL> select * from DR$DEMO_CTX_INDEX$I;

TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
---------- ---------- ----------- ---------- ----------- ----------
HELLO               0           1          1           1 008801
WORLD               0           1          1           1 008802
HELLO               0           2          3           2 0098010201
MARS                0           3          3           1 008802
MOON                0           2          2           1 008802
And of course the $N row is still there to record the deleted docid:
SQL> select * from DR$DEMO_CTX_INDEX$N;

 NLT_DOCID N
---------- -
         2 U

Sync

I've not reproduced it here, but sync does not change anything. Sync is for new documents - not for deleted ones.

Conclusion

What you need to remember here is:
  • New documents are made visible through the OracleText index at sync
  • Removed documents are immediately made invisible at commit
Of course, you can sync at commit, but the second thing to remember is that
  • New documents bring fragmentation
  • Removed documents bring garbage
and both of them increase the size of the $I table and its $X index, making range scans less efficient. We will see more about that, but the next post will be about queries: I've talked about normal and functional lookups, and we will see how they are done.

Is CDB stable after one patchset and two PSU?


There has been the announcement that the non-CDB architecture is deprecated, and the reaction that CDB is not yet stable.

Well, let's talk about the major issue I've encountered. Multitenant is there for consolidation. What is the major requirement of consolidation? It's availability. If you put all your databases into one server, managed by one instance, then you don't expect a failure.

When 12c was out (and even earlier, as we were beta testers) - 12.1.0.1 - David Hueber encountered an important issue: when a SYSTEM datafile was lost, we could not recover it without stopping the whole CDB. That's bad of course.

When Patchset 1 was out (and we were beta testers again) I tried to check whether that had been solved. I've seen that they had introduced the undocumented "_enable_pdb_close_abort" parameter in order to allow a shutdown abort of a PDB. But that was worse: when I dropped a SYSTEM datafile the whole CDB instance crashed immediately. I opened an SR and Bug 19001390 'PDB system tablespace media failure causes the whole CDB to crash' was created for that. All is documented in that blog post.

Now the bug status is: fixed in 12.1.0.2.1 (Oct 2014) Database Patch Set Update

Good. I've installed the latest PSU, which is 12.1.0.2.2 (Jan 2015), and I test the most basic recovery situation: loss of a non-system tablespace in one PDB.

Here it is:

 

RMAN> report schema;
Report of database schema for database with db_unique_name CDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB/system01.dbf
3 770 SYSAUX NO /u02/oradata/CDB/sysaux01.dbf
4 270 UNDOTBS1 YES /u02/oradata/CDB/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB/pdbseed/system01.dbf
6 5 USERS NO /u02/oradata/CDB/users01.dbf
7 490 PDB$SEED:SYSAUX NO /u02/oradata/CDB/pdbseed/sysaux01.dbf
11 260 PDB2:SYSTEM NO /u02/oradata/CDB/PDB2/system01.dbf
12 520 PDB2:SYSAUX NO /u02/oradata/CDB/PDB2/sysaux01.dbf
13 5 PDB2:USERS NO /u02/oradata/CDB/PDB2/PDB2_users01.dbf
14 250 PDB1:SYSTEM NO /u02/oradata/CDB/PDB1/system01.dbf
15 520 PDB1:SYSAUX NO /u02/oradata/CDB/PDB1/sysaux01.dbf
16 5 PDB1:USERS NO /u02/oradata/CDB/PDB1/PDB1_users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 60 TEMP 32767 /u02/oradata/CDB/temp01.dbf
2 20 PDB$SEED:TEMP 32767 /u02/oradata/CDB/pdbseed/pdbseed_temp012015-02-06_07-04-28-AM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB/PDB1/temp012015-02-06_07-04-28-AM.dbf
4 20 PDB2:TEMP 32767 /u02/oradata/CDB/PDB2/temp012015-02-06_07-04-28-AM.dbf


RMAN> host "rm -f /u02/oradata/CDB/PDB1/PDB1_users01.dbf";
host command complete


RMAN> alter system checkpoint;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
RMAN-03002: failure of sql statement command at 02/19/2015 22:51:55
ORA-03113: end-of-file on communication channel
Process ID: 19135
Session ID: 357 Serial number: 41977
ORACLE error from target database:
ORA-03114: not connected to ORACLE

 

Ok, but I have the PSU:

 

$ /u01/app/oracle/product/12102EE/OPatch/opatch lspatches
19769480;Database Patch Set Update : 12.1.0.2.2 (19769480)

 

Here is the alert.log:

 

Completed: alter database open
2015-02-19 22:51:46.460000 +01:00
Shared IO Pool defaulting to 20MB. Trying to get it from Buffer Cache for process 19116.
===========================================================
Dumping current patch information
===========================================================
Patch Id: 19769480
Patch Description: Database Patch Set Update : 12.1.0.2.2 (19769480)
Patch Apply Time: 2015-02-19 22:14:05 GMT+01:00
Bugs Fixed: 14643995,16359751,16870214,17835294,18250893,18288842,18354830,
18436647,18456643,18610915,18618122,18674024,18674047,18791688,18845653,
18849537,18885870,18921743,18948177,18952989,18964939,18964978,18967382,
18988834,18990693,19001359,19001390,19016730,19018206,19022470,19024808,
19028800,19044962,19048007,19050649,19052488,19054077,19058490,19065556,
19067244,19068610,19068970,19074147,19075256,19076343,19077215,19124589,
19134173,19143550,19149990,19154375,19155797,19157754,19174430,19174521,
19174942,19176223,19176326,19178851,19180770,19185876,19189317,19189525,
19195895,19197175,19248799,19279273,19280225,19289642,19303936,19304354,
19309466,19329654,19371175,19382851,19390567,19409212,19430401,19434529,
19439759,19440586,19468347,19501299,19518079,19520602,19532017,19561643,
19577410,19597439,19676905,19706965,19708632,19723336,19769480,20074391,
20284155
===========================================================
2015-02-19 22:51:51.113000 +01:00
db_recovery_file_dest_size of 4560 MB is 18.72% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Setting Resource Manager plan SCHEDULER[0x4446]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager CDB plan DEFAULT_MAINTENANCE_PLAN via parameter
2015-02-19 22:51:54.892000 +01:00
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
USER (ospid: 19102): terminating the instance due to error 63999
System state dump requested by (instance=1, osid=19102 (CKPT)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_diag_19090_20150219225154.trc
ORA-1092 : opitsk aborting process
2015-02-19 22:52:00.067000 +01:00
Instance terminated by USER, pid = 19102

 

You can see the bug number in 'Bugs Fixed', and yet the instance still terminates after a media failure on a PDB datafile. That's bad news.

 

I've lost one datafile, and at the first checkpoint the whole CDB crashed. I'll have to open an SR again. But for sure, consolidation through the multitenant architecture is not yet ready for sensitive production.

How to set NLS for SQL Developer


I've been using Oracle SQL Developer 4.1 Early Adopter for a while and I like it. That version comes with a command line (in beta) whose goal is to be fully compatible with sqlplus while running in Java, with a lot more features.

Because it connects with the thin Java driver by default, it doesn't use NLS_LANG. It's Java. It's Unicode. So here is how to set the language and character set with the Java options.
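A minimal sketch of the idea (the file location may vary with your installation; user.language and user.country are standard JVM system properties):

# in sqldeveloper/bin/sqldeveloper.conf (path may vary)
AddVMOption -Duser.language=en
AddVMOption -Duser.country=US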

12c online statistics gathering and startup restrict


I've written about 12c online statistics gathering in a UKOUG OracleScene article. My opinion is clear about it: you still need to gather stale stats afterwards, or you will have missing, stale and inconsistent object statistics. This post is about cases where online statistics gathering does not occur (and which are not documented) - which is another reason why we can't rely on it.

The case where it works

You can check the article about how online statistics gathering works (or come to our 12c new features workshop where we cover and practice all 12c optimizer new features).
In order to do something different here, I'm showing how to trace it by activating the 0x10000 trace flag for dbms_stats:

SQL> connect demo/demo@//localhost/PDB1
Connected.
SQL> set serveroutput on
SQL> exec dbms_stats.set_global_prefs('TRACE',1+65536);
PL/SQL procedure successfully completed.

SQL> drop table DEMO;
Table dropped.

SQL> create table DEMO ( n number ) pctfree 99;
Table created.

SQL> insert /*+ append */ into DEMO select rownum from dual connect by 1000>=level;
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 1         1         1         to_char(count("N"))                        100
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 2         1         1         substrb(dump(min("N"),16,0,64),1,240)      9
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 3         1         1         substrb(dump(max("N"),16,0,64),1,240)      17
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 1         1         1         to_char(count("N"))                        100
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 2         1         1         substrb(dump(min("N"),16,0,64),1,240)      9
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 3         1         1         substrb(dump(max("N"),16,0,64),1,240)      17
DBMS_STATS: postprocess online optimizer stats gathering for DEMO.DEMO: save statis
DBMS_STATS: RAWIDX    SELMAPPOS RES                            NNV       NDV       
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 1         1          1000      1000      0         2891      1000
DBMS_STATS: RAWIDX    SELMAPPOS RES                            NNV       NDV       
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 2         2         Typ=2 Len=2: c1,2              NULL      NULL      
DBMS_STATS: RAWIDX    SELMAPPOS RES                            NNV       NDV       
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 3         3         Typ=2 Len=2: c2,b              NULL      NULL      
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 1         1         1         to_char(count("N"))                        100
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 2         1         1         substrb(dump(min("N"),16,0,64),1,240)      9
DBMS_STATS: SELMAPPOS CLISTIDX  INTCOL    SELITEM                                    GATHFLG
DBMS_STATS: ------------------------------------------------------------------------
DBMS_STATS: 3         1         1         substrb(dump(max("N"),16,0,64),1,240)      17

1000 rows created.

From the trace, online statistics gathering occurred for that direct-path load.
We can see it also in the execution plan:

SQL> select * from table(dbms_xplan.display_cursor('1k2r9n41c7xba'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
SQL_ID  1k2r9n41c7xba, child number 0
-------------------------------------
insert /*+ append */ into DEMO select rownum from dual connect by 1000>=level

Plan hash value: 1600317434

---------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |      |       |     2 (100)|          |
|   1 |  LOAD AS SELECT                  |      |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |      |     1 |     2   (0)| 00:00:01 |
|   3 |    COUNT                         |      |       |            |          |
|   4 |     CONNECT BY WITHOUT FILTERING |      |       |            |          |
|   5 |      FAST DUAL                   |      |     1 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------

and statistics are there:

SQL> select last_analyzed,num_rows,blocks from user_tables where table_name='DEMO';

LAST_ANAL   NUM_ROWS     BLOCKS
--------- ---------- ----------
21-FEB-15       1000        179

Don't forget to set the trace off:

SQL> exec dbms_stats.set_global_prefs('TRACE',0);
PL/SQL procedure successfully completed.

Ok. That is the known case. Table statistics are there.

 

startup restrict

When you want to do some online maintenance, being sure that the application is not connected, you start the database in restricted mode.

SQL> alter system enable restricted session;
System altered.

Then you can do your imports, reorgs, bulk loads, etc. and be sure that nobody will write to or read from the table you are working on. Imagine you have tested the previous load and observed that the online gathered statistics are sufficient. Now you run the same in production in restricted mode.

SQL> connect demo/demo@//localhost/PDB1
Connected.
SQL> set serveroutput on
SQL> exec dbms_stats.set_global_prefs('TRACE',1+65536);
PL/SQL procedure successfully completed.

SQL> drop table DEMO;
Table dropped.

SQL> create table DEMO ( n number ) pctfree 99;
Table created.

SQL> insert /*+ append */ into DEMO select rownum from dual connect by 1000>=level;
1000 rows created.

No trace related to online statistics gathering.

SQL> select * from table(dbms_xplan.display_cursor('1k2r9n41c7xba'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  1k2r9n41c7xba, child number 0
-------------------------------------
insert /*+ append */ into DEMO select rownum from dual connect by 1000>=level

Plan hash value: 1600317434

-------------------------------------------------------------------------------
| Id  | Operation                      | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |      |       |     2 (100)|          |
|   1 |  LOAD AS SELECT                |      |       |            |          |
|   2 |   COUNT                        |      |       |            |          |
|   3 |    CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   4 |     FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

no STATISTICS GATHERING operation.

SQL> select last_analyzed,num_rows,blocks from user_tables where table_name='DEMO';

LAST_ANAL   NUM_ROWS     BLOCKS
--------- ---------- ----------

and no statistics.

 

10053 trace

Because we can't see the STATISTICS GATHERING operation in the execution plan, I know that it's an optimizer decision made at compilation time. I've dumped the 10053 trace and got the following lines:

ONLINEST: Checking validity of online stats gathering
ONLINEST: Failed validity check: database not open, in restricted/migrate mode, suspended, readonly, instance not open or OCI not available.

So we have a few cases where online statistics gathering does not occur and which are not documented in the Restrictions for Online Statistics Gathering for Bulk Loads - and restricted mode is one of them.

 

Thin JDBC

Because the preceding line mentions OCI, I wanted to be sure that online statistics gathering occurs even when connected through thin JDBC, so I used the sqlcl beta from SQL Developer 4.1 Early Adopter. Note that I'm not in restricted session anymore.

sql.bat demo/demo@//192.168.78.113/pdb1

SQLcl: Release 4.1.0 Beta on Sat Feb 21 21:10:59 2015

Copyright (c) 1982, 2015, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> show jdbc
-- Database Info --
Database Product Name: Oracle
Database Product Version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Database Major Version: 12
Database Minor Version: 1
-- Driver Info --
Driver Name: Oracle JDBC driver
Driver Version: 12.1.0.2.0
Driver Major Version: 12
Driver Minor Version: 1
Driver URL: jdbc:oracle:thin:@//192.168.78.113/pdb1

SQL> create table DEMO ( n number ) pctfree 99;

Table DEMO created.

SQL> insert /*+ append */ into DEMO select rownum from dual connect by 1000>=level;

1000 rows created.

SQL> select last_analyzed,num_rows,blocks from user_tables where table_name='DEMO';

LAST_ANALYZED                 NUM_ROWS     BLOCKS
--------------------------- ---------- ----------
21.02.15                          1000        100


Ok, no problem. I don't know what that 'OCI not available' case is, but it works even through JDBC Thin.

 

Conclusion

As I already said for other reasons, don't rely on online statistics gathering and always gather stale stats afterwards. It's good to have it, as it saves some work for dbms_stats later. There are cases where it is better than no statistics at all (when combined with GTT private statistics for example), but don't rely on it.
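A minimal sketch of that follow-up gathering with the standard dbms_stats API:

SQL> -- gather statistics only for objects with missing or stale stats after the load
SQL> exec dbms_stats.gather_schema_stats(ownname=>'DEMO', options=>'GATHER AUTO');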

Query the Enterprise Manager collected metrics


Enterprise Manager (Cloud Control for example) gathers a lot of metrics. You can display them from the GUI, but you can also query the SYSMAN views directly. Today, I wanted to get the history of free space in an ASM disk group for the previous week. Here is how I got it.
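A hedged sketch of the kind of query involved - the target, metric_name and metric_column values below are assumptions to check against your own repository (for example by browsing the distinct values in the view):

select rollup_timestamp, minimum, average, maximum
from sysman.mgmt$metric_daily
where target_name   = '+ASM_myhost'        -- hypothetical target name
  and metric_name   = 'DiskGroup_Usage'    -- assumption: check your repository
  and metric_column = 'usable_free_mb'     -- assumption: check your repository
  and rollup_timestamp >= sysdate - 7
order by rollup_timestamp;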

Generic query for multicriteria search - part I: USE_CONCAT (OR Expansion)


You have a multicriteria search screen on the EMPLOYEE table where you can enter an employee id, a department id, a manager id or a job id. Either you put the value you want to filter on, or you leave it null when you don't want to filter on it. How will you code that? You can build the query on the fly with dynamic SQL or use a generic query like this one:

       SELECT *
       FROM employees
       WHERE (job_id = NVL(:job_id, job_id))
       AND (department_id = NVL(:department_id, department_id))
       AND (manager_id = NVL(:manager_id, manager_id))
       AND (employee_id = NVL(:employee_id, employee_id))
This is good for code maintainability, but a one-size-fits-all query will not be optimal for every case. Markus Winand (every database developer should read his book) describes the danger of that on his website: Use The Index, Luke
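Part I is about the USE_CONCAT hint, which asks the optimizer for OR expansion: rewriting the generic statement into a concatenation of simpler branches, each able to use its own index. A minimal sketch on the query above:

       SELECT /*+ USE_CONCAT */ *
       FROM employees
       WHERE (job_id = NVL(:job_id, job_id))
       AND (department_id = NVL(:department_id, department_id))
       AND (manager_id = NVL(:manager_id, manager_id))
       AND (employee_id = NVL(:employee_id, employee_id));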


RAC Attack! next month 12c in Las Vegas


 


 

RAC is the most complex installation you can have for an Oracle database. A RAC DBA is involved not only with the database, but with storage, network, and system as well. It also involves the application, in order to be sure that the application service can follow the database service high availability. And it brings every database skill to the highest level: a small contention on a single-instance database can become a big bottleneck in RAC.

But RAC is also fascinating. It's the highest service availability: when correctly configured, you can stop a node without any impact on your users. It's the highest scalability: you are not limited by the number of CPUs or the amount of memory that you can put in a single server - just add a node. RAC is not new: Oracle 6 was already able to open the same database from several instances. It was called Parallel Server.

Do you think it's impossible to learn and practice that kind of infrastructure when you don't already have one in your data center? No. You can install and practice RAC on your laptop. This is what RAC Attack! is about: at various events, experienced RAC Attack volunteers (ninjas) will help you address any related issues and guide you through the setup process, and you will have a RAC on your laptop. Next month in Las Vegas is the IOUG event: COLLABORATE15. I'll be there as a speaker and I'm also very happy to help as a RAC Attack! Ninja.

Here you can find all information about it:

http://collaborate.ioug.org/precon#rac

Hope to see you there.  

 

 

Generic query for multicriteria search - part II: BIND_AWARE (Adaptive Cursor Sharing)


In the previous post I explained the performance issue encountered when using a generic query to deal with optional search criteria on multiple columns. The statement was shared by all executions, was marked as bind sensitive, but never became bind aware. Let's use the BIND_AWARE hint.
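A minimal sketch, reusing the query from part I - the BIND_AWARE hint marks the cursor bind-aware from its first execution, instead of waiting for adaptive cursor sharing to detect the bind sensitivity over several executions:

       SELECT /*+ BIND_AWARE */ *
       FROM employees
       WHERE (job_id = NVL(:job_id, job_id))
       AND (department_id = NVL(:department_id, department_id))
       AND (manager_id = NVL(:manager_id, manager_id))
       AND (employee_id = NVL(:employee_id, employee_id));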

Oracle compression, availability and licensing


Various methods of table compression have been introduced with each release. Some require a specific storage system. Some require specific options. Some are only for static data. And it's not always very clear, for the simple reason that their names have changed.

Names change for technical reasons (the ROW/COLUMN STORE precision when columnar compression was introduced) or for marketing reasons (COMPRESS FOR OLTP gave the idea that other - Exadata - compression levels may not be suited for OLTP).

Of course that brings a lot of ambiguity such as:

  • HCC is called 'COLUMN STORE' even if it has nothing to do with the In-Memory columns store
  • COMPRESS ADVANCED is only one part of Advanced Compression Option
  • EHCC (Exadata Hybrid Columnar Compression) is not only for Exadata
  • COMPRESS FOR OLTP is not called like that anymore, but is still the only compression suitable for OLTP
  • HCC Row-Level Locking is not for ROW STORE but for COLUMN STORE. It's suited for DML operations but is different from FOR OLTP. Anyway, COLUMN STORE compression can be transformed to ROW STORE compression during updates. And that locking feature is licensed with the Advanced Compression Option, and available on Exadata only...
  • When do you need ACO (Advanced Compression Option) or not?

Let's make it clear here.

Index on trunc(date) - do you still need old index?


Sometimes we have to index on ( trunc(date) ) because a SQL statement uses a predicate on it instead of giving a range from midnight to midnight. When you do that, you probably keep the index on the column as well. That's two indexes to maintain for DML. Do we need both?
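The situation in question, as a sketch with hypothetical names:

SQL> -- the existing index on the plain column
SQL> create index DEMO_DATE on DEMO(prod_date);
SQL> -- the function-based index needed by predicates like trunc(prod_date) = :day
SQL> create index DEMO_DATE_TRUNC on DEMO(trunc(prod_date));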

12c: shutdown abort a PDB?


Can we shutdown abort a PDB? Let's try:

SQL> show con_id
CON_ID
------------------------------
3
SQL> shutdown abort;
Pluggable Database closed.


But is it really a shutdown abort?

Standard Edition on Oracle Database Appliance


The Oracle Database Appliance is really interesting for small enterprises. It's very good hardware for a very good price, with capacity-on-demand licensing for Enterprise Edition. But small companies usually go to Standard Edition for cost reasons.

So does it make sense to propose only Enterprise Edition to the small companies that are interested in ODA?

Index on SUBSTR(string,1,n) - do you still need old index?


In a previous post I've shown that from 12.1.0.2, when you have an index on trunc(date) you don't need an additional index. If you need the column with full precision, then you can add it to the index on trunc(). A comment from Rainer Stenzel asked if that optimization is available for other functions. And Mohamed Houri has linked to his post where he shows that it's the same with trunc() on a number.

Besides that, there is the same kind of optimization with SUBSTR(string,1,n) so here is the demo, with a little warning at the end.

I start with the same testcase as the previous post.

SQL> create table DEMO as select prod_id,prod_name,prod_eff_from +rownum/0.3 prod_date from sh.products,(select * from dual connect by 1000>=level);
Table created.

SQL> create index PROD_NAME on DEMO(prod_name);
Index created.

SQL> create index PROD_DATE on DEMO(prod_date);
Index created.

string>Z

I've an index on the PROD_NAME and I can use it with equality or inequality predicates:

SQL> set autotrace on explain
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 72593368

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_NAME">'Z')

And I also can use it with a LIKE when there is no starting joker:
SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 72593368

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_NAME" LIKE 'Z%')
       filter("PROD_NAME" LIKE 'Z%')

That optimization has been available for several releases (9.2 if I remember well, but I didn't check).

substr(string,1,n)

But sometimes, when we want to check whether a column starts with a string, the application uses SUBSTR instead of LIKE:

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1665545956

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX FULL SCAN  | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUBSTR("PROD_NAME",1,1)='Z')

But - as you see - there is no access predicate here. The whole index has to be read.

Of course, I can use a function based index for that:

SQL> create index PROD_NAME_SUBSTR on DEMO( substr(prod_name,1,1) );
Index created.

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 4209586087

-------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |     1 |    31 |
|   1 |  HASH UNIQUE                 |                  |     1 |    31 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    31 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

One index only?

Then, as in the previous post about TRUNC, I'll check if that new index is sufficient. Let's drop the first one.

SQL> drop index PROD_NAME;
Index dropped.
The previous index is dropped. Let's see if the index on SUBSTR can be used with an equality predicate:
SQL> select distinct prod_name from DEMO where prod_name = 'Zero';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 953445334

-------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT          |                  |     1 |    27 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    27 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME"='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

Good. The index on the substring is used for an index range scan on the prefix, and then the filter occurs on the result. This is fine as long as the prefix is selective enough.

It is also available with inequality:
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

...

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME">'Z')
   3 - access(SUBSTR("PROD_NAME",1,1)>='Z')

And we can use it even when using a substring with a different number of characters:
SQL> select distinct prod_name from DEMO where substr(prod_name,1,4) = 'Zero';
no rows selected

...

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUBSTR("PROD_NAME",1,4)='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

However, if we use the LIKE syntax:

SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 51067428

---------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    27 |
|   1 |  HASH UNIQUE       |      |     1 |    27 |
|*  2 |   TABLE ACCESS FULL| DEMO |     1 |    27 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME" LIKE 'Z%')

The LIKE syntax does not allow filtering from the index on SUBSTR. So there are cases where we have to keep both indexes: the index on the full column for LIKE predicates, and the index on the substring for SUBSTR predicates.

Note that indexes on SUBSTR are mandatory when you have columns larger than your block size, which is probably the case if you allow extended data types (VARCHAR2 up to 32k).
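For illustration, a hypothetical sketch of that case - with extended data types, the full column can exceed the maximum b*tree index key size (about 75% of the block size), so only a prefix can be indexed:

SQL> -- requires max_string_size=extended
SQL> create table DEMO_EXT ( txt varchar2(32767) );
SQL> -- indexing the full column can exceed the maximum index key size,
SQL> -- so we index a prefix instead
SQL> create index DEMO_EXT_PREFIX on DEMO_EXT( substr(txt,1,100) );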


Oracle tuning silver bullet: add an order by to make your query faster


You have read all of Cary Millsap's work about Oracle database performance tuning. You know that there are no silver bullets. Reducing the response time requires a methodical approach in order to analyze the response time, with the goal of eliminating all unnecessary work.

But I'll show something completely opposite here: a performance tuning silver bullet. Do more work in order to run faster: just add an ORDER BY to your query and it's faster.

DataGuard wait events have changed in 12c


There are several new features in 12c about Data Guard: cascaded standby, far sync instance. But there are also some architecture changes: new processes and new wait events.

Dbvisit replicate REDO_READ_METHOD


A frequent question about replication concerns the overhead on the source, because in a lot of cases the source is production. Dbvisit replicate comes with the possibility to do the minimum on the source: only the FETCHER process is there to read the redo logs and send them to the MINE process, which can be on another server.

But maybe even that - reading those critical online redo logs - is worrying you. That's not a reason to avoid doing a PoC with production as the source. Let's see how we can use the 'archived logs only' mode.

RAC Attack! was another great success at C15LV


RAC Attack - install a RAC on your own laptop - was a great success at Las Vegas.

The idea is to help people follow the RAC Attack cookbook which is available at:

http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c/Hardware_Requirements

It is a complex configuration and there are always problems to troubleshoot:

  • get VirtualBox to run a 64-bit guest, which might involve some BIOS settings
  • be able to install VirtualBox, and we have people with company laptops where security policies make things difficult
  • network configuration is not simple, and any misconfiguration will make things more difficult later

So it is a very good exercise for troubleshooting.

The organisation was excellent: organisation by Ludovico Caldara, infrastructure by Erik Benner, food sponsored by OTN, and Oracle software made available on USB sticks thanks to Markus Michalewicz. Yes, the RAC Product Manager did the RAC Attack installation.

 It's also a very good networking event where people meet people around the technology, thanks to IOUG Collaborate.

When people manage to get a VM with the OS installed, they can get the red tee-shirt. Look at the timelapse of the full day and you will see more and more red T-shirts: https://www.youtube.com/watch?v=mqlhbR7dYm0

Do you wonder why we are so happy to see people having only the OS installed? Because it's the most difficult part. Creating a cluster on a laptop is not easy: you have to create the VMs, you have to set up networking, DNS, etc.

Once this setup is good, installing Grid Infrastructure and the Database is straightforward with the graphical installer.

Cloning a PDB from a standby database


Great events like IOUG Collaborate are a good way to meet experts we know through blogs, Twitter, etc. Yesterday evening, with nice music in the background, I was talking with Leighton Nelson about cloning PDB databases. Don't miss his session today if you are in Las Vegas. The big problem with PDB cloning is that the source must be read-only. The reason is that it works like transportable tablespaces (except that it can transport the datafiles through a database link, and that it transports SYSTEM as well instead of having to import metadata). There is no redo shipping/apply here, so the datafiles must be consistent.

Obviously, being read-only is a problem when you want to clone from production.

But if you have a standby database, can you open it read-only and clone a pluggable database from there? From what we know, it should be possible, but better to test it.

Here is my source - a single tenant standby database opened in read-only:

SQL> connect sys/oracle@//192.168.78.105/STCDB as sysdba
Connected.
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
STCDB     READ ONLY            PHYSICAL STANDBY

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
STDB1                          MOUNTED

Then from the destination I define a database link to it:

SQL> connect sys/oracle@//192.168.78.113/CDB as sysdba
Connected.
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDB       READ WRITE

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB                            READ WRITE

SQL>
SQL> create database link DBLINK_TO_STCDB connect to system identified by oracle using '//192.168.78.105/STCDB';

Database link created.

and create a pluggable database from it:

SQL> create pluggable database STDB2 from STDB1@DBLINK_TO_STCDB;

Pluggable database created.

SQL> alter pluggable database STDB2 open;

Pluggable database altered.

So yes, this is possible. And you don't need Active Data Guard for that. As long as you can stop the apply for the time it takes to transfer the datafiles, this is a solution for cloning. Of course, just do one clone, and if you need others then you can do them from that first clone. And within the same CDB they can be thin clones if you can use snapshots.
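For example, a snapshot copy clone - a sketch only, as it requires underlying storage that supports snapshots (or CLONEDB):

SQL> create pluggable database STDB3 from STDB2 snapshot copy;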

Ok, it's 5 a.m. here. As usual, the jetlag made me wake up a bit early, so that was a good occasion to test what we discussed yesterday...
